OS X Time Machine backups to a Debian Linux ZFS volume

Getting this to work required pulling together information from enough other sources that it seemed worth a post of its own.

Several other posts have what now seems to be outdated information about needing a newer netatalk than stable Linux distributions make available. I’m using the Debian netatalk package version 2.2.5-2, from Debian Stretch (v9). I also checked the versions available in Debian testing (2.2.6-1) and Ubuntu 16.04 LTS (2.2.5-1).

First, set up the Linux server:

I’ve installed a pair of 6 TB hard drives:

$ zpool create tank mirror /dev/disk/by-id/ata-WDC_WD60EZRZ-... /dev/disk/by-id/ata-WDC_WD60EZRZ-...

Add a dedicated user:

$ sudo useradd -m macbook
$ sudo passwd macbook

Install netatalk:

$ sudo apt install netatalk libc6-dev avahi-daemon libnss-mdns

Update the hosts line in nsswitch.conf:

$ sudo vim /etc/nsswitch.conf
hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns myhostname

Update afpd.service:

$ sudo vim /etc/avahi/services/afpd.service
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
  <service>
    <type>_device-info._tcp</type>
    <port>0</port>
    <txt-record>model=TimeCapsule</txt-record>
  </service>
</service-group>

Create a location on your ZFS volume to store Time Machine backups, ensure that it is owned by the appropriate user, and establish a quota since Time Machine will ultimately fill whatever storage it has access to:

$ sudo su macbook
$ cd
$ mkdir timemachine
$ sudo zfs create -o mountpoint=/home/macbook/timemachine tank/timemachine
$ sudo zfs set quota=2500G tank/timemachine
$ chown -R macbook:macbook timemachine/
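As a sanity check on the quota: ZFS size suffixes are binary, so 2500G means 2500 GiB. `numfmt` from GNU coreutils converts such suffixed sizes to an exact byte count:

```shell
# ZFS interprets "2500G" as 2500 GiB (G = 2^30 bytes); numfmt from
# GNU coreutils converts the suffixed size to an exact byte count.
quota_bytes=$(numfmt --from=iec 2500G)
echo "$quota_bytes"  # 2684354560000 bytes, i.e. ~2.5 TiB of the ~6 TB mirror
```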

Now configure the folder share where Time Machine will write its backups. I removed the line that automatically shared home directories for all users, as I don’t want to share all users’ home directories.

$ sudo vim /etc/netatalk/AppleVolumes.default
/home/macbook/timemachine "debian Time Machine" options:tm
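For reference, the home-directory line I removed is the per-user share that ships enabled by default in the netatalk 2.x package; in my copy of the file it looked approximately like this (check your own file rather than trusting my memory):

```
~/ "Home Directory"
```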

Restart the avahi and netatalk services:
$ sudo service avahi-daemon restart
$ sudo service netatalk restart

Now, switch to the MacBook:

$ defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

In Time Machine settings, there should now be another option (matching the quoted string configured above in AppleVolumes.default). It is necessary to provide the password that was configured when running passwd above. I also opted to enable Time Machine’s native support to encrypt the backup, which requires its own password. Don’t lose that one!

This worked for me, and the backup started. However, after only around 1 GB had been backed up, I ran into an error. It read 'Backup Failed The backup on "debian" is already in use.' Clicking the “Details” button expanded the dialog to read 'Time Machine couldn't complete the backup to "debian". The backup disk image "/Volumes/Time Machine/Name of my macbook.sparsebundle" is already in use.'

I tried three things at once, and have not yet identified which one(s) were essential to solving the problem:

  • While experimenting I had left the per-user home-directory share enabled in AppleVolumes.default and had mounted one of those home directories in the Finder. That may matter, since the home directory is a parent of the separate share where the Time Machine backups go. I removed the line (present in the config file by default) because I don’t actually want every user’s home directory shared.
  • I avoided switching back and forth between WiFi and wired Ethernet while a backup was running.
  • I adjusted the power management settings to keep everything except the display running while connected to AC power.

Chrome Remote Desktop into a GCE Ubuntu VM

Warning: This post brought to you by the miracle that is in-flight WiFi. As such, it includes an uncomfortably high number of TODOs.

I think it would be awesome to have a one-click virtual (and potentially extremely powerful) workstation (Linux in this case, though other OSes would be nice as well) that offered easy-to-use remote desktop support from a lightweight client machine. I haven’t figured out how to get there in one click, but the steps below did get me to the point of being able to use Chrome Remote Desktop from my Chromebook to connect to an Ubuntu VM running on Google Compute Engine.

There are a lot of ways that this could be improved (e.g., via coding it up so that some or all of these explicit commands can be automated away), but I’ve deemed it coherent enough to write down as-is.

Prepare the client system (a Chromebook)

  1. Install the Chrome Remote Desktop extension
  2. Configure the Secure Shell client as described in a Previous Post
    • Note that the web-based SSH client integrated with cloud.google.com is not sufficient, as we will be forwarding ports to set up a secure VNC connection for interacting with Chrome on the server system. I would love to find a way to avoid the need for this.
  3. Install the Chrome VNC Client

Prepare the server system to be able to create a VNC-based desktop environment and run its own instance of Chrome (which will be the server in our ultimate Chrome Remote Desktop sessions)

  1. Add Google’s apt package signature verification public key
    wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
    sudo sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
  2. Install Chrome, a VNC server, a window manager (using xfce4 here), and the chrome-remote-desktop native Linux package
    sudo apt-get update
    sudo apt-get install google-chrome-stable tightvncserver psmisc xfce4
    # **TODO(jonmccune)**: Use gnome and not xfce4
    sudo apt-get install chrome-remote-desktop xvfb xbase-clients python-psutil    
  3. Open TCP ports 443 and 5222

    See this Stack Overflow post.
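For reference, step 1 above appends a single line to the new apt source file; after both commands, /etc/apt/sources.list.d/google.list should contain exactly:

```
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
```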

Establish a VNC-over-SSH connection to the server (this is intended to be one-time, since in the future we’ll use Chrome Remote Desktop)

  1. SSH to the VM using the Secure Shell extension, but include -L 5900:localhost:5900 in the SSH Arguments: field. The behavior of that argument is explained in another Previous Post.

  2. On the server, start a VNC server that only listens for connections from localhost: vncserver :1 -localhost. This prompts for a password, though we’re relying on SSH for our security. TODO(jonmccune): how to disable the VNC-specific password?

    • On your client device, open the VNC client and connect to localhost:5900. (You may be prompted for your VNC-specific password.)

Continue setting up the server via the VNC connection

  1. Open Google Chrome, and sign in to the Google account that you plan to use for Chrome Remote Desktop.

  2. Install the Chrome Remote Desktop extension
    TODO(jonmccune): Enable sharing of the current machine, choose a PIN, etc.

    1. I ran into an issue where the option to enable sharing the current machine was not visible. Returning to the SSH connection, the following commands allowed me to make progress. I’m not sure if I somehow missed an installer step or if this is some quirk of using a particular Linux distro.
    sudo groupadd chrome-remote-desktop
    sudo usermod -a -G chrome-remote-desktop jonmccune
  3. Now open Chrome Remote Desktop on your client system (the Chromebook). If all has gone well, you’ll see your VM instance listed there. Victory!
    TODO(jonmccune): Will Chrome Remote Desktop auto-start when the VM is rebooted?

Using a dedicated Google account

My motivation for setting all of this up is in service of collaboration with others, and so I didn’t want to put my personal Google account credentials in the VM.

The steps above don’t make any mention of using multiple Google accounts, but in practice I’ve set this up using a shared Google account that is dedicated to the purpose of the collaboration. The most convenient approach for me turned out to be to add the collaborative Google account as another user on my Chromebook and take advantage of the multi-user sign-in abilities of ChromeOS. Beware that your collaborators may be able to cause the installation of arbitrary Chrome extensions, but that rathole goes down a long way and is a topic for another day.

Setting up SSH Keys with the Chrome OS Secure Shell Extension

For this exercise the client system is a Chromebook, and the server system is an Ubuntu VM running on Google Compute Engine.

The SSH client of choice on Chrome OS devices is Secure Shell. Per its own documentation, it is possible to use public key-based authentication with the Secure Shell client; however, Secure Shell cannot generate its own keys. My goal here is to SSH into a Google Compute Engine VM running Ubuntu Linux, so I generated the keypair on the target Linux VM using the browser-based SSH client offered by https://console.cloud.google.com/, and then imported it into Secure Shell on my Chromebook. This approach is appealing because it avoids the need to configure passwords for SSH altogether.

Security note: generating the keypair on the same machine that the keypair will authorize access to is reasonable: if an attacker already has a foothold on that system, you have already lost. However, once the keypair is imported into Secure Shell on your client device, it becomes convenient to reuse that key for access to other systems. Consider how much you trust the VM image where ssh-keygen executes before deciding whether to use the same keypair to authorize access to anything else. Also consider the note, in the Secure Shell documentation linked above, about HTML5 filesystems being a relatively young technology. A topic for another day is how to integrate a physical hardware token like a YubiKey, so that the private SSH key is never exposed to any client-device software.

# don’t allow the private key to be written to disk
cd /dev/shm
# generate the actual keypair
ssh-keygen -f gce-instance-ssh
# to SSH into the system where keys are being generated,
# authorize the public key
cat gce-instance-ssh.pub >> ~/.ssh/authorized_keys
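As an aside, the same flow can be rehearsed non-interactively with a throwaway key (temp directory and empty passphrase, which is fine for a demo but not for a real key):

```shell
# Generate a disposable keypair in a temp directory (-q quiet, -N ""
# empty passphrase) and print the public key's fingerprint, purely
# to illustrate the two files that ssh-keygen produces.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$tmp/gce-instance-ssh"
ssh-keygen -lf "$tmp/gce-instance-ssh.pub"
```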

This creates the files gce-instance-ssh and gce-instance-ssh.pub. Both files need to be copied onto the Chromebook for importing into Secure Shell. I did this by running cat gce-instance-ssh and cat gce-instance-ssh.pub and copy-pasting the contents of each into a Chrome extension that can create and edit plain text files. Secure Shell requires both gce-instance-ssh and gce-instance-ssh.pub in order to import a keypair, so I shift-clicked to select both files in the Import dialog (reached from the button to the right of the Identity: field in the Secure Shell connection dialog). When selecting only the private key file, there seems to be little or no UI feedback that anything has happened at all.

If successful, the drop-down next to Identity: will have a new entry, whose name appears to be the basename of the imported key files. In this case, gce-instance-ssh.

Telling Ubuntu Network Manager to leave an interface alone

I wanted to assign a static IP address to a USB-to-Ethernet interface. Edit /etc/network/interfaces… easy, right? Then NetworkManager starts periodically getting involved and making conflicting changes. I’m not yet ready to embrace its GUIness, so I just wanted to disable it.

Turns out it’s easy if you know where to look:

sudo vi /etc/NetworkManager/NetworkManager.conf
# Add a section like this. unmanaged-devices takes a
# semicolon-separated list of "mac:" entries naming the
# Ethernet MAC addresses of the interfaces to ignore
# (the address below is a placeholder):
#
#   [keyfile]
#   unmanaged-devices=mac:00:11:22:33:44:55

# Then restart NetworkManager
sudo service network-manager restart

It should then leave your interface(s) in peace.

Ubuntu Live CD with FOG 1.1.2

TL;DR: To do what the title says, get FOG working and then go to its admin page (http://MY-SERVER-IP/fog/management/index.php). Click the question-mark menu item (mouseover text “FOG Configuration”), then select “PXE Boot Menu” from the links on the left, and finally select “Advanced Configuration Options”. This produces a text box where a standalone iPXE script can be pasted in, which will be available from a magically appearing “Advanced” option at the end of the default list presented when your client system netboots. The script that did the trick for me is:

set arch i386
item fog.precise Run Ubuntu 12.04 LTS LiveCD (32 bit)
choose target && goto ${target}
:fog.precise
# The kernel's path is relative to /var/www/fog/service/ipxe.
# The files at the nfsroot are a copy of the mounted contents of
# the ubuntu-12.04.4-desktop-i386.iso. Note that this 'kernel'
# line is one long line. ('nfsroot' is an argument to 'kernel'.)
kernel howtogeek/linux/ubuntu/precise/casper/vmlinuz boot=casper netboot=nfs nfsroot=MY-SERVER-IP:/tftpboot/howtogeek/linux/ubuntu/precise
# initrd should be on its own line so iPXE grabs it via HTTP,
# instead of requiring the kernel to grab via NFS
initrd howtogeek/linux/ubuntu/precise/casper/initrd.lz
boot || goto MENU

Mount an ISO persistently with a line like this in /etc/fstab:
/tftpboot/howtogeek/linux/ubuntu-12.04.4-desktop-i386.iso /tftpboot/howtogeek/linux/ubuntu/precise udf,iso9660 user,loop 0 0

The full version:

FOG is a really quick-and-easy way to set up a DHCP/TFTP/HTTP/NFS server, which can be a lifesaver when doing low-level experiments on hardware that might not have a CD/DVD drive, bootable USB, etc. I’ve set this stuff up manually in the past and it is always very tedious. I feel compelled to mention that PXE is a completely insecure protocol: it downloads arbitrary bytes over a cleartext network connection and executes them in CPU ring 0. Configurations like this are unsuitable for anything other than lab-style environments.

I stumbled upon all of this when I wanted to do almost exactly what this link suggests: http://www.howtogeek.com/61263/how-to-network-boot-pxe-the-ubuntu-livecd/. Unfortunately it targets an earlier version of FOG, and rather than install the old version I thought I would see what I could do. That HOWTO suggests editing /tftpboot/howtogeek/menus/linux.cfg, introduced in a previous HOWTO (http://www.howtogeek.com/57601/what-is-network-booting-pxe-and-how-can-you-use-it/), which all comes down to the file /tftpboot/pxelinux.cfg/default. That file no longer exists in FOG v1.1.2, because the PXE boot menu is now dynamically generated on the server by PHP code with a database backend.

With the help of Wireshark, I was able to figure out fairly quickly how FOG was working. The TFTP/PXE part was working just fine; the trick was figuring out how it gets the kernel, initrd, and root filesystem for the option that ends up booting. Essentially all of my problems were with specifying the right access method and path for the kernel, initrd, and root filesystem.

One can pretend to be a client using curl (with its awesome-sauce support to trivially stick in POST arguments):

curl --data "mac0=aa%3Abb%3Acc%3Add%3Aee%3Aff&arch=i386" http://MY-SERVER-IP/fog/service/ipxe/boot.php##params
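The mac0 value is just the client’s MAC address with each colon percent-encoded as %3A. For a different address, something like this builds the POST body:

```shell
# Percent-encode the colons in a MAC address so it can be passed
# as the mac0 POST parameter to FOG's boot.php.
mac="aa:bb:cc:dd:ee:ff"
mac_enc=$(printf '%s' "$mac" | sed 's/:/%3A/g')
echo "mac0=${mac_enc}&arch=i386"
```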

My apologies for forcing you to probably read the FOG wiki, two howtogeek articles with painfully low information density, and then this, but perhaps it will show up as a search result that can help people frustrated with instructions that apply to stale FOG versions.

How to get rid of Alt-Tab and Alt-` in Ubuntu 12.04 Unity

I was happy with the Alt-Tab behavior before Unity. I tend to switch windows very rapidly, and the attempt to animate things gets in my way; I also find the distinction between Alt-Tab and Alt-` very annoying, especially because under certain circumstances Alt-Tab actually does behave as Alt-` does.

Here are some instructions to fix this. I selected the Static Application Switcher. I have not tried the Shift Application Switcher.