One of the mainstays of my household for the past few years has been a 2011 Mac Mini that sits in my utility room, happily serving media via Plex Media Server and downloading media via BitTorrent. (In many cases, I subscribe to streaming services, but choose to watch the shows in question using Plex because the player apps for said services are so abysmal. Looking at you, CBS All Access.) The server is running macOS 10.13, which is as high as this model can go. It’s also out of support when 11.0 drops, and won’t get any more security updates.
So why stick with macOS? Good question. There was a time when I would have said that the machine was hosting my iTunes library, for purposes of Home Sharing, but not anymore, not since I signed up for Apple Music. If I ditched Apple Music now, it wouldn’t be to go back to Home Sharing, it would be for Spotify. So there’s really nothing remaining that my Mini does that is exclusively Mac-based.
Here are the tasks the server currently performs:
- Serves media via Plex Media Server
- Finds new TV shows using Sonarr
- Finds movies using Radarr
- Finds subtitles for media using Bazarr
- Serves as a bridge between my Vera home automation hub (and all the Z-Wave devices it controls) and Apple’s HomeKit, via Homebridge
- Serves as a Time Machine backup destination for my laptop
- Serves some old files that I don’t want to get rid of
- Serves as a print server, allowing me to print to my Dymo label maker from my laptop without connecting via USB
All that shit can be done on a Linux box.
And the fact is, macOS has a few attributes that make it less than ideal as a server. It’s difficult to daemonize some apps, like Plex; you need to have a user logged in. And the GUI on the Mac is always active, even if the machine is in a closet with no monitor attached. That’s nice, since I can always VNC into it, but it’s also overhead that doesn’t need to be there. The Docker implementation for macOS leaves a lot to be desired, too.
So I decided to make the leap, and pave the Mini and install Linux. The flavor I chose is Ubuntu Linux 20.04 LTS Server. (The Server edition doesn’t install things like a GUI — it’s all command-line, baby!) I figure that will allow me to squeeze some more life out of this old thing.
Full disclosure: The process I describe herein represents my second stab at doing this. The lessons I’ve learned are informed by all the things I could have done better the first time. Learn from my mistakes. One of the primary things I want to do "better" this time is to Dockerize as many services as I can. Docker allows you to run services without a lot of extraneous installations. For example, Radarr and Sonarr require Mono, which installs a lot of stuff. With Docker, that’s all included. So this is partially an experiment to see how little I can install on this box.
Important step! Before I did anything else, I looked into whether I could use my HFS+-formatted external drives, such as the RAID where my media is stored, under Linux. Yes, I can, but only if the volumes are not journaled. So the first thing to do is to turn off journaling, using the diskutil command. These volumes had been non-journaled for a while, so that wasn’t a problem for me, but be aware of it if you’re going to be doing the same thing.
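For reference, disabling journaling on macOS looks something like this (the volume name here is a placeholder; substitute your own):

```shell
# Run on macOS before migrating. "Media" is a placeholder volume name;
# repeat for each HFS+ volume you plan to mount under Linux.
diskutil disableJournal /Volumes/Media
```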
Erasing and Installing
First thing to do is bring the Mini to where there’s a monitor and keyboard. Actually, first thing was to make sure, before I got rid of the macOS, that I had a current Time Machine backup. Don’t wanna do anything I can’t undo later, after all. (During my first attempt, that saved my ass a few times, as there were configuration files in that backup that could indeed transfer to the new system.)
So I took the Mini into my home office (also known as my bedroom) and hooked it up to my monitor and keyboard, and also ethernet. That’s one of the first lessons I learned the first time through — the installer I’m using doesn’t see the Mini’s wifi chip. No problem there, as it will be hardwired anyway. I just had to plug it into the nearest access point. Ethernet cables, I got in spades.
So of course I need an installer. I moseyed over to Ubuntu’s website and downloaded the ISO for their Server version. Then, using my laptop and the UNIX dd command, I cloned that image onto an old USB stick, which I now realize I borrowed from my friend Clint and never gave back. Sorry, Clint.
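For the record, the dd step looks roughly like this. The device name below is a placeholder; double-check with diskutil list on macOS before running, because dd will cheerfully overwrite whatever disk you point it at.

```shell
# Sketch, assuming the USB stick shows up as /dev/disk2 on macOS
# and the ISO filename matches what Ubuntu's site serves.
diskutil unmountDisk /dev/disk2
sudo dd if=ubuntu-20.04-live-server-amd64.iso of=/dev/rdisk2 bs=1m
```

Writing to the raw device (/dev/rdisk2 rather than /dev/disk2) makes the copy noticeably faster on macOS.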
So now, bootable USB stick in hand, I reboot with the Option key down, to boot into "EFI Boot" and head into the installer.
And then get a cup of coffee, because this USB stick ain’t exactly USB3.1.
First time around, I ignored the message that there’s a newer version of the installer. This time, I’m in no hurry, so I opted to update it on the spot. This ended up taking almost no time and having no appreciable impact on installation.
Next up is network selection. Again, the wifi chip is not seen. My router has a DHCP reservation for my Mini, and since I’m using the same Ethernet card, it will get the same IP address under Linux that it did under macOS. Don’t have to change any of my bookmarks!
When it comes time to select the drive to install on, I choose the 250GB SSD I’d installed a while back. Folks, if you have an aging computer, replacing that spinning hard drive with an SSD is the best present you can buy yourself. I also enabled Logical Volume Management. Interestingly, the LVM automatically chooses to make my boot volume 111GB, half the size of the 222GB that it shows as usable. Why, I’m not sure. But I know LVM allows for some flexibility in volume creation, and I don’t need much space on my server drive, so I’m gonna leave that as it is for now.
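If I ever need that other half, LVM makes it easy to grow the root volume later without reinstalling. A sketch, assuming Ubuntu’s default volume group and logical volume names (ubuntu-vg/ubuntu-lv):

```shell
# Grow the root logical volume into all remaining free space in the
# volume group, then grow the ext4 filesystem to match.
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
```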
Next it asks to set up a user and hostname. I go with "calliope" for the hostname, as that was what the Mini was called. (I name my computers after Greek muses. Calliope was the muse of epic poetry.)
Next, and most important, I enabled SSH server. Can’t get to the box headless without that.
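Once it’s up, it’s also worth copying over an SSH key so I’m not typing a password on every login; something like this (username and hostname are mine):

```shell
# Append my local public key to the server's authorized_keys.
ssh-copy-id bhawkins@calliope
```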
Next comes a selection of services that Ubuntu Server can install automatically. The only one I’m interested in is Docker. One of these days, I’ll explore some of the ones I haven’t heard of, but this is not that day. Anyway, here’s a shot of the list.
And now the actual installation begins. More coffee. Once it’s done, it’s the old familiar Linux startup sequence, with an interminable string of incomprehensible information and little green "OK"s. Love that shit. Kinda annoying that those messages keep coming for a bit even after login.
But that’s moot. As soon as I confirm it has the right IP and I can SSH into it, down it goes and back into the utility room, where I hook it up to the printer and to my 5TB media RAID.
That RAID is an older unit, with connections for USB2, FireWire 800, and eSATA. Fortunately, this Mini does indeed have FireWire 800, and it works out of the box in Ubuntu. Yay! Also, I have a second internal drive in the Mini (its original, actually), that stores my "Cold Storage" file share, with all my old graphics projects and what not.
So, because these volumes are formatted as HFS+, next order of business is to install a driver for that filesystem. So…
sudo apt-get install hfsprogs
I’ve decided everything will be mounted under /mnt: my cold storage at /mnt/storage, and my media drive at /mnt/newhamsterdam. (I named the media drive "Hamsterdam" years and years ago, possibly while The Wire was still on the air, and that drive’s successors have all been called "New Hamsterdam." I see no reason to stop now.) So:
sudo mkdir /mnt/storage
sudo mkdir /mnt/newhamsterdam
To figure out what to mount, I use the lsblk command to list all block devices on the system. Here’s what I get:
Well, the 5.5 TB volume is obvious — it’s /dev/sdc2. The storage volume is a terabyte drive, and it looks like /dev/sda2 is the most likely one there. So I give it the commands:
sudo mount -t hfsplus -o force,rw /dev/sdc2 /mnt/newhamsterdam
sudo mount -t hfsplus -o force,rw /dev/sda2 /mnt/storage
That worked — I can see and write to the drives. (Full disclosure: The first time around, I did have to change permissions on the media drive to match my current user.)
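If you hit the same permissions issue, the fix is along these lines (bhawkins is my user; adjust to yours):

```shell
# Take ownership of everything on the media mount.
sudo chown -R bhawkins:bhawkins /mnt/newhamsterdam
```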
Now to share these volumes. I do want to be able to access them from my laptop. So it’s time to set up Samba.
Samba is the open-source implementation of the SMB protocol, used by Windows and Mac machines to share files. Following this tutorial, I will set that up now.
First, I install Samba itself:
sudo apt-get update
sudo apt-get install samba
Once that’s installed, time to tell it where my shares are. I’m gonna start by putting one in my home directory, just to have a place to dump stuff from my laptop as needed (such as all my backed-up config files).
mkdir /home/bhawkins/shared
sudo nano /etc/samba/smb.conf
Then, in the conf file, I append:
[Shared]
   comment = Samba on Ubuntu
   path = /home/bhawkins/shared
   read only = no
   browsable = yes
Now I restart the Samba daemon:
sudo service smbd restart
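Worth knowing: Samba ships a testparm utility that checks smb.conf for errors, so you can catch typos before restarting the daemon:

```shell
# Validate /etc/samba/smb.conf and dump the non-default settings.
testparm -s
```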
(Honestly, I’m not really clear on which services should be started with "service" and which with "systemctl" — I’ll need to look into that, maybe post about it.)
Now I have to give my user a password in Samba. Note that Samba keeps its own password database, separate from the system accounts, so this won’t automatically stay in sync with my Unix password if I ever change it.
sudo smbpasswd -a bhawkins
Now it’s time to connect! Let’s see if my laptop sees it.
Sure does! OK, now to add some more shares. Back in /etc/samba/smb.conf, I add:
[New Hamsterdam]
   comment = New Hamsterdam media drive
   path = /mnt/newhamsterdam
   read only = no
   browsable = yes

[Storage]
   comment = Cold Storage
   path = /mnt/storage
   read only = yes
   browsable = yes
Since I don’t plan on adding anything to Cold Storage, I can set it to read-only. I can change that later if I want.
OK, this post is getting long, so I should end here and start on Part 2.