This article describes the process of upgrading my home server’s hardware and software. I wanted to move from a single Ubuntu 22.04 LTS install to Proxmox, upgrade the RAM, and build a better-planned Docker setup.
1. Backup Configurations
I used rsync to back up the configuration files and data from my Docker apps to an external drive. There was nothing else I wanted to back up, as my other data is not on the boot drive.
$ sudo mount /dev/sdc1 /mnt/sdc1
$ sudo rsync -aP /server/ /mnt/sdc1/backup/server
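A backup like this can be sanity-checked with rsync's dry-run mode, which compares the source against the copy without transferring anything. A small sketch using throwaway paths under /tmp as stand-ins for my real /server and /mnt/sdc1 mounts:

```shell
# Hypothetical stand-ins for /server and /mnt/sdc1/backup/server.
mkdir -p /tmp/demo/server /tmp/demo/backup/server
echo "compose config" > /tmp/demo/server/docker-compose.yml

# Same flags as the real backup: archive mode, with partial/progress.
rsync -aP /tmp/demo/server/ /tmp/demo/backup/server

# Dry-run itemized compare: empty output means the copy is in sync.
rsync -ani /tmp/demo/server/ /tmp/demo/backup/server
```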
2. Upgrade Hardware
I swapped the RAM from a 2x8 kit to a 2x16 kit for more headroom with the VMs. The kit I used is the TeamGroup Vulcan Z 2x16 3200MHz CL16 kit.
3. Install Proxmox and create VMs
I created a Proxmox install USB with Rufus and ran the installer, and when it finished I ran the Post Install script from Proxmox VE Helper Scripts.
Then, I pulled the Ubuntu 24.04 LTS and Fedora 40 ISOs from their respective sites to build the VMs. I opted for the Fedora 40 KDE edition. For Windows, I downloaded the Windows 11 ISO, ran tiny11builder on it, and uploaded the result to Proxmox. I also downloaded the VirtIO drivers.
Then, I created the three VMs.
- Ubuntu 24.04 LTS
  - I changed the System > Machine setting to q35 instead of i440fx, and the System > BIOS setting to OVMF.
  - I gave it 500GB of disk space, 4 threads, and 16GiB of RAM.
- Fedora 40
  - I gave it the same system settings as the Ubuntu VM.
  - I gave it 250GB of disk space, 3 threads, and 8GiB of RAM.
- Windows 11
  - I gave this the same settings and specs as the Fedora VM.
  - I added the VirtIO drivers and enabled TPM.
4. Mount Storage Drives
To mount my storage drives as-is, I followed the storage drive passthrough guide.
I had to find the correct drives in /dev/disk/by-id with this command:
$ find /dev/disk/by-id/ -type l | xargs -I{} ls -l {} | grep -v -E '[0-9]$' | sort -k11 | cut -d' ' -f9,10,11,12
Then, to attach them to the VM, I ran this command:
$ qm set [VM ID] -scsi2 /dev/disk/by-id/[DISK ID]
This command passed through the drive to my Ubuntu VM, so I could then mount it as normal through fstab.
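Inside the VM, the passed-through disk shows up as a regular block device, so the fstab entry looks the same as it would on bare metal. A sketch of such an entry, using a made-up UUID, a hypothetical /mnt/storage mount point, and assuming an ext4 filesystem:

```
# /etc/fstab — find the real UUID with `sudo blkid`
# nofail lets the VM boot even if the disk isn't attached
UUID=0000xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/storage  ext4  defaults,nofail  0  2
```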
In the future, I’d like to either set drives up as LVM in Proxmox, or build a dedicated NAS instead. For now, I don’t want to bother with drastically changing my setup.
5. Set Up Docker and Containers
I installed Docker on the Ubuntu VM. From there, I installed Docker Compose and spun up a Portainer container. I mounted my USB backup drive by passing through the USB device to the Ubuntu VM. I used my backed up Portainer configurations to manually recreate my Docker apps, with a bit more planning. Because I backed up my configurations, the apps were generally able to start as if nothing had changed.
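The "bit more planning" mostly came down to keeping every app's data under one shared root, so a future rsync backup stays a single command. A minimal sketch of what a Portainer stack might look like under that scheme; the MOUNT_SERVER variable and paths are illustrative, not my exact files:

```yaml
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9443:9443"
    volumes:
      # Docker socket so Portainer can manage the local daemon
      - /var/run/docker.sock:/var/run/docker.sock
      # Persistent data kept under the shared config root
      - ${MOUNT_SERVER}/portainer/data:/data
```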
6. Set Up Network Shares
To access my data from the other VMs, I set up Samba shares on the Ubuntu VM.
$ sudo apt install samba
$ sudo nano /etc/samba/smb.conf
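The config addition itself is just a share definition at the bottom of smb.conf. A sketch, assuming a hypothetical /mnt/storage path and a user named "me":

```
[storage]
   path = /mnt/storage
   browseable = yes
   read only = no
   valid users = me
```

After that, the user needs a Samba password (`sudo smbpasswd -a me`) and the service needs a restart (`sudo systemctl restart smbd`) before the share appears.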
It surprised me how simple it was. I tried this in the past when I ran Ubuntu Desktop, and it wouldn’t work unless I did it through the GUI. This time, I added the config and it worked immediately.
7. Set Up Reverse Proxy, Cloudflare Tunnels, and Authentication
7-1. SWAG and Cloudflare Tunnels
Previously, I used Cloudflare tunnels to access my services from outside my network. I’d like to keep using them, but route them through a reverse proxy to simplify the configuration and make it possible to add authentication.
For my reverse proxy, I originally used SWAG, and I found it to be simple to set up so I will use it again. I’m also using two SWAG Docker Mods, auto-reload and dashboard.
I set up a wildcard tunnel in Cloudflare, *.(domain), and pointed it to https://swag:443. Importantly, I set the tunnel’s TLS origin server name to my domain name. Without this, the tunnel spits out a 502 error complaining that the certificate is valid for the domain, but not for swag.
Then, I added a CNAME record that pointed to (tunnelID).cfargotunnel.com. After enabling the configs for services in SWAG, I could connect successfully with the tunnel. I also enabled Always Use HTTPS in the Cloudflare SSL/TLS settings.
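Enabling a service in SWAG means renaming the matching sample under proxy-confs (e.g. portainer.subdomain.conf.sample to portainer.subdomain.conf). The enabled files follow roughly this shape; this is a trimmed sketch rather than the full sample shipped with SWAG:

```nginx
server {
    listen 443 ssl;
    server_name portainer.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        # Upstream resolved by Docker's DNS via the container name
        set $upstream_app portainer;
        set $upstream_port 9443;
        set $upstream_proto https;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```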
7-2. SWAG and Authentik
To authenticate for my services, I wanted to have a single sign-on solution. I used Authentik in the past to authenticate with Cloudflare tunnels, but I want to use it with SWAG now.
Setting it up for services that don’t need account-level integration is easy, because it’s just uncommenting two lines in the proxy config file, then adding a provider and application in Authentik.
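In SWAG’s proxy confs, the relevant lines are includes for Authentik that ship commented out; with them enabled, the conf looks roughly like this (a sketch based on SWAG's sample layout, with the proxy details omitted):

```nginx
    # enable for Authentik (shipped commented out in the .sample file)
    include /config/nginx/authentik-server.conf;

    location / {
        # enable for Authentik
        include /config/nginx/authentik-location.conf;
        # ...the usual proxy.conf include and proxy_pass follow here
    }
```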
Setting it up for services that can utilize Authentik logins is a bit more involved. For example, I wanted to set up a Vikunja instance that could let me use my Authentik login.
First, to actually get the Vikunja container to recognize a config file, I had to add this to the compose file:
volumes:
  - ${MOUNT_SERVER}/vikunja/files:/app/vikunja/files
  - type: bind
    source: ${MOUNT_SERVER}/vikunja/config.yml
    target: /etc/vikunja/config.yml
Then, I set up an OAuth2 provider, copied the keys, and added this to the Vikunja config file:
auth:
  local:
    enabled: false
  openid:
    enabled: true
    redirecturl: "https://vikunja.{DOMAIN}/auth/openid/"
    providers:
      - name: authentik
        authurl: "https://auth.{DOMAIN}/application/o/vikunja/"
        logouturl: "https://auth.{DOMAIN}/application/o/vikunja/end-session/"
        clientid: "{CLIENT ID}" # copy from Authentik
        clientsecret: "{CLIENT SECRET}" # copy from Authentik
        scope: openid profile email
After that, it worked as expected. While this was simple, I had to do some digging to get it working, and I’ll have to do this manually for any other apps I want to set up. I’ve set up Portainer and Memos in a similar way.
8. Conclusion
With that, I’ve completed all the upgrades I had wanted to do. I achieved my goals of ensuring portability of my services and increasing security. I also now have the flexibility of multiple VMs to work with.
In the future, I would like to do a few things:
- Set up a dedicated NAS.
- Set up a DNS server for my network.