Notes on setting up Proxmox¶
I got proxmox up and running the way I want, so now I just want to record my notes.
Proxmox Freezing NIC Issue¶
If your Proxmox server freezes during a large rsync and you have an older computer, it may be caused by a known Intel NIC bug that you can work around. You can confirm it by checking your system logs after the freeze: if you see e1000e: Detected Hardware Unit Hang, then you have the issue. The fix is to disable NIC offloading.
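Offloading is usually disabled with ethtool. A sketch (the interface name `eno1` and the exact set of offloads to disable are assumptions; people report different combinations working for this bug):

```shell
# Disable segmentation/receive offloads on the NIC.
# This change is temporary and is lost on reboot.
ethtool -K eno1 tso off gso off gro off
```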
Replace eno1 with your network interface, which you can find with ip address. Now edit the config so that the change persists
and add the following line (and replace eno1 with your interface):
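With the default ifupdown networking that Proxmox uses, the persistent version might look like this in /etc/network/interfaces (a sketch; the interface name and offload flags are assumptions and should match whatever worked for you above):

```shell
# /etc/network/interfaces (excerpt)
iface eno1 inet manual
        # Re-apply the offload settings every time the interface comes up
        post-up /usr/sbin/ethtool -K eno1 tso off gso off gro off
```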
Reference reddit.
Installing Proxmox¶
I used Fedora Media Writer to write the Proxmox ISO to a USB drive and booted from it. BalenaEtcher is generally good too and should get the job done.
You should be prompted at some point to create a node, or give a name to a node, which is the name for the machine you are installing Proxmox onto. I just named mine proxmox, but something like "hp-z220" or the asset name may have been more appropriate.
DNS¶
Make sure your DNS is working.
If it is not, you can set the DNS at Node -> System -> DNS; you can point it at Cloudflare (1.1.1.1) if you'd like.
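A quick way to check from the node's shell (the hostname is just an example; any domain you trust will do):

```shell
# Should resolve the name to an IP address
nslookup proxmox.com
# Or confirm resolution plus reachability in one shot
ping -c 1 proxmox.com
```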
Repositories¶
Proxmox defaults you to the enterprise repositories, which is a little annoying since this will cause an error when running apt update if you don't have an enterprise subscription.
Go to Node -> Updates -> Repositories. Click on the two enterprise repositories and disable them. Now click on Add and add the two no-subscription repositories.
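For reference, the no-subscription entries end up looking roughly like this (assuming Proxmox VE 8 on Debian bookworm; the release codename and Ceph version must match your install, so treat these as examples rather than copy-paste lines):

```shell
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

# /etc/apt/sources.list.d/ceph.list (version varies by release)
deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription
```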
Storage¶
Setting up storage for the Node to use as Pools for VM Drives¶
Click on the Node (under Datacenter), and we can see all of the "Disks" that have been created for the node, organized by type. I generally stick to LVM-Thin for single disks that are storing VM disk volumes, and ZFS when I'm using an array. Both support snapshots. When you click the button to create one, you will see the option to select the disk that you are formatting. If you don't see the disk that you want in this drop-down, click on "Disks" in the menu tree to see what you're working with. If you see the disk you want, click on it and wipe it. If you can't wipe it because it's mounted, you need to unmount it first. I didn't see a way to mount or unmount disks in this menu, but it can be very easily accomplished with Cockpit.
Cockpit can easily be installed
Note you will have to create a non-root user to be able to log into cockpit.
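A minimal sketch of the install on the (Debian-based) Proxmox host; the user name is just an example, and the group step assumes sudo is installed:

```shell
apt update && apt install -y cockpit
systemctl enable --now cockpit.socket   # listens on port 9090

# Cockpit refuses direct root logins by default, so create a regular user
adduser cockpitadmin
usermod -aG sudo cockpitadmin           # lets the user elevate inside Cockpit
```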
Create thin provisioned zfs¶
LVM-Thin is thin provisioned by its nature, but for ZFS you need to take one extra step.
- Create your ZFS pool using the ZFS tool under Disks at your node. I set compression to lz4, and ashift to 12. If you start with a single disk and move to a mirror later, you can add drives to the mirror with some simple commands.
- Before creating any drives using this ZFS pool, go to the Datacenter level, click on Storage, click your newly created pool, and click Edit. You can now check the box for Thin provision.
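The "simple commands" for growing a single-disk pool into a mirror are along these lines (a sketch; the pool name `tank` and the disk IDs are examples and must match your system):

```shell
# Attach a second disk to the existing single-disk vdev, converting it
# into a two-way mirror; existing data is resilvered onto the new disk
zpool attach tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK

# Watch resilver progress
zpool status tank
```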
Info
If you want to change the compression strategy after creating the pool, you can issue this command. It will only take effect for new writes, not previous writes.
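That command is zfs set; for example, switching to zstd (the pool name `tank` is an assumption):

```shell
zfs set compression=zstd tank
# Verify the property took
zfs get compression tank
```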
Device Passthrough and Sharing¶
I'll talk about actual USB passthrough in a second, but I first want to talk about an external USB drive whose file-level data we want to share with multiple trusted VMs. What you would do is first mount the drive to a path on the host, i.e. /mnt/external_drive (this can easily be done with Cockpit, or use the mount command and add an entry to fstab).
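On the command line that might look like this (the device path is an example; check lsblk first to identify the right partition):

```shell
mkdir -p /mnt/external_drive
lsblk -f                              # find the drive, e.g. /dev/sdb1
mount /dev/sdb1 /mnt/external_drive
```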
Info
If the external drive is ntfs, make sure you install those tools
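On the Proxmox host that would be the ntfs-3g package:

```shell
apt install -y ntfs-3g
```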
Next, at the Datacenter level, you need to create a Directory Mapping. Give it a name and set it to the path that you mounted the drive at.
Now, at a VM level, select Hardware, hit Add -> Virtiofs, you can now select the Directory Mapping ID you created in the previous step.
To mount the drive inside of the VM, run the following:
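A sketch of the mount inside the guest, assuming `share` is the Directory Mapping ID from the previous step and the mount point is an example:

```shell
mkdir -p /mnt/external_drive
mount -t virtiofs share /mnt/external_drive
```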
Then add it to fstab
Add this line
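A sketch of the fstab entry (again assuming the mapping ID `share`; `nofail` keeps the VM booting even if the share is unavailable):

```shell
# /etc/fstab
share  /mnt/external_drive  virtiofs  defaults,nofail  0  0
```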
Converting a RAID1 to zfs mirror for block storage¶
When moving a RAID array to a new OS, the metadata is still on the drives but you still need to reassemble it. This can easily be done inside Cockpit if you see the unassembled array as its own drive.
With command line tools you can check if the array exists:
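For example:

```shell
# Shows any assembled/partially assembled md arrays
cat /proc/mdstat
# Scans member disks and prints arrays mdadm can find
mdadm --detail --scan
```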
and you can reassemble with
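Something like this (the array and member device names are examples):

```shell
# Let mdadm find and assemble everything it recognizes
mdadm --assemble --scan
# Or name the array and members explicitly
mdadm --assemble /dev/md0 /dev/sda /dev/sdb
```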
To convert a RAID1 to a ZFS mirror, what I found to be easiest is to break up the RAID array and free up a disk. The RAID array will be in a "degraded" state, but it will still operate just fine with the one drive. After completely wiping the other drive of RAID metadata, you can format it as a ZFS storage pool in Proxmox.
```shell
# Mark the disk you want to remove as failed
mdadm --fail /dev/md0 /dev/sda
# Remove it from the RAID array
mdadm --remove /dev/md0 /dev/sda
```
Now remove the mdadm meta data
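That's the --zero-superblock operation (run against the removed member, /dev/sda in my case):

```shell
# Wipe the md superblock so nothing tries to reassemble this disk
mdadm --zero-superblock /dev/sda
# Should now report no md superblock on the disk
mdadm --examine /dev/sda
```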
Info
Note, /dev/md0 is where my array was assembled. Check to see if you can do this without assembling it first if you never assembled the array. In this case, sda is the disk I'm removing from the array. Make sure to keep track of which disk is the old RAID, and which disk is the new ZFS.
After creating the ZFS pool with the other drive, you can add a new disk to the VM, mount it, and rsync the data from the RAID disk onto the ZFS disk. The RAID disk should be mounted via the Virtiofs mapping I described previously.
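The copy itself is a plain rsync; both paths are examples (trailing slash on the source copies the directory's contents rather than the directory itself):

```shell
rsync -aHAX --info=progress2 /mnt/external_drive/ /mnt/zfs_disk/
```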
Expanding a Disk¶
If you want to expand a disk given to a VM, navigate to VM → Hardware → Hard Disk → Resize and set the new size (must be greater).
After resizing in Proxmox, the guest OS does not automatically see the extra space. You must expand the partition and filesystem inside the VM.
Assuming the disk inside is ext4, that should be something like the following. Double check what you need to do for whatever filesystem you're actually using inside the VM.
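For an ext4 partition on, say, /dev/sda1 (device and partition numbers are examples; growpart comes from the cloud-guest-utils package):

```shell
# Confirm the disk now shows the larger size
lsblk
# Grow partition 1 of /dev/sda to fill the disk
growpart /dev/sda 1
# Grow the ext4 filesystem to fill the partition (online is fine for ext4)
resize2fs /dev/sda1
```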