Cockpit¶
Cockpit is a web-based graphical interface for managing Linux servers.
Install cockpit
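On Ubuntu, for example
sudo apt install cockpit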
Enable it
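Typically something like
sudo systemctl enable --now cockpit.socket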
Cockpit will now be running on port 9090.
For SMB file sharing, first install Samba
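For example
sudo apt install samba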
Enable it on startup and start it
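Assuming the Samba service is named smbd, as it is on Ubuntu
sudo systemctl enable smbd
sudo systemctl start smbd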
You'll need curl if you don't already have it
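For example
sudo apt install curl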
Then add the file sharing plugin
curl -sSL https://repo.45drives.com/setup | sudo bash
sudo apt-get update
sudo apt-get install cockpit-file-sharing
You'll need to set a samba password for your user
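For example, for your current user
sudo smbpasswd -a $USER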
Restart samba
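sudo systemctl restart smbd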
Creating a RAID 1 array with two disks¶
Go to the storage tab. You should see all the drives on the right and nothing under devices. Click the menu button to create a RAID device.
To view RAID devices on the command line
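For example
cat /proc/mdstat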
You can also see the array with
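Assuming the array device is /dev/md0 (yours may be named differently, e.g. /dev/md127)
sudo mdadm --detail /dev/md0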
The lsblk command will show you all your devices, and you should see the RAID 1 array here as well
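lsblk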
You can mount the drive manually with
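For example, if the array is /dev/md0 and you want to mount it at /mnt/raid1 (both are just examples)
sudo mkdir -p /mnt/raid1
sudo mount /dev/md0 /mnt/raid1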
To unmount
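sudo umount /mnt/raid1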
Simulating failing a drive¶
To simulate a failure, mark one of the disks in the array as failed with mdadm. Note that I reference the array first, then a disk from the array.
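For example, with the array at /dev/md0 and /dev/sdb as the disk to fail (substitute your own devices)
sudo mdadm /dev/md0 --fail /dev/sdb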
After a drive fails, you can remove it from cockpit or from the command line
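Using the same example devices
sudo mdadm /dev/md0 --remove /dev/sdb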
After removing the drive, you can turn off your machine, add another drive, and then add it to the RAID array in Cockpit. It will then mirror the data onto the new drive.
ZFS Instructions¶
For a ZFS storage pool, install the ZFS utilities
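On Ubuntu
sudo apt install zfsutils-linux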
You'll need to install git
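sudo apt install git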
Then follow the instructions here: https://github.com/45Drives/cockpit-zfs-manager
Make sure to follow the instructions to enable Windows to see the snapshots.
I had to explicitly set the snapdir property to visible
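For example, on a pool named tank (the pool name used later in this guide)
sudo zfs set snapdir=visible tank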
Copy the zfs folder into the Cockpit directory
git clone https://github.com/45drives/cockpit-zfs-manager.git
sudo cp -r cockpit-zfs-manager/zfs /usr/share/cockpit
So that you can view the snapshots in Windows on the mapped drive, you need to add the following to the Samba share configuration
shadow: snapdir = .zfs/snapshot
shadow: sort = desc
shadow: format = %Y.%m.%d-%H.%M.%S
shadow: localtime = yes
vfs objects = acl_xattr shadow_copy2
You can take snapshots in Cockpit manually, but unfortunately it does not provide a way to schedule them.
Fortunately, Sanoid can maintain snapshots for us quite easily.
Follow the installation instructions here: https://github.com/jimsalterjrs/sanoid/blob/master/INSTALL.md
I went ahead and created the following config at /etc/sanoid/sanoid.conf
[tank/immich]
use_template = production
[template_production]
frequently = 0
hourly = 36
daily = 30
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
You can see other templates in the repo: https://github.com/jimsalterjrs/sanoid/blob/master/sanoid.conf
If you want to view the timer, you can use
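If sanoid was installed with its systemd timer, something like
systemctl list-timers sanoid.timer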
Now you're up and running and have snapshots!
Simulating disk failure¶
For ZFS, this is called taking the disk "offline". We can do this in Cockpit by simply using the menu option and clicking Offline. This will change the state of this disk to FAULTED and make our storage pool Degraded. If we had another disk at the ready, we could go ahead and replace the disk, but alas we do not. So let's go ahead and detach it instead. We actually get a healthy status now that we've detached the failed drive.
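The rough command-line equivalents, assuming the pool is named tank and the failed disk is /dev/sdb
sudo zpool offline tank /dev/sdb
sudo zpool detach tank /dev/sdb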
Let's shut down the machine and try to replace the disk. I'm going to add yet another disk for simulating replication.
You can easily add the new disk to the pool which will default it to a Mirror.
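From the command line this would look roughly like the following, assuming the surviving disk is /dev/sdc and the new disk is /dev/sdd
sudo zpool attach tank /dev/sdc /dev/sdd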
For replication¶
By default, we already have syncoid installed from our Ubuntu installation.
I'm going to create a new pool called backup and use "disk" mode for a single disk, then format the 4th drive. To replicate, I can simply run
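Something along these lines, using the dataset from the sanoid config above and the new backup pool
sudo syncoid tank/immich backup/immich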
Nginx Proxy¶
Make sure you configure the allowed origins and the necessary nginx configuration as described here: https://garrett.github.io/cockpit-project.github.io/external/wiki/Proxying-Cockpit-over-NGINX
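A rough sketch of what the linked guide describes; cockpit.example.com and the certificate paths are placeholders for your own setup. The proxy needs WebSocket support, and Cockpit needs to know the external origin in /etc/cockpit/cockpit.conf.
server {
    listen 443 ssl;
    server_name cockpit.example.com;
    ssl_certificate /etc/ssl/certs/cockpit.pem;
    ssl_certificate_key /etc/ssl/private/cockpit.key;
    location / {
        # Cockpit listens locally on 9090
        proxy_pass https://127.0.0.1:9090;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        # Cockpit uses WebSockets, so upgrade the connection
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_buffering off;
    }
}
And in /etc/cockpit/cockpit.conf
[WebService]
Origins = https://cockpit.example.com wss://cockpit.example.com
ProtocolHeader = X-Forwarded-Proto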