Maxwell is essentially a gaming workstation that I use to try out new things locally within my apartment. It dual-boots Windows 10 (on a 60 GB SSD) and CentOS 7 (on a 240 GB SSD). I first built it around February/March 2017 with the goal of creating a workstation I could use both for experimentation and for running BOINC. To be completely honest, part of my motivation was to see how close I could get to surpassing (or at least equalling) some of the older scientific lab hardware I had dealt with in the past, using only consumer-grade parts.

Maxwell is named after James Clerk Maxwell (not to be confused with the neo-soul musician Maxwell or the Gundam Wing character Duo Maxwell).



CPU: Core i7-6700 FC-LGA14C
# of Cores (logical): 8
CPU Clock Speed (GHz): 3.4 (base)
Memory (GB):
Storage: 60 GB SSD (Windows volume), 240 GB SSD (Linux volume), 2× 1 TB WD Blues (ZFS mirrored)
OS: CentOS 7 and Windows 10
IP Address:

Part List


Of these, the following can go away because I don't need a GUI for this host / don't want the rest:

KVM is going to be managed via Proxmox in the future, as will ZFS on Linux (VMs will have volumes exposed to them via NFS). That means that only the following need to be managed via Ansible:


Tasks for Maxwell Rebuild

Storage Tasks

Reprovisioning Tasks

Making Maxwell a Managed Host

VM Creation

Detailed Notes on the ZFS Mirror to RAIDZ1 Transition

Note: Encrypted snapshots are on an external HD in case this goes badly.

1. Disable cronjobs for user jpellman.

2. As root in a screen session: switch to the multi-user target with systemctl isolate multi-user.target, stop BOINC, and unmount /home/boinc and /home. Ensure that /home/jpellman isn't being mounted on Bruno via sshfs.
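A sketch of step 2 as commands (the BOINC service name boinc-client is an assumption; adjust to match however BOINC is actually installed):

```shell
# Drop to the multi-user target (no GUI) and stop BOINC
systemctl isolate multi-user.target
systemctl stop boinc-client    # service name may differ, e.g. "boinc"

# Unmount the BOINC and home filesystems
umount /home/boinc
umount /home

# Confirm nothing under /home is still mounted
mount | grep /home || echo "no /home mounts remaining"
```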

3. Create a sparse file using the number of bytes provided by fdisk -l: truncate -s 1000204886016 /root/raidz1_faux_drive.img
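The point of truncate here is that the file only has an apparent size; almost no blocks are actually allocated. This can be sanity-checked with stat (path changed from /root to /tmp purely for illustration):

```shell
# Create the sparse file: apparent size = the 1 TB drive size
# reported by fdisk -l, but ~0 blocks allocated on disk
truncate -s 1000204886016 /tmp/raidz1_faux_drive.img

# Verify: apparent size should be 1000204886016, blocks near 0
stat -c 'apparent=%s blocks=%b' /tmp/raidz1_faux_drive.img
```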

4. Offline one of the drives in the ZFS mirror: zpool offline pool0 ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y3NSTU5Z

5. Clear out the partition label for the offlined disk:

zpool export pool0
zpool labelclear -f /dev/disk/by-id/ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y3NSTU5Z-part1

6. Create a new raidz1 pool from the sparse file, the offlined drive, and the spare WD Blue that was added:  zpool create datastore raidz1 /root/raidz1_faux_drive.img /dev/disk/by-id/ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y3NSTU5Z /dev/disk/by-id/ata-WDC_WD10EZEX-00WN4A0_WD-WCC6Y7AKHNY8

7. Turn deduplication and compression on by default at the pool level.

zfs set compression=lz4 datastore
zfs set dedup=on datastore
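Both properties are set at the pool's root dataset and inherited by child datasets; they can be double-checked with:

```shell
# Should report lz4 and on, with SOURCE "local"
zfs get compression,dedup datastore
```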

8. Offline the sparse image so that no data is actually written to it (otherwise the sparse file would start consuming space on the root filesystem). The pool runs degraded until the real third disk is added in step 12.

zpool offline datastore /root/raidz1_faux_drive.img

9. Transfer data from the old pool to the new pool.

zpool import pool0
zfs send -R pool0/[email protected] | zfs receive datastore/apache
zfs send -R pool0/[email protected] | zfs receive datastore/home 

10. Mount the new pool and verify that it looks right.

zfs get mountpoint datastore/home
# Set it if not appearing above
zfs set mountpoint=/home datastore/home 
zfs mount datastore/home 

11. Destroy the old pool.  zpool destroy pool0 

12. Replace the offlined sparse image with the third physical disk.  zpool replace datastore /root/raidz1_faux_drive.img /dev/disk/by-id/ata-WDC_WD10EZEX-08WN4A0_WD-WCC6Y7ZT6K9C
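The replace in step 12 kicks off a resilver onto the new disk; progress and overall pool health can be monitored with standard zpool/zfs commands:

```shell
# Watch resilvering progress; the raidz1 vdev should eventually
# show all three physical disks ONLINE
zpool status datastore

# Sanity-check the migrated datasets and their space usage
zfs list -r datastore
```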

Other Wants


attachment:IMG_20190121_154736836.jpg attachment:IMG_20190121_154801532.jpg attachment:IMG_20190121_154834420.jpg


Maxwell (last edited 2020-02-10 06:08:32 by Jaipel)