Posted on May 22, 2025

This is meant as an aggregation of weird filesystem things I’ve done in the past so I can remember them in the future.

# Adding a new SSD

Suppose we have a pre-existing system running BTRFS that we’ve already partitioned and begun to use. Then we run out of disk space, so we buy a new SSD. How can we use the new SSD for normal usage (say, nix builds in /nix/store) without repartitioning?

Normally, if we were using something like ext4, we’d use LVM to abstract over multiple drives. But that doesn’t work here, since we didn’t start with LVM. Similarly, on ZFS we’d just use a pool.

BTRFS has this concept built in, much like ZFS pools. To add a new drive, we don’t need to repartition anything. Instead, assume a starting state something like:

❯ lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme0n1     259:0    0  3.6T  0 disk
nvme1n1     259:1    0  3.6T  0 disk
├─nvme1n1p1 259:2    0    1G  0 part /boot
└─nvme1n1p2 259:3    0  3.6T  0 part /home
                                     /nix/store
                                     /nix
❯ df -h
Filesystem      Size  Used Avail Use% Mounted on
...
/dev/nvme1n1p2  3.6T  2.4T  1.2T  67% /
/dev/nvme1n1p2  3.6T  2.4T  1.2T  67% /nix
❯ sudo btrfs filesystem show /
Label: none  uuid: 2d8d366c-8799-4456-8088-a15b5f905770
        Total devices 1 FS bytes used 2.30TiB
        devid    1 size 3.64TiB used 2.34TiB path /dev/nvme1n1p2

nvme0n1 is the drive we want to add to increase the storage of the filesystem on /dev/nvme1n1p2. Since we’re already working in terms of subvolumes, we can just add /dev/nvme0n1 to the pre-existing filesystem (identified by the UUID above).

❯ sudo btrfs device add /dev/nvme0n1 /
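
If the new drive already carries an old filesystem signature, btrfs device add may refuse to run. A minimal sketch of the forced variant (only do this if you’re sure nothing on nvme0n1 matters):

❯ sudo btrfs device add -f /dev/nvme0n1 /   # -f overwrites an existing filesystem signature on the new device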

This doesn’t require any modifications to the fstab generated by our NixOS configuration: those mounts reference the BTRFS filesystem UUID (with subvol options selecting the subvolumes), and that UUID doesn’t change when a device is added. This gets us the desired result:

❯ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme1n1p2  7.3T  2.4T  5.0T  33% /
/dev/nvme1n1p2  7.3T  2.4T  5.0T  33% /nix
❯ sudo btrfs filesystem show /
Label: none  uuid: 2d8d366c-8799-4456-8088-a15b5f905770
        Total devices 2 FS bytes used 2.30TiB
        devid    1 size 3.64TiB used 2.34TiB path /dev/nvme1n1p2
        devid    2 size 3.64TiB used 188.00GiB path /dev/nvme0n1
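
Note in the show output that the new device starts out nearly empty (188 GiB vs. 2.34 TiB on the original). BTRFS will allocate new chunks across both devices going forward, but it doesn’t move existing data on its own; if you want the existing data spread out, a balance does that. A minimal sketch with illustrative usage filters (a full balance of a multi-TiB filesystem can run for a long time, so this only rewrites chunks that are at most ~75% full):

❯ sudo btrfs balance start -dusage=75 -musage=75 /
❯ sudo btrfs balance status /   # check progress from another shell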

# Failing RAM

When this happened to me, failing RAM presented itself as nix builds on BTRFS that nondeterministically failed or succeeded. On ZFS, it presented as a failure to mount a pool (a more significant error): the system would freeze on pool mounting and require a reboot.

To identify this, I dd’d the memtest86 ISO onto a thumb drive and booted off it. I have 4x32GB RAM sticks, so I ran the test with each stick installed individually to identify which ones were broken. The tests take a nontrivial amount of time (I ran each overnight). Then the faulty RAM can be RMA-ed.
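
For reference, writing the image is a plain dd. A minimal sketch, where memtest86.iso and /dev/sdX are placeholders for the actual image file and thumb-drive device (double-check the device first, since dd will happily overwrite the wrong disk):

❯ lsblk   # confirm which device is the thumb drive
❯ sudo dd if=memtest86.iso of=/dev/sdX bs=4M status=progress oflag=sync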