
How to replace a broken hard drive on a bare metal server? #786

Open
onnimonni opened this issue Sep 20, 2024 · 11 comments

Labels: question (Not a bug or issue, but a question asking for help or information)

Comments

@onnimonni

onnimonni commented Sep 20, 2024

Hey,

I'm preparing for the case where one or more of my hard drives will eventually fail. To simulate this I put one of my machines into rescue mode, completely wiped the partitions of one drive with wipefs -a /dev/nvme1n1, and rebooted (nvme1n1 contained the /boot ESP partition in my case and nvme0n1 had the fallback boot).
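
In case someone wants to reproduce this: a quick way to double-check which drive actually holds the ESP before wiping (the column selection is just a suggestion, and device names will differ per machine) is something like:

$ lsblk -o NAME,SIZE,FSTYPE,PARTTYPENAME,MOUNTPOINTS
$ findmnt /boot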

It booted up nicely, and now I'm wondering: what is the recommended way to recreate the partitions on a new drive and let the drive join the existing ZFS pool?

I tried just deploying the same disko config again, and it fails because of the missing /boot partition:

$ nix run nixpkgs#nixos-rebuild -- switch --fast --flake .#myHost --target-host root@$MY_SERVER_IP --build-host root@$MY_SERVER_IP

...

updating GRUB 2 menu...
installing the GRUB 2 boot loader into /boot...
Installing for x86_64-efi platform.
/nix/store/dy8a03zyj7yyw6s3zqlas5qi0phydxf2-grub-2.12/sbin/grub-install: error: unknown filesystem.
/nix/store/a6y2v48wfbf8xzw6nhdzifda00g7ly7z-install-grub.pl: installation of GRUB EFI into /boot failed: No such file or directory
Failed to install bootloader
Shared connection to X.Y.Z.W closed.
warning: error(s) occurred while switching to the new configuration
@Mic92
Member

Mic92 commented Sep 20, 2024

Disko can run incrementally. We don't recommend this for users who don't have good recovery options, since we have not tested all edge cases, but if you are just testing you can check whether it works for your configuration.
In this case you would run the disko CLI with the --mode format option instead of --mode disko.
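
For example, assuming a flake-based config with a host named myHost (a placeholder; adjust to your setup), a format-only run could look like:

$ nix run github:nix-community/disko -- --mode format --flake .#myHost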

@onnimonni
Author

onnimonni commented Sep 20, 2024

Thanks for the suggestion 🙇. And yes, this machine doesn't have anything important yet, so losing all of my data is okay.

I tried your suggestion by moving my flake and all .nix files into /root/ on the server and got this error:

[root@localhost:~]# ls -lah
total 19
-rw-r--r-- 1  501 lp   5639 Sep 20 09:19 disko-zfs.nix
-rw-r--r-- 1  501 lp    891 Sep 18 20:06 flake.nix
-rw-r--r-- 1  501 lp    910 Sep 19 13:46 myHost.nix
-rw-r--r-- 1  501 lp    198 Sep 20 09:03 programs.nix
-rw-r--r-- 1 root root    6 Sep 20 13:11 test.txt

[root@localhost:~]# nix --experimental-features "nix-command flakes" run github:nix-community/disko -- --mode format ./disko-fzs.nix
aborted: disko config must be an existing file or flake must be set

@iFreilicht
Contributor

@onnimonni There's a typo in your command. You wrote fzs.nix instead of zfs.nix.

iFreilicht added the question label on Sep 20, 2024
@onnimonni
Author

Ah, that's true, thanks for the help. I guess this doesn't work because I needed to use a configurable list of drives:

$ nix run github:nix-community/disko -- --mode format ./disko-zfs.nix
warning: Nix search path entry '/nix/var/nix/profiles/per-user/root/channels' does not exist, ignoring
error:
       … while evaluating the attribute 'value'

         at /nix/store/p2zlnhfbwx66hmp4l8m3qyyj3yrfr9zh-9qq0zf30wi74pz66rr05zmxq0nv17q1p-source/lib/modules.nix:821:9:

          820|     in warnDeprecation opt //
          821|       { value = addErrorContext "while evaluating the option `${showOption loc}':" value;
             |         ^
          822|         inherit (res.defsFinal') highestPrio;

       … while calling the 'addErrorContext' builtin

         at /nix/store/p2zlnhfbwx66hmp4l8m3qyyj3yrfr9zh-9qq0zf30wi74pz66rr05zmxq0nv17q1p-source/lib/modules.nix:821:17:

          820|     in warnDeprecation opt //
          821|       { value = addErrorContext "while evaluating the option `${showOption loc}':" value;
             |                 ^
          822|         inherit (res.defsFinal') highestPrio;

       (stack trace truncated; use '--show-trace' to show the full trace)

       error: function 'anonymous lambda' called without required argument 'disko'

       at /root/disko-zfs.nix:17:1:

           16| # Only small modifications were needed, TODO: check if this could be srvos module too
           17| { lib, config, disko, ... }:
             | ^
           18| {

But I got it working by using the flake directly instead of just the disko config file. This was probably because I was using the config and options module arguments in my disko config file.

$ nix run github:nix-community/disko -- --mode format --flake .#myHost

After this the partitions were created properly, but it didn't mount the nvme1n1 ESP boot partition to /boot or rejoin the degraded mirrored zpool:

[root@localhost:~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme0n1     259:0    0  3.5T  0 disk
├─nvme0n1p1 259:2    0    1M  0 part
├─nvme0n1p2 259:3    0    1G  0 part /boot-fallback-dev-nvme0n1
└─nvme0n1p3 259:4    0  3.5T  0 part
nvme1n1     259:1    0  3.5T  0 disk
├─nvme1n1p1 259:5    0    1M  0 part
├─nvme1n1p2 259:7    0    1G  0 part
└─nvme1n1p3 259:9    0  3.5T  0 part

[root@localhost:~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  3.48T  2.08G  3.48T        -         -     0%     0%  1.00x  DEGRADED  -

[root@localhost:~]# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
config:

        NAME                       STATE     READ WRITE CKSUM
        zroot                      DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            disk-_dev_nvme0n1-zfs  ONLINE       0     0     0
            12202984158813695731   UNAVAIL      0     0     0  was /dev/nvme1n1
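
Before trying the replace, it helps to note the GUID of the UNAVAIL vdev from the output above and to find the stable path of the freshly created ZFS partition; roughly (the by-partlabel name follows the disk-_dev_..._-zfs pattern visible above, but the exact name may differ):

$ zpool status zroot                        # GUID of the missing vdev, here 12202984158813695731
$ ls -l /dev/disk/by-partlabel/ | grep zfs  # stable name of the newly created ZFS partition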

I then tried to replace the old partition with the new one, but it failed:

$ zpool replace zroot 12202984158813695731 nvme1n1p3
invalid vdev specification
use '-f' to override the following errors:
/dev/nvme1n1p3 is part of active pool 'zroot'
$ zpool replace zroot 12202984158813695731 nvme1n1p3 -f
invalid vdev specification
the following errors must be manually repaired:
/dev/nvme1n1p3 is part of active pool 'zroot'
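
An alternative that might have avoided the detach/attach workaround in the next comment (untested here, so treat it as an assumption): clearing the stale ZFS label that the error complains about, then retrying the replace:

$ zpool labelclear -f /dev/nvme1n1p3
$ zpool replace zroot 12202984158813695731 nvme1n1p3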

@onnimonni
Author

I did get the "new" disk back into the zpool by running:

$ zpool detach zroot 12202984158813695731
$ zpool attach zroot disk-_dev_nvme0n1-zfs nvme1n1p3
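
If the pool had held more data, the resilver kicked off by the attach could be watched with something like:

$ watch -n 5 zpool status zroot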

I then rebooted the machine, and the /boot partition from the "new" disk was now also mounted properly:

[root@localhost:~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme1n1     259:0    0  3.5T  0 disk
├─nvme1n1p1 259:1    0    1M  0 part
├─nvme1n1p2 259:2    0    1G  0 part /boot-fallback-dev-nvme0n1
└─nvme1n1p3 259:4    0  3.5T  0 part
nvme0n1     259:3    0  3.5T  0 disk
├─nvme0n1p1 259:5    0    1M  0 part
├─nvme0n1p2 259:6    0    1G  0 part /boot
└─nvme0n1p3 259:7    0  3.5T  0 part

[root@localhost:~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  3.48T  2.08G  3.48T        -         -     0%     0%  1.00x    ONLINE  -

[root@localhost:~]# zpool status
  pool: zroot
 state: ONLINE
  scan: resilvered 2.32G in 00:00:02 with 0 errors on Sat Sep 21 09:12:54 2024
config:

	NAME                       STATE     READ WRITE CKSUM
	zroot                      ONLINE       0     0     0
	  mirror-0                 ONLINE       0     0     0
	    disk-_dev_nvme0n1-zfs  ONLINE       0     0     0
	    nvme0n1p3              ONLINE       0     0     0

errors: No known data errors
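
One step that may still be needed afterwards (an assumption on my part, not something I verified in this test): re-running the activation so the bootloader actually gets installed onto the recreated ESP, for example:

$ nixos-rebuild switch --install-bootloader --flake .#myHost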

Happy to hear feedback about this approach, but I'm glad to see it worked out 👍

I'm willing to summarize this into a guide and open a PR adding docs/replace-broken-disk.md if you feel the steps here were the "right" ones.

@iFreilicht
Contributor

iFreilicht commented Sep 21, 2024

Glad to hear it!

Hmm, I would say that ideally disko should be able to do this automatically, like I mentioned in #107: modelling a degraded pool and re-attaching devices when disko runs.

Feel free to write a guide on this; I'll be happy to review it! While writing, make sure every step is extremely clear, and show a full configuration that allows readers to follow the exact steps. Ideally, go through all the steps again on your test machine and document them as you go, to make sure the guide actually works.

@jan-leila

In case someone else wants to take a stab at writing the documentation: the repo used for your test configuration (and the relevant point in its history) is https://github.com/onnimonni/hetzner-auction-nixos-example/tree/45aaf7100167f08f417224fd6a1b1dac74795fb9, right @onnimonni?

@jan-leila

I took a stab at drafting some docs on this, but I haven't been able to test them because I don't have any unused hardware lying around. Feel free to take as much or as little inspiration from them as you would like.

@Mic92
Member

Mic92 commented Sep 24, 2024

We usually simulate these steps with qemu's nvme emulation: https://qemu-project.gitlab.io/qemu/system/devices/nvme.html
You can just use truncate to create multiple disk images for testing.

@Mic92
Member

Mic92 commented Sep 24, 2024

This is a script I had lying around:

#!/usr/bin/env nix-shell
#!nix-shell -i bash -p bash -p qemu_kvm -p iproute2

set -x -eu -o pipefail

CPUS="${CPUS:-$(nproc)}"
MEMORY="${MEMORY:-4096}"
SSH_PORT="${SSH_PORT:-2222}"
IMAGE_SIZE="${IMAGE_SIZE:-10G}"

extra_flags=()
if [[ -n ${OVMF-} ]]; then
  extra_flags+=("-bios" "$OVMF")
fi

# https://hydra.nixos.org/job/nixos/unstable-small/nixos.iso_minimal.x86_64-linux
iso=/nix/store/xgkfnwhi3c2lcpsvlpcw3dygwgifinbq-nixos-minimal-23.05pre483386.f212785e1ed-x86_64-linux.iso
nix-store -r "$iso"

for arg in "${@}"; do
  case "$arg" in
  prepare)
    truncate -s"$IMAGE_SIZE" nixos-nvme1.img nixos-nvme2.img
    ;;
  start)
    qemu-system-x86_64 -m "${MEMORY}" \
      -boot n \
      -smp "${CPUS}" \
      -enable-kvm \
      -cpu max \
      -netdev "user,id=mynet0,hostfwd=tcp::${SSH_PORT}-:22" \
      -device virtio-net-pci,netdev=mynet0 \
      -drive file=nixos-nvme2.img,if=none,id=nvme1,format=raw \
      -device nvme,serial=deadbeef1,drive=nvme1 \
      -drive file=nixos-nvme1.img,if=none,id=nvme2,format=raw \
      -device nvme,serial=deadbeef2,drive=nvme2 \
      -cdrom "$iso"/iso/*.iso \
      "${extra_flags[@]}"
    # after start, go to the console and run:
    # passwd
    # then you can ssh into the machine:
    # ssh -p 2222 nixos@localhost
    ;;
  destroy)
    rm -f nixos-nvme1.img nixos-nvme2.img # remove the disk images created by "prepare"
    ;;
  *)
    echo "USAGE: $0 (prepare|start|destroy)"
    ;;
  esac
done
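
Assuming the script is saved as vm-test.sh (the name is arbitrary) and made executable, a run might look like:

$ ./vm-test.sh prepare start
# or as separate steps:
$ ./vm-test.sh prepare
$ ./vm-test.sh start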

@jan-leila

Oh, that would be very useful for testing out the steps I have written. I'll give it a stab sometime later this week (likely on Friday or Saturday when I'm stuck on a plane/layover).
