How to replace a broken hard drive on a bare metal server? #786
Disko can run incrementally, but we don't recommend this for users who don't have good recovery options, since we have not tested all the edge cases. If you are testing, though, you can check whether it works for your configuration.
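(The exact commands used in this thread were not preserved. As a rough sketch, a standalone disko run against a config file looks like the following; the config path is a placeholder, and newer disko releases spell the mode `destroy,format,mount` instead of `disko`:)

```sh
# Rough sketch, not the exact command from this thread.
# --mode disko destroys, re-formats, and mounts the configured disks,
# so only run it when losing the data on them is acceptable.
# (Newer disko releases spell this mode "destroy,format,mount".)
sudo nix run github:nix-community/disko -- --mode disko ./disko-config.nix
```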
Thanks for the suggestion 🙇. And yes, this machine doesn't have anything important on it yet, so losing all of my data is okay. I tried your suggestion by moving my flake and all the .nix files onto the server …
@onnimonni There's a typo in your command. You wrote …
Ah, that's true, and thanks for the help. I guess this didn't work because I needed to use a configurable list of drives …

But I got it working by directly using the flake instead of just the disko config. This was probably because I was using the …
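(The option name above was cut off in this export. As a hypothetical illustration, a flake-based disko run looks roughly like this, where `.#myserver` is a placeholder for your own flake URI and NixOS configuration name:)

```sh
# Sketch of a flake-based invocation; ".#myserver" is a placeholder,
# not taken from this thread.
sudo nix run github:nix-community/disko -- --mode disko --flake .#myserver
```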
After this the partitions were created properly, but it didn't mount the …

I then tried to replace the old partition with the new one, but it failed: …
I did get the "new" disk back into the zpool by running: …
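(The exact command was not preserved here. As a hedged sketch, re-adding a wiped member to a mirrored pool usually comes down to `zpool replace` or `zpool attach`; the pool name `zroot` and the partition paths below are assumptions, not taken from this thread:)

```sh
# Pool name and partition paths are hypothetical; adjust to your layout.
# Replace a failed device with a new one occupying the same slot:
sudo zpool replace zroot /dev/nvme1n1p2
# ...or attach the new partition as an additional mirror of a healthy one:
sudo zpool attach zroot /dev/nvme0n1p2 /dev/nvme1n1p2
# Then watch the resilver complete:
zpool status zroot
```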
And then rebooted the machine and also the …
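(Assuming the truncated step above was reinstalling the bootloader onto the re-created ESP, which is a guess on the editor's part, the usual NixOS command for that is:)

```sh
# Re-run the bootloader installation so the fresh ESP is bootable again.
sudo nixos-rebuild boot --install-bootloader
```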
Happy to hear feedback about this approach, but I'm glad to see this worked out 👍 I'm willing to summarize this into a guide and do a PR to create …
Glad to hear it! Hmm, I would say that ideally disko should be able to do this automatically: like I mentioned in #107, modelling a degraded pool and re-attaching devices when disko runs. Feel free to write a guide on this; I'll be happy to review it! While writing, make sure every step is extremely clear, and show a full configuration that lets readers follow the exact steps. Ideally, go through all the steps again on your test machine and document them as you do, to make sure the guide actually works.
In case someone else wants to take a stab at writing the documentation: the repo being used for the test configuration (and the relevant step in its history) is https://github.com/onnimonni/hetzner-auction-nixos-example/tree/45aaf7100167f08f417224fd6a1b1dac74795fb9, right @onnimonni?
I took a stab at drafting some docs on this, but I haven't been able to test them because I don't have any unused hardware lying around. Feel free to take as much or as little inspiration from them as you would like.
We usually simulate these steps with QEMU's NVMe emulation: https://qemu-project.gitlab.io/qemu/system/devices/nvme.html
This is a script I had flying around: …
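(The script itself was not preserved in this export. A minimal sketch of what such QEMU NVMe emulation might look like, following the QEMU documentation linked above, is shown below; the image sizes and `nixos.iso` are placeholders:)

```sh
#!/usr/bin/env bash
# Not the original script: a minimal reconstruction based on the QEMU
# NVMe docs linked above. Creates two blank NVMe disks so a disko config
# targeting /dev/nvme0n1 and /dev/nvme1n1 can be exercised safely.
set -euo pipefail

qemu-img create -f raw nvme0.img 8G
qemu-img create -f raw nvme1.img 8G

qemu-system-x86_64 \
  -machine q35 -m 2G -enable-kvm \
  -drive file=nvme0.img,if=none,id=nvm0,format=raw \
  -device nvme,serial=deadbeef0,drive=nvm0 \
  -drive file=nvme1.img,if=none,id=nvm1,format=raw \
  -device nvme,serial=deadbeef1,drive=nvm1 \
  -cdrom nixos.iso
```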
Oh, that would be very useful for testing out the steps I have written. I'll give it a stab sometime later this week (likely on Friday or Saturday, when I'm stuck on a plane or a layover).
Hey,

I'm preparing for the case where one or more of my hard drives will eventually fail. To simulate this, I put one of my machines into rescue mode and completely wiped the partitions of one drive with:

```sh
wipefs -a /dev/nvme1n1
```

and rebooted (`nvme1n1` contained the `/boot` ESP partition in my case, and `nvme0n1` had the fallback boot). It booted up nicely, and now I'm wondering: what is the recommended way to recreate the partitions on a new drive and let that drive join the existing zfs pool?
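(As a side note, one way to confirm the simulated failure took effect is to inspect the pool after the reboot; the pool name `zroot` below is an assumption, not taken from this issue:)

```sh
# Terse health summary: prints only unhealthy pools.
zpool status -x
# Full vdev tree for one pool; the wiped member should show as
# UNAVAIL or DEGRADED. "zroot" is a placeholder for your pool name.
zpool status zroot
```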
I tried to do this by just deploying the same disko config again, and it fails because of the missing `/boot` partition: …