As I’m at the point of growing beyond my current four servers, how should I handle storing the OS for each server?

Problem: I’m on a very tight budget and I’m looking for a way to add more servers without buying and maintaining a new boot SSD for each one.

My current setup is that each server has its own 120/240 GB SSD to boot from, and one of my servers is a NAS.

At first I thought of persistent PXE boot, but one of the problems is how I would assign each machine its own image…

I’ve found a post talking about diskless persistent PXE, but it’s 5 years old and it talks about SAN booting. Most people I’ve seen in this sub are against Fibre Channel, so is there a better way nowadays?

Setting aside speed requirements (like an all-flash NAS or 10+ Gbit networking), is it possible to add more servers without purchasing a dedicated boot device for each one?

  • AceBlade258@alien.topB · 11 months ago

    I use NFS roots for my hypervisors, and iSCSI for the VM storage. I previously didn’t have iSCSI in the mix and was just using qcow2 files on the NFS share, but that had some major performance problems when there was a lot of concurrent access to the share.

    The hypervisors use iPXE to boot (mostly; one of them has gPXE on the NIC, so I didn’t need to have it boot to iPXE before the NFS boot).

    In the past I have also used a purely iSCSI environment, with the hypervisors using iPXE to boot from iSCSI. I moved away from it because it’s easier to maintain a single NFS root for all the hypervisors for updates and the like.
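    For reference, booting from iSCSI with iPXE mostly comes down to a `sanboot` line. This is only a sketch with made-up addresses and IQNs, not my actual setup:

    ```
    #!ipxe
    # Illustrative iPXE script for SAN-booting a hypervisor from an iSCSI LUN.
    # The portal IP and target IQN below are placeholders.
    dhcp
    sanboot iscsi:192.168.1.10::::iqn.2000-01.com.example:hypervisor-${hostname}
    ```

    Each hypervisor gets its own LUN/IQN, which is exactly the per-machine maintenance burden that pushed me toward a single shared NFS root instead.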

    • JoaGamo@alien.topOPB · 11 months ago

      How? Are you loading a configuration from a device plugged into each hypervisor server? Any project I should read further into?

      • AceBlade258@alien.topB · 11 months ago

        The servers use their built-in NIC’s PXE to load iPXE (I still haven’t figured out how to flash iPXE to a NIC), and then iPXE loads a boot script that boots from NFS.
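        The chainload looks roughly like this if you use dnsmasq as the DHCP/TFTP server (IPs and paths here are placeholders, not my real config). The trick is detecting whether the client is the NIC’s plain PXE or already iPXE, via DHCP option 175, so you don’t chainload iPXE into itself:

        ```
        # /etc/dnsmasq.conf (fragment)
        enable-tftp
        tftp-root=/srv/tftp
        # Clients that set user-class option 175 are already running iPXE.
        dhcp-match=set:ipxe,175
        # Plain PXE clients get the iPXE binary; iPXE clients get the boot script.
        dhcp-boot=tag:!ipxe,undionly.kpxe
        dhcp-boot=tag:ipxe,http://192.168.1.10/boot.ipxe
        ```

        The boot script itself then just loads a kernel and initrd with an NFS root on the command line, something like:

        ```
        #!ipxe
        kernel http://192.168.1.10/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/nfsroot ip=dhcp rw
        initrd http://192.168.1.10/initrd.img
        boot
        ```

        The initrd needs NFS support for the `nfsroot=` mount to work; the guide below covers building one.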

        Here is the most up-to-date version of the guide I used to learn how to NFS boot: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - this guide is for CentOS, so you will probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).
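        On the NAS side, exporting the root filesystem is a one-liner in `/etc/exports`. The path and subnet are illustrative:

        ```
        # /etc/exports on the NAS.
        # no_root_squash is required so the diskless client can operate
        # on its root filesystem as root.
        /srv/nfsroot  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)
        ```

        Run `exportfs -ra` after editing to apply the change.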

        iPXE is the other component: https://ipxe.org/

        It’s worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant, identical servers that act as the “core” of my homelab, and everything else stores its stuff on them.