As I’m at the point of growing beyond my current 4 servers, how should I handle storing the OS for each server?

Problem: I have no money, and I’m looking for a way to add more servers without buying and maintaining new boot SSDs.

The current setup is that each server has its own 120/240 GB SSD to boot from, and one of the servers is a NAS.

At first I thought of persistent PXE boot, but one of the problems is how I would assign each machine its own image…

I’ve found a post talking about diskless persistent PXE, but it’s 5 years old and it focuses on SAN booting; most people I’ve seen in this sub are against the Fibre Channel protocol, so presumably there’s a better way?

Leaving speed requirements aside (like an all-flash NAS or 10+ Gbit networking): is it possible to add more servers without purchasing a dedicated boot device for each one?

  • 96Retribution@alien.topB

    You could use iSCSI for block storage instead of a Fibre Channel SAN. Each machine would have its own LUN.
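
    A minimal sketch of how that could look on a Linux NAS with targetcli (the LIO iSCSI target); every name, path, size, and IQN below is made up for illustration:

        # On the NAS: create one file-backed LUN per server and export it over iSCSI
        # (sketch only; backstore names, image paths, sizes, and IQNs are illustrative)
        targetcli /backstores/fileio create server1-boot /srv/iscsi/server1-boot.img 64G
        targetcli /iscsi create iqn.2024-01.lan.nas:server1-boot
        targetcli /iscsi/iqn.2024-01.lan.nas:server1-boot/tpg1/luns \
            create /backstores/fileio/server1-boot
        targetcli /iscsi/iqn.2024-01.lan.nas:server1-boot/tpg1/acls \
            create iqn.2024-01.lan.server1:initiator
        targetcli saveconfig

    Repeat with a different backstore and target IQN for each additional machine.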

    However, time, heartache, frustration, and learning curves are all worth something. Newegg has a reasonable SSD for $16 total, including free shipping. I’d find a way to save up the $64 myself, even if it took a month or two, and bring one new machine online every other week.

  • thomasbuchinger@alien.topB

    What OS are you running? If the main storage is on the network, chances are the OS can run from almost anything, for example:

    • Get a 100 GB HDD from Craigslist for free or a few bucks.
    • Use any old crappy USB stick.

    If you still want to go for PXE, you don’t need any fancy networking. All you need is a DHCP server and a TFTP server that serves a kernel and an initramfs. I think dnsmasq can handle both with a bit of configuration, or you can go for a full server-provisioning tool like Cobbler or Foreman.

    You assign each server its own image by placing the file in the directory /var/lib/tftpboot//Kernel (something along those lines); one common pattern for this is sketched below.
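
    A minimal sketch of that setup, assuming dnsmasq on the NAS (192.168.1.10 here, made up) acts as both DHCP and TFTP server and the clients boot the stock PXELINUX loader, which then picks a per-server config file named after the machine’s MAC address; all addresses, MACs, and paths are illustrative:

        # /etc/dnsmasq.conf (sketch: dnsmasq on the NAS does DHCP + TFTP)
        dhcp-range=192.168.1.100,192.168.1.200,12h
        # BIOS clients fetch the PXELINUX bootloader over TFTP
        dhcp-boot=pxelinux.0
        enable-tftp
        tftp-root=/var/lib/tftpboot

        # /var/lib/tftpboot/pxelinux.cfg/01-aa-bb-cc-dd-ee-ff
        # (one file per server, named "01-" plus its MAC address; values illustrative)
        DEFAULT server1
        LABEL server1
          KERNEL server1/vmlinuz
          APPEND initrd=server1/initrd.img root=/dev/nfs nfsroot=192.168.1.10:/srv/roots/server1 ip=dhcp rw

    Note that the initramfs needs NFS-root support for an APPEND line like that, and UEFI clients need an EFI bootloader (e.g. GRUB’s netboot image) instead of pxelinux.0.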

  • Professional-Bug2305@alien.topB

    You can go down to literally a USB stick, or even a microSD card if they support it. ESXi works on an SD card with a few config tweaks.

  • andre_vauban@alien.topB

    You will need to PXE boot into a RAM disk and then use iSCSI/NFS/Ceph/etc. for persistent storage.
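
    As a tiny illustration of the persistent-storage half, a diskless node could mount its data from the NAS like this (address and paths are made up; a sketch, not a specific recommendation):

        # /etc/fstab on a diskless node: the OS itself lives in the PXE-loaded RAM disk,
        # while persistent data comes from the NAS over NFS
        192.168.1.10:/srv/persistent/server1  /srv/data  nfs  defaults  0  0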

  • DWolfUK40@alien.topB

    With the price of SSDs being what it is now for a small 100 GB drive, why bother with the additional setup and potential failure points?

    I’ve run ESXi over the network, and even that wasn’t fun, with longish boot times. I certainly wouldn’t want to run Proxmox that way. These days there’s really no reason not to have “some” fast direct storage in each server, even if it’s mainly used as cache.

    What you’re looking for is possible, but to me the saving of roughly $20 per machine just isn’t worth introducing more headaches.

  • rweninger@alien.topB

    I am an advocate of the FC protocol. I love it, much more than iSCSI. But I hate SAN booting; it is a pain in the ass. You need a server to host the images, and you have to build up a whole SAN infrastructure. I guess two boot SSDs are cheaper: 2x 64 GB NVMe SSDs with a PCIe card or 2x 64 GB SATA SSDs cost next to nothing.

  • AceBlade258@alien.topB

    I use NFS roots for my hypervisors, and iSCSI for the VM storage. I previously didn’t have iSCSI in the mix and was just using qcow2 files on the NFS share, but that had some major performance problems when there was a lot of concurrent access to the share.

    The hypervisors use iPXE to boot (mostly; one of them has gPXE on the NIC, so I didn’t need to have it boot to iPXE before the NFS boot).

    In the past I have also used a purely iSCSI environment, with the hypervisors using iPXE to boot from iSCSI. I moved away from it because it’s easier to maintain a single NFS root for all the hypervisors for updates and the like.
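
    For reference, the iPXE-boots-straight-from-iSCSI variant can be as small as a script like this (a sketch only, not this poster’s actual setup; the target address and IQNs are made up):

        #!ipxe
        # attach this host's iSCSI LUN on the NAS and boot the OS installed on it
        dhcp
        set initiator-iqn iqn.2024-01.lan.server1:boot
        sanboot iscsi:192.168.1.10::::iqn.2024-01.lan.nas:server1-boot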

    • JoaGamo@alien.topOPB

      How? Are you loading a configuration from a device plugged into each hypervisor server? Any project I should read up on?

      • AceBlade258@alien.topB

        The servers use their built-in NICs’ PXE to chainload iPXE (I still haven’t figured out how to flash iPXE onto a NIC), and then iPXE loads a boot script that boots from NFS.

        Here is the most up-to-date version of the guide I used to learn how to NFS boot: https://www.server-world.info/en/note?os=CentOS_Stream_9&p=pxe&f=5 - this guide is for CentOS, so you will probably need to do a little more digging to find one for Debian (which is what Proxmox is built on).

        iPXE is the other component: https://ipxe.org/
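
        For a rough idea, such an iPXE boot script can look something like the sketch below; the server address, paths, and kernel parameters are illustrative and distro-dependent (the linked guide has the real details):

            #!ipxe
            # fetch kernel + initramfs over TFTP, then mount the root filesystem over NFS
            dhcp
            kernel tftp://192.168.1.10/hv/vmlinuz root=/dev/nfs nfsroot=192.168.1.10:/srv/roots/hv ip=dhcp rw
            initrd tftp://192.168.1.10/hv/initrd.img
            boot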

        It’s worth pointing out that this was a steep learning curve for me, but I found it super worth it in the end. I have a pair of redundant identical servers that act as the “core” of my homelab, and everything else stores its shit on them.

  • alias4007@alien.topB

    To avoid a single point of failure for each new server, I would add a $15 Inland SSD per server, even on a zero-dollar budget.