I am building my personal private cloud. I am considering using second-hand Dell OptiPlexes as worker nodes, but they only have 1 NIC and I’d need a contraption like this for my redundant network.
Then this idea came to mind. In theory, such a one-box solution could be faster than gigabit too.
I have been trying to do bonds with USB adapters, and while they usually seem to work fine at first, they just seem to drop out randomly when run 24/7, so I stopped doing that. In theory it seems like a good idea, though.
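For reference, a bond like the one described can be sketched with iproute2. This is a minimal sketch, not a tested config: the interface names (`eth0`, `usb0`) and the address are placeholders, and 802.3ad/LACP mode requires a switch that supports it.

```shell
# Minimal bonding sketch (run as root; interface names are examples).
# mode 802.3ad = LACP, needs switch support; miimon 100 = link check every 100ms.
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth0 down
ip link set eth0 master bond0
ip link set usb0 down
ip link set usb0 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0   # placeholder address
```

If the switch is unmanaged, `mode balance-alb` is a common fallback that needs no switch-side configuration.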
You just saved me a headache.
– If you’re doing it for performance, you should compare a low-end 2.5GbE switch and cards to all that complexity. Higher performance, simpler, more reliable.
– If it’s to learn about bonding, consider how many nodes you need and whether doing the same thing multiple times is a benefit.
– If it’s for redundancy/reliability, I don’t think this is going to work. My plan is to build a cluster of single-board computers and do everything in containers. Keep the apps portable and the hardware replaceable.
Sure, but you forgot about reusing perfectly good older 1Gbit equipment with enough ports to do nice 4Gbit bonds. I have been doing that for a while with 4-port Intel NIC PCIe expansion cards on servers with free slots, but on thin clients repurposed as servers that is usually not the case.
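Once a 4-port bond like that is up, its health can be checked from the bonding driver's procfs file and from ethtool. A sketch, assuming the bond is named `bond0`:

```shell
# Per-slave link status and (for 802.3ad) LACP partner details:
cat /proc/net/bonding/bond0
# Reported aggregate speed of the bond interface:
ethtool bond0
```

Note that a 4x1Gbit bond gives 4Gbit of aggregate capacity across flows; a single TCP stream is still hashed onto one link and tops out at 1Gbit.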
I’ve just had that happen in general with USB NICs. Random drops for seemingly no reason.
They’re not meant for infrastructure use, just as travel adapters.