From my understanding, the tech giants make enough money that they could keep on growing forever.
But the fediverse? Sure, the main instances that get enough funding are going to be okay, but what about the single-user instances 10 years from now, when there’s a lot more content to download? Won’t they go bankrupt just trying to mirror the big instances?
And I have the impression that the Lemmy giants are going to change over time: does that mean that 50 years from now, the posts I’m writing here today might be lost because the instances that mirrored them will have shut down by then?
I probably misunderstand how the fediverse works, but my worry is that the small instances won’t be able to hold an ever-growing amount of data forever.
I spoke in absolutes for the sake of readability, but I’m as in-the-dark as can be.
Mostly serious answer: the current implementation is not going to scale effectively with growth. The software implementation is still rough around the edges, and the ActivityPub protocol probably needs more knobs to handle bulk data synchronization. Within the service, moderation is a serious challenge with many unanswered questions.
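To make the “needs more knobs” point concrete, here’s a minimal Python sketch of ActivityPub-style push delivery (the inbox URLs and payload are made up for illustration). Every new activity gets POSTed to every subscribed instance individually; there’s no standard “send me everything since timestamp X” bulk call, which is exactly what large-scale catch-up would want:

```python
import json
import urllib.request

# Hypothetical follower list; in ActivityPub these are remote inbox URLs.
subscriber_inboxes = [
    "https://instance-a.example/inbox",
    "https://instance-b.example/inbox",
]

def deliver(activity: dict) -> None:
    """POST one activity to every subscriber, one HTTP request each.

    N new posts x M subscribed instances = N*M deliveries, and a small
    instance that was offline for a day has no bulk catch-up endpoint;
    it just misses whatever wasn't redelivered.
    """
    body = json.dumps(activity).encode()
    for inbox in subscriber_inboxes:
        req = urllib.request.Request(
            inbox,
            data=body,
            headers={"Content-Type": "application/activity+json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # real servers also HTTP-sign this

deliver({
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "object": {"type": "Note", "content": "Hello, fediverse"},
})
```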
Likewise, the back end software implementation is monolithic, meaning it’s one software stack that does everything from sign-in to subscriptions to synchronization and scheduling. Housekeeping and garbage collection probably aren’t that tight, either. This is mostly speculation based on watching things over the last couple of weeks’ growth.
I believe the data store is based on the Postgres RDBMS, which, while robust and scalable, is fussy and needs tuning when turning over large amounts of highly unique data.
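For example (hedged: I don’t know Lemmy’s actual schema, so the table name “comment” and the thresholds below are placeholders), high-churn tables often need per-table autovacuum settings far more aggressive than Postgres’s defaults, which wait for roughly 20% dead rows before vacuuming:

```python
import psycopg2  # pip install psycopg2-binary

# Connection string is a placeholder; adjust for a real deployment.
conn = psycopg2.connect("dbname=lemmy user=lemmy")
conn.autocommit = True

with conn.cursor() as cur:
    # On a table receiving constant federated inserts and updates,
    # vacuum after ~1% churn and refresh planner stats even sooner.
    cur.execute("""
        ALTER TABLE comment SET (
            autovacuum_vacuum_scale_factor = 0.01,
            autovacuum_analyze_scale_factor = 0.005
        )
    """)
```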
None of this is an indictment of the devs! Rather the opposite, because the software IS chugging along while experiencing tremendous growth.
I expect over time the back end will be broken out into microservices that communicate over a highly scalable, stream-based messaging bus. Larger instances could probably also benefit from static caching and CDN techniques to keep pages loading quickly even while the back end thrashes.
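As a sketch of what that bus might look like (using Redis Streams purely as an example transport; nothing in Lemmy works this way today), the federation worker would publish events and downstream workers would consume them independently, so a slow consumer no longer blocks the web front end:

```python
import redis  # pip install redis; Redis Streams stand in for "the bus"

r = redis.Redis()

# Producer side: the federation worker publishes an event instead of
# calling the subscription or notification code directly.
r.xadd("events", {"type": "new_comment", "comment_id": "12345"})

# Consumer side: an independent worker (separate process or host)
# reads from the stream at its own pace.
for stream, messages in r.xread({"events": "0"}, count=10, block=1000):
    for message_id, fields in messages:
        print(stream, message_id, fields)
```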
The structure of the ecosystem needs to strike a balance between fewer large instances and many, many small instances. In the first scenario, the scaling limit is in the monolithic stack, which introduces I/O bottlenecks and serialization delays (even if massively threaded). In the latter scenario, message state and synchronous distribution become challenging, because a full mesh of federations could scale faster than network state tables have room to support. Some middle tier might be needed, and I have no idea what that might even look like.
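The full-mesh concern is easy to put numbers on: pairwise links grow as n*(n-1)/2, so they explode quadratically with instance count. A quick back-of-envelope:

```python
# Pairwise federation links grow quadratically with instance count.
for n in (100, 1_000, 10_000, 100_000):
    links = n * (n - 1) // 2
    print(f"{n:>7,} instances -> {links:>13,} potential links")
```

At 100 instances that’s about 5,000 links; at 100,000 instances it’s roughly 5 billion.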
So, to answer your question: can it scale indefinitely? Probably not, because we hit scaling limits pretty quickly on a number of dimensions. Nevertheless, smart people are starting to hang out here, and I expect they will take an interest in how it all works. Improvement is inevitable, and I think the early roadblocks will be overcome easily enough.
Edit to add: I’m a systems engineer in my day job but I work adjacent to the applications teams. The preceding commentary is just (un-)educated guesswork on my part.
There’s nothing wrong with a monolith. Microservices are not inherently more scalable. Their advantage is around scaling teams. If anything, a monolith can be more performant, as in-process calls are much faster than network calls.
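A quick self-contained demonstration of that gap (a throwaway localhost HTTP server stands in for “a microservice”; real services add serialization and real network latency on top, so this understates the difference):

```python
import http.server
import threading
import time
import urllib.request

def work(x: int) -> int:
    return x + 1

# Same trivial "work", once as an in-process call, once behind an HTTP hop.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"2")
    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 1_000
t0 = time.perf_counter()
for _ in range(N):
    work(1)
in_process = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
over_network = time.perf_counter() - t0

print(f"in-process:    {in_process / N * 1e9:.0f} ns/call")
print(f"loopback HTTP: {over_network / N * 1e6:.0f} µs/call")
```

Nanoseconds versus microseconds, and that’s with no real network in between.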
There can be better efficiencies from disaggregating the full stack into microservices and making IPC calls among scalable workers, versus strictly service-per-server models, which, yes, incur scaling issues from network I/O wait. Modern network operating systems do this, which allows more heavily loaded processes greater access to resources while lightly loaded processes are deferred.
I’m not sure what you mean by a “network operating system”, but monoliths are inherently just as scalable as services.
Imagine you have a service architecture, and you are running 2 of service A, 4 of service B, and 8 of service C.
Alternatively, you could be running a monolith on 14 nodes. Most of the work those 14 nodes do would have been covered by service C; it’s just spread out in a different way.
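In other words, it’s the same 14 nodes’ worth of capacity, just sliced differently. A trivial sketch of the accounting:

```python
# Same total capacity, sliced two ways: per-node in the service model,
# per-request in the monolith model.
service_nodes = {"A": 2, "B": 4, "C": 8}
total = sum(service_nodes.values())  # 14 nodes either way

for svc, n in service_nodes.items():
    share = n / total
    print(f"each monolith node spends ~{share:.0%} of its time on {svc}-type work")
```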
I’m talking about Cisco IOS-XR, Juniper JunOS, Arista EOS, and others.
Those operating systems are disaggregated, meaning different features can be restarted, replicated, scaled out horizontally, or upgraded without disturbing the other components at runtime.
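A toy illustration of the disaggregation idea (this is not how IOS-XR or JunOS are actually implemented, just the general shape): each feature runs as its own process, so one can be bounced or upgraded while the other keeps running.

```python
import multiprocessing
import time

def routing_feature():
    while True:
        time.sleep(1)  # pretend to compute routes

def telemetry_feature():
    while True:
        time.sleep(1)  # pretend to stream counters

if __name__ == "__main__":
    routing = multiprocessing.Process(target=routing_feature)
    telemetry = multiprocessing.Process(target=telemetry_feature)
    routing.start()
    telemetry.start()

    # "Upgrade" telemetry: kill and relaunch it; routing never notices.
    telemetry.terminate()
    telemetry.join()
    telemetry = multiprocessing.Process(target=telemetry_feature)
    telemetry.start()
```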
Maybe we’re getting at the same point from opposite ends. I’m not a traditional software engineer, but I have had academic and professional training on these topics.