piggy [they/them]

  • 14 Posts
  • 230 Comments
Joined 2 months ago
Cake day: January 22nd, 2025





  • I’m not entirely sure what you mean tbh. Like if something changes in a library you linked against? I guess you would have to rebuild it, but you would have to rebuild a shared library too and place it into the system. Actually, you don’t necessarily have to rebuild anything, you can just relink it if you still have object files around (OpenBSD does this to relink the kernel into a random order on every boot), just swapping in a different object file for what you changed.

    Okay, let’s say I am writing MyReallyCoolLibrary V1. I have a myReallyCoolFunction(). You want to use myReallyCoolFunction in your code. Regardless of whether your system works on API or ABI symbols, a symbol is a universal address for a specific piece of functionality. So when my library is compiled it produces an S_myReallyCoolFunction, and when your app is compiled it emits a call S_myReallyCoolFunction, and this symbol needs to be resolved from somewhere.

    So static linking is when the app is compiled with S_myReallyCoolFunction inside of it, so when it sees call S_myReallyCoolFunction it finds S_myReallyCoolFunction in the app’s own data. Dynamic linking is when it resolves call S_myReallyCoolFunction from a library that is a separate file on your machine. Plan 9 uses static linking.
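
    Here’s a minimal sketch of that in C (the file names and the V1 behavior are made up for illustration):

    ```c
    /* mylib.c -- MyReallyCoolLibrary V1 (hypothetical) */
    int myReallyCoolFunction(int x) {
        return x + 1;   /* the V1 behavior callers come to depend on */
    }

    /* app.c -- your app */
    int myReallyCoolFunction(int x);   /* normally pulled in from mylib.h */

    int main(void) {
        /* The compiler emits a call to the unresolved symbol
         * myReallyCoolFunction; the linker decides where it gets resolved. */
        return myReallyCoolFunction(41);
    }
    ```

    With static linking the symbol is resolved at build time and the machine code is copied into the app binary (roughly cc app.c libmylib.a); with dynamic linking the app binary only records an undefined reference and the loader resolves it from libmylib.so at startup.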

    So let’s talk about what this means for “code portability”. Let’s say I’ve made MyReallyCoolLibrary V1 and now have to change a few things for V2; here are the alternate universes that can happen:

    • I don’t change myReallyCoolFunction
    • I change myReallyCoolFunction but I do not change its behavior, I simply refactor the code to be more readable.
    • I change myReallyCoolFunction and I change its behavior.
    • I change myReallyCoolFunction and change its interface.
    • I remove myReallyCoolFunction.

    So let’s work out what this should mean for encoding a symbol in each case.

    • myReallyCoolFunction from V2 can stay declared as S_myReallyCoolFunction
    • myReallyCoolFunction from V2 can stay declared as S_myReallyCoolFunction
    • myReallyCoolFunction from V2 has to be declared as S_myReallyCoolFunctionNew
    • myReallyCoolFunction from V2 has to be declared as S_myReallyCoolFunctionNew
    • I technically no longer have an S_myReallyCoolFunction

    Now these are the practical consequences for your code:

    • none, everything stays the same and code written to V1 can use V2.
    • none, everything stays the same and code written to V1 can use V2.
    • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2.
    • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2 (sketched below).
    • app refactor - everything written for V1 has to change to use V2. The app may no longer be able to work with V2.
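
    To make the interface-change case concrete, here’s a hypothetical sketch of what V1-era calling code runs into when V2 changes the signature:

    ```c
    /* MyReallyCoolLibrary as shipped in V1 (hypothetical): */
    int myReallyCoolFunction(int x) { return x + 1; }

    /* As shipped in V2, after the interface change (hypothetical):
     *     int myReallyCoolFunction(int x, int flags);
     */

    /* An app written against V1: */
    int main(void) {
        return myReallyCoolFunction(41);   /* builds against V1, but against V2
                                            * the call site itself has to be
                                            * rewritten -- the "app refactor"
                                            * case above */
    }
    ```

    Note that in plain C the symbol name doesn’t encode the signature at all, which is exactly why a breaking change has to surface as a new symbol (or a symbol version) if things built against V1 are to keep working.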

    So to make code truly portable I must remove the app-refactor cases. I have two ways of doing that.

    1. Version resolution inside the system, most likely by managing library paths.
    2. V2 must include all symbols from V1.

    With #1 you have the problem everyone complains about today.

    With #2 you essentially carry forward all work ever done. Every mistake, every refactor, every public API that’s ever been written, and all of its behaviors, must be frozen in amber and reshipped in the next version.
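
    As a sketch of what option #2 looks like in practice, this is roughly GNU-style symbol versioning in C (the version names and behaviors here are made up; a matching linker version script defining LIBV1 and LIBV2 is also required):

    ```c
    /* mylib.c -- both generations of the function live in the library forever. */

    /* The V1 behavior, frozen in amber for anything linked against V1. */
    int myReallyCoolFunction_v1(int x) { return x + 1; }
    __asm__(".symver myReallyCoolFunction_v1,myReallyCoolFunction@LIBV1");

    /* The V2 behavior, which newly linked apps get by default (the @@). */
    int myReallyCoolFunction_v2(int x) { return x * 2; }
    __asm__(".symver myReallyCoolFunction_v2,myReallyCoolFunction@@LIBV2");
    ```

    This is how glibc keeps decades-old binaries running: every old behavior is carried forward as a versioned symbol, which is exactly the “frozen in amber” cost described above.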

    There is no magic here; it’s a simple but difficult-to-manage diagram.

    Plan 9 is a carefully tuned system ofc and I obviously have the Plan 9 brainworms but like…

    I agree that Plan 9 is really cool, but in practice Linux is the height of actively developed OS complexity that our society is able to build right now. Windows, in comparison, is ossifying, and OSX is much simpler.

    I’ve never written any programs that were subject to such strict verification tbh. I had to look up what “DSL” means lol, Wikipedia says “definitive software library”.

    DSL in this case means Domain-Specific Language.

    I rly think it’s not such a problem most of the time, code changes all the time and people update it, as they should imo,

    But here’s the problem with this statement: it unravels your definition of “code portability”. The whole point of “code portability” is that I don’t have to update my code. So I’m kind of confused about what we’re arguing: if it’s not Flatpak-style portability and it’s not code portability, what are we specifically talking about?

    And that formal verification can only get you as far as verifying there are no bugs but it can’t force you to write good systems or specifications and can’t help you if there are things like cosmic rays striking your processor ofc hehe

    Formal verification can only reify the fact that you need something called Foo and I can provide it. The more formal it is, the more accurate we can make the description of what Foo is and the more accurately I can provide something that matches it. But I can’t make it so that your Foo is actually a Bar because you meant a Bar but didn’t know you needed a Bar. We can match shapes to holes, but we cannot imbue the shapes or the holes with meaning and match on that. We can only match geometrically, that is to say (discrete) mathematically.
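
    A trivial C illustration of the shapes-versus-holes point (the names here are made up):

    ```c
    /* Both functions have exactly the same "shape" -- int (*)(int) -- so any
     * checker that matches on interfaces alone will accept either one wherever
     * a "Foo" is required. Only the humans involved know which meaning was
     * actually wanted. */
    int foo_absolute(int x) { return x < 0 ? -x : x; }
    int bar_negate(int x)   { return -x; }

    typedef int (*foo_like)(int);

    int main(void) {
        foo_like f = bar_negate;   /* type-checks fine even if you meant foo_absolute */
        return f(-5);
    }
    ```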

    I agree, this isn’t just a technological problem to me but also a social one. Like ideally I would love to see way more money or resources for computer systems research and state-sponsored computer systems. Tbh I feel like most of the reason ppl focus so much on unchanging software, ABIs, APIs, instruction sets, operating systems, etc is cuz capitalists use them to make products, and them never changing and just being updated forever is labor reducing lol. When software is designed badly or the world has changed and software no longer suits the world we live in (many such cases), we (the community of computer-touchers lol) should be able to change it. Ofc there will be a transition process for anything and this is quite vague but yeh

    I generally agree with this sentiment, but I think the capitalist thing defeating better computing standards, tooling, and functionality is the commodity form. The commodity form and its practical uses don’t care about our nerd shit. The commodity form barely cares to fulfill the functional need its reified form (e.g. an apple) provides. That is to say, the commodity form doesn’t care if you make Shitty Apples or Good Apples as long as you can sell Apples. That applies to software, and as software grows more complex, capitalism tends to produce shitty software simply because the purpose of the commodity form is to facilitate trade, not to be correct, reliable, or of any quality.




  • You just link against the symbols you use though :/ Lemme go statically link some GTK thing I have lying around and see what the binary size is cuz the entire GTK/GLib/GNOME thing is one of the worst examples of massive overcomplication on modern Unix lol

    If you link against symbols you are not creating something portable. In order for it to be portable, the lib can never change its symbols. That’s a constraint you can practically only work with if you have low code movement and you control the whole system (see below for another way, but it’s more complex rather than less).

    Also I’m not a brother :|

    My bad. I apologize. I am being inconsiderate in my haste to reply.

    It was less complex cuz they made it that way though, we can too. FlatPaks are like the worst example too cuz they’re like dynamically linked things that bring along all the libraries they need to use anyway (unless they started keeping track of those?) so you get the worst of both static and dynamic linking. I just don’t use them lol

    But there’s no other realistic way.

    You mean portable like being able to copy binaries between systems? Cuz back in the 90s you would usually just build whatever it was from source if it wasn’t in your OS, or buy a CD or smth from a vendor for your specific setup. Portable to me just means that programs can be built from source and run on other operating systems and isn’t too closely attached to wherever it was first created. Being able to copy binaries between systems isn’t something worth pursuing imo (breaking userspace is actually cool and good :3, that stable ABI shit has meant Linux keeps around so much ancient legacy code or gets stuck with badddd APIs for the rest of time or until someone writes some awful emulation layer lol)

    That’s a completely different usage of “portable” and is basically a non-problem in the modern era, as long as you are within the same-ish compatibility time frame (and see my response to the symbols point).

    It’s entirely impossible to do this across a distributed ecosystem over the long term. You need symbol migrations so that if I compile code from 1995 it can upgrade to the correct representation in modern symbols. I’ve built such dependency management systems for making evergreen data in DSLs. Mistakes, deprecations, and essentially everything you have ever written have to be permanent; it’s not a simple way to program. It can only be realized in tightly and directly controlled environments like Plan 9, or if you’re the architect of an org.
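
    As a rough sketch of what one such migration looks like in C (all names here are hypothetical), the old entry point survives only as a thin adapter over the modern representation:

    ```c
    /* The modern interface and implementation (hypothetical). */
    int myReallyCoolFunction(int x, int flags) {
        return flags ? x * 2 : x + 1;
    }

    /* The 1995-era entry point, kept alive purely as a migration shim: code
     * written against the old symbol keeps resolving and gets forwarded to the
     * modern representation with defaults filled in. Now multiply this by every
     * symbol ever shipped, forever. */
    int my_cool_fn_1995(int x) {
        return myReallyCoolFunction(x, 0);
    }
    ```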

    Dependency management is an organizational problem that is complex, temporal, and intricate. You cannot “technology” your way out of the need to manage the essential complexity here.


  • I agree about static linking but… 100mb of code is absolutely massive, do Rust binaries actually get that large?? Idk how you do that even, must be wild amounts of automatically generated object oriented shit lol

    My brother in Christ, if you have to put every lib in the stack into a GUI executable you’re gonna have 100mb of libs regardless of what system you’re using.

    Also, Plan 9 did without dynamic linking in the 90s. They actually found their approach was smaller in a lot of cases than having dynamic libraries around: https://groups.google.com/g/comp.os.plan9/c/0H3pPRIgw58/m/J3NhLtgRRsYJ

    Plan 9 was a centrally managed system without the speed of development of a modern OS. Yes, they did it better, because it was less complex to manage. Plan 9 doesn’t have to cope with the fact that the Flatpak for your app needs lib features that don’t come with your distro.

    Also wdym by this? Ppl have been writing portable programs for Unix since before we even had POSIX

    It was literally not practical to have every app be portable because of space constraints.







  • piggy [they/them]@hexbear.net to hexbear@hexbear.net · we fucked up

    This is pretty easy to work around:

    1. Host a core file on hexbear.net itself in a magical secret directory and turn off directory access.
    2. When creating the database there’s a screen that asks “How long do you want to wait to decrypt?”; set that to the maximum.
    3. Make a really long password that’s easy to remember, for example a stanza from a song.
    4. Add a keyfile to distribute only to admins.

    It’s hard to collect all this data.

    Even if you find the database, you won’t crack it in this lifetime.

    Even if you find the database and know the password, you still need the keyfile.

    Even if you find the database and have the keyfile, you still need the password.

    Ideally this data shouldn’t change; in practice, try to find hosts like AWS that allow you to set up orgs and link accounts, and only hold the “root account” details in the database.



  • Depends on how much the other instance owners are willing to go to bat for us, and on their hosting setups. If Grad can technically do it I can see them doing it, maybe same with ML. But this is a stopgap thing.

    In practice once we have a “final” domain, we will essentially have to refederate under that one.



  • piggy [they/them]@hexbear.net to hexbear@hexbear.net · we fucked up

    So hexbear.club is available; you can just s/hexbear.net/hexbear.club/g in the Lemmy setup for the federation shit. Annoying, I’m sure, but not the end of the world.

    In practice, what I want to suggest to you guys is that when you’re rebuilding the hosting accounts/stack you use either something OSS like KeePassXC or a service like 1Password (which may be easier to admin versus playing around with multiple vaults/access levels for KeePass), so you can manage access to the various sites you need to keep the service up.




  • piggy [they/them]@hexbear.net to hexbear@hexbear.net · we fucked up

    True Hexbear Fedayeen have hexbear hard-coded in their hosts file and are currently enjoying their beanis.

    On OSX/Linux just add 37.187.73.130 hexbear.net to the bottom of /etc/hosts and you’ll get your beanis back.

    On Windows it’s at C:\Windows\System32\drivers\etc\hosts

    On phones it’s much harder to describe than a single line, so all your beanis are lost.