• 2 Posts
  • 20 Comments
Joined 5 months ago
Cake day: June 23rd, 2024

  • The Linux kernel actually uses quite a few OOP ideas. You have modules that are supposed to have a clear interface with the rest of the world, and they (ab)use structs to basically work like objects. If you try hard enough, you can even do “inheritance” with them, like with their struct kobject. It is actually somewhat well thought out, imo. No need to go full OOP, just pick some of the good parts, and avoid the MappingModelFactoryServiceImpl hell or the madness that is C++.
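    To illustrate the embedding trick mentioned above: the kernel embeds a “base” struct inside a “derived” one and uses its container_of() macro to get back from the inner object to the outer one. This is a minimal userspace sketch, re-implementing the macro locally; the struct names are illustrative, not the kernel’s actual definitions.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Local re-implementation of the kernel's container_of():
     * recover a pointer to the outer struct from a pointer to a member. */
    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct kobj {                /* stand-in for struct kobject, the "base class" */
        const char *name;
    };

    struct my_device {           /* "derived class" embedding the base */
        int id;
        struct kobj kobj;        /* base object embedded by value, not pointed to */
    };

    /* Generic code that only knows about struct kobj can still
     * reach the enclosing device via container_of(). */
    static int device_id_of(struct kobj *k)
    {
        struct my_device *dev = container_of(k, struct my_device, kobj);
        return dev->id;
    }

    int main(void)
    {
        struct my_device dev = { .id = 42, .kobj = { .name = "dev0" } };
        struct kobj *base = &dev.kobj;    /* "upcast": pointer to the embedded base */

        assert(device_id_of(base) == 42); /* "downcast" back via container_of */
        printf("%s has id %d\n", base->name, device_id_of(base));
        return 0;
    }
    ```

    The key design point is that the base object lives inside the derived struct at a known offset, so the “downcast” is just pointer arithmetic with no runtime type information.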










  • LH0ezVT@sh.itjust.works OP · to Science Memes@mander.xyz · Based on a true story · edited 2 months ago

    It’s been a few years, but I’ll try to remember.

    Usually (*), your CPU can address pages (chunks of memory that are assigned to a program) in 4KiB steps. So when it does memory management (shuffle memory pages around, delete them, compress them, swap them to disk…), it does so in chunks of 4KiB. Now, let’s say you have a GPU that needs to store data in memory and sometimes exchange it with the CPU. But the designers knew it would almost always use huge textures, so they simplified their design and made it only able to access memory in 2MiB chunks. Now each time the CPU manages a chunk of memory for the GPU, it needs to take care that the chunk always lands on a multiple of 2MiB.

    If you take fragmentation into account, this leads to all kinds of funny issues. You can get gaps in your memory, because you need to “skip ahead” to the next 2MiB border, or you have a free memory area that is large enough, but does not align to 2MiB…
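    The “skip ahead to the next 2MiB border” step can be sketched in a few lines. For a power-of-two alignment, rounding up is a classic bit trick; the names and addresses here are made up for illustration, not from any real allocator.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define GPU_ALIGN ((uint64_t)2 * 1024 * 1024)   /* assumed 2 MiB requirement */

    /* Round addr up to the next multiple of align.
     * align must be a power of two for the bitmask trick to work. */
    static uint64_t align_up(uint64_t addr, uint64_t align)
    {
        return (addr + align - 1) & ~(align - 1);
    }

    int main(void)
    {
        /* A free region starting at some awkward (but 4KiB-aligned) address... */
        uint64_t free_start = 0x253000;
        uint64_t usable = align_up(free_start, GPU_ALIGN);

        assert(usable % GPU_ALIGN == 0);
        /* Everything between free_start and usable is wasted: that's the gap. */
        printf("gap of %llu KiB before the 2MiB border\n",
               (unsigned long long)(usable - free_start) / 1024);
        return 0;
    }
    ```

    With a worst-case starting address, the wasted gap can approach the full 2MiB alignment, which is why fragmentation plus alignment requirements hurts so much.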

    And it gets even funnier if you have several different devices that have several different alignment requirements. Just one of those neat real-life quirks that can make your nice, clean, theoretical results invalid.

    (*): and then there are huge pages, but that is a different can of worms


  • No, not really. This is from the perspective of a developer/engineer, not an end user. I spent 6 months trying to make $product from $company both cheaper and more robust.

    In car terms, you don’t have to optimize or even be aware of the injection timings just to drive your car around.

    Æcktshually, Windows or any other OS would have similar issues, because the underlying computer science problems are probably impossible to solve optimally in practice.


  • LH0ezVT@sh.itjust.works OP · to Science Memes@mander.xyz · Based on a true story · edited 2 months ago

    Get a nice cup of tea and calm down. I literally never said or implied any of that. Why do you feel that you need to personally attack me in particular?

    All I said was that a supposedly easy topic turned into reading a lot of obscure code and papers which weren’t really my field at the time.

    For the record, I am well aware that the state of embedded system security is an absolute joke and I’m waiting for the day when it all finally halts and catches fire.

    But that was just not the topic of this work. My work was efficient memory management under a lot of (specific) constraints, not memory safety.

    Also, the root problem is NP-hard, so good luck finding a universal solution that works within real-life resource (chip space, power, price…) limits.







  • Around 10k years before us, we developed from hunter-gatherer cavemen to neolithic city builders with irrigated farms, organized religion and a feudal society in like 1000 years. That is also pretty quick. Sure, pyramids took a bit longer. But while pyramids are pretty damn impressive, having no pyramids does not mean an “uncivilized” society.





  • You are literally describing the idea of Debian. Yes, stable is old, but that is the whole purpose. You get (mostly) only security updates for a few years. No big updates, no surprises. Great for stuff like company PCs, servers, and other systems you want to just work™ with minimal admin work.

    And testing is, well, for testing. Ironing out bugs and preparing the next stable. Although what you describe sounds more like unstable, the one where they explicitly say that they will break stuff to try out other stuff.

    So, everything works as intended and advertised here. If you want a different approach to stability, I guess you will have to use a different distro, sorry.

    I guess when you last tried it, it was at a time when a new stable had just come out, so testing was more or less equal to stable.

    About Firefox: Debian ships Firefox ESR these days, meaning you get an older, less frequently updated but well-tested Firefox (with security updates, of course). Again, this is the whole point. Fewer updates, less admin work, more time to find and fix bugs. Remember the whole Quantum add-on mess, for example?

    As others have said, you can install other versions of Firefox (like the “normal” one) via flatpak, snap… nowadays. The same goes for other software where you need the newest and shiniest version sooner. I’m using Debian on my work/uni laptop and a bunch of servers, and it works pretty well for me.