piggy [they/them]

  • 1 Post
  • 39 Comments
Joined 5 days ago
Cake day: January 22nd, 2025

  • As a side note, “putting things in boxes” is the very thing that upholds bourgeois democracies and national boundaries as well.

    I mean, with an argument this raw you might as well argue for Lysenkoism, because unlike Darwinian/Mendelian selection it doesn’t “put things in boxes”. In practice things are put in boxes all the time; it’s how most systems work. The reality is that as communists we need to mediate the negative effects of the fact that things are in boxes, not ignore the reality that things are in boxes.

    The failure of capitalism is that its systems of meaning-making converge on the arbitrage of things in boxes. At the end of the day this is actually the most difficult part of building communism; the Soviet Union throughout its history still fell ill with the “things in boxes” disease. It’s how you get addicted to slave labor, it’s how you make political missteps because it’s so easy to put people in a “kulak” box that doesn’t even mean anything anymore, and it’s how you start disagreements with other communist nations because you really insist that they should put certain things into a certain box.


  • Geopolitical power comes mainly from three things: resources, technology, and control over your “excess” population (i.e. the people who do the “worst” work). Historically, borders have been an effective means of more-or-less controlling all three.

    Controlling your own borders is really child’s play; controlling other people’s borders is where the fun really starts. Sykes-Picot, for example, ensured that the Middle East would fight over resources (water, arable land) and over who the “excess” population should be, by drawing borders in creative ways that prevented the reformation of the Ottoman Empire after its defeat.


  • It doesn’t work in the average case. I’ve seen this tactic at the company I work for and at multiple companies where I have contacts. Bosses think they can simply use “AI” to fix their hollowed-out documentation, onboarding, and employee-education systems by pushing a bunch of half-correct, barely legible “documentation” through an LLM.

    It just spits out garbage for 90% of the people doing this. It’s a garbage-in, garbage-out process. For it to even be useful you need a specific kind of LLM setup (a RAG, i.e. retrieval-augmented generation) and your documentation has to be high quality.

    Here’s an example project: https://github.com/snexus/llm-search

    The demo works well because it uses a well-documented open source library. Even then, there’s no guarantee it won’t hallucinate or get mixed up. A RAG works simply by priming the generator with “context” related to your query; if your model weights are strong enough, your context won’t outweigh the allure of statistical hallucination.
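    To make the “priming with context” part concrete, here’s a minimal sketch of the RAG idea. This is not how llm-search is implemented, just the general shape: the word-overlap scoring, the doc snippets, and the build_prompt helper are made-up stand-ins for a real embedding model, real documentation, and a real generator call.

    ```python
    # Minimal RAG sketch: score documentation chunks against the query, then
    # stuff the best matches into the prompt as "context" for the generator.
    # The scoring below is a toy word-overlap stand-in for a real embedding
    # model; the actual LLM call that consumes the prompt is not shown.

    def score(query: str, chunk: str) -> int:
        # Toy relevance score: count shared words. A real pipeline would use
        # vector embeddings and cosine similarity instead.
        return len(set(query.lower().split()) & set(chunk.lower().split()))

    def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
        # Pick the k chunks that look most related to the query.
        return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

    def build_prompt(query: str, chunks: list[str]) -> str:
        # The whole trick: the model only "knows" your docs because the
        # relevant pieces got pasted into the prompt right before the question.
        context = "\n\n".join(retrieve(query, chunks))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = [
        "To reset a field device, hold the power button for 10 seconds.",
        "Expense reports are due on the last Friday of the month.",
        "The VPN client must be restarted after every password change.",
    ]

    print(build_prompt("How do I reset a device in the field?", docs))
    ```

    If the retrieved chunks are half-correct garbage, the context is garbage, and no amount of prompt plumbing fixes that.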




  • Haha… boy do I have stories… I worked at a terribly evil company (aren’t they all, but this one was a bit egregious).

    The CEO was an absolute moron whose only skills were being a contracts guy and a money-raising guy. We had an internal app that employees used to do their work in the field. He was adamant about getting it into the App Store after he took some meeting with another moron. We kept telling him there was no point, and that it was a ton of work because we’d have to bring the app up to Apple’s standards. He wouldn’t take no for an answer. So we allocated the resources to go ahead, and some other projects got pushed way back for this.

    A month goes by, we have another meeting, and he asks why X isn’t done. We told him we had to deprioritize X to get the app into the App Store. He says, well, who decided that? We tell him that he did. You know how a normal person would be a bit ashamed of this, right? Well guess what, he just had a little tantrum and still blamed everyone else but himself.

    The same guy fired a dude (VP level) because his nepo hire had it out for him. That dude had documented all his work out in the open, and when that section of the business collapsed a day later they had to hire him back as a contractor. The CEO still didn’t trust him, kept trusting his nepo hire, and never saw that his own decision-making was the inefficiency.

    When I retire I swear to god I’m going to write “this is how capitalism actually works” books about my experiences working with these people.




  • I’m confident a lot of startups will spring out of the ground that will be developing DeepSeek wrappers and offering the same service as your OpenAIs

    This is true. But I don’t think OpenAI is even cornering the tech market, really. The company I work for makes a lot of content for various things; a lot of our engineers are tech fetishists and a lot of our executives are IP-protection obsessives. We are banned from using publicly available AI offerings. We don’t contract with OpenAI, but we do contract with Maia for creating models (because their offering specifically addresses the “steal your IP” problem). So OpenAI itself is not actually in many of these spaces.

    But yeah, your average chat-girlfriend startup is going to remove the ChatGPT albatross from its neck, given its engineers/founders are just headlines guys. A lot of this ecosystem is really the “Uber but for <MY THING HERE>” crowd.
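    For what it’s worth, “removing the albatross” is barely even engineering work for these wrapper shops. Here’s a rough sketch of what the swap can look like, assuming the replacement provider exposes an OpenAI-compatible endpoint (DeepSeek advertises one); the make_client helper, API keys, and model choices are made up for illustration.

    ```python
    # Rough sketch of a provider swap in a "wrapper" product, assuming an
    # OpenAI-compatible endpoint on the other side. Keys are placeholders.
    from openai import OpenAI

    def make_client(provider: str) -> tuple[OpenAI, str]:
        if provider == "openai":
            return OpenAI(api_key="YOUR_OPENAI_KEY"), "gpt-4o-mini"
        # Same client library, different base URL; the rest of the app never
        # notices the difference.
        return (
            OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com"),
            "deepseek-chat",
        )

    client, model = make_client("deepseek")
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say something nice to the user."}],
    )
    print(reply.choices[0].message.content)
    ```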


  • I agree with the majority of your comment.

    no one is gonna pay thousands of dollars for a corporate LLM that’s only 10% better than the free one.

    This is simply not true of how businesses actually work. It certainly limits your customer base organically, but there are plenty of businesses that, in “tech terms”, overpay for things that are even free, because of things like liability and corruption. Enterprise sales is completely perverse in its logic and economics. In fact, most open source giants (e.g. Red Hat) exist because corps do in fact overpay for free things for various reasons.



  • So LLMs, the “AI” everyone is typically talking about, are really good at one statistical thing:

    “CLASSIFYING”

    What is “CLASSIFYING”, you ask? It’s basically taking data and putting it into specific boxes. If you wanted to classify all the dogs, for example, you could classify them by breed. LLMs are really good at classifying, better than anything we’ve ever made, and they adapt very well to new scenarios and create emergent classifications of the data fed to them.

    However, they are not good at much of anything else. The “generation” these LLMs do is built on the classifier and the model, which generates responses based on what the statistically most likely next word is. So, for example, it’s entirely possible that if you fed an LLM the entirety of Shakespeare and only Shakespeare and gave it “Two households, both alike” as a prompt, it might practically spit out the rest of Romeo and Juliet.
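    A toy way to see “generation is just classification run in a loop” (this is a made-up word-counting example, nowhere near how a real transformer works, but the shape of the loop is the same):

    ```python
    # Toy "language model": given the last two words, a counter classifies
    # what the next word should be, and decoding just keeps picking the top
    # class. Trained on one line of Shakespeare, it can only parrot it back.
    from collections import Counter, defaultdict

    corpus = ("two households both alike in dignity "
              "in fair verona where we lay our scene").split()

    # "Training": count which word follows each pair of words.
    next_word = defaultdict(Counter)
    for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
        next_word[(a, b)][c] += 1

    def generate(prompt: str, steps: int = 12) -> str:
        words = prompt.lower().split()
        for _ in range(steps):
            candidates = next_word.get((words[-2], words[-1]))
            if not candidates:
                break
            # The classification step: pick the statistically likeliest next word.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(generate("two households"))
    # -> "two households both alike in dignity in fair verona where we lay our scene"
    ```

    Nothing in there knows what a household or Verona is; it only knows which box the next word statistically belongs in.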

    Which means AIs are not good at the following:

    • discerning truth from fiction
    • following technical processes (like counting the r’s in “strawberry”)
    • having “human-like” understandings of the connections between concepts (think of the “is soup a salad” type memes)

    So… is what I said above really just how AI is being used in the US, and is that the reason for the huge bubble in the asset values of companies like Nvidia and Microsoft?

    Don’t get me wrong, yes, this is a solution in search of a problem. But the real reason there is a bubble in the US for these things is that companies are making that bubble on purpose. The reason isn’t even rooted in any economic reality; the reason is rooted in protectionism. If it takes a small lake of water and 10 data centers to run ChatGPT, it’s unlikely you will lose your competitive edge, because you are misleading your competition. If every year you need more and more compute to run the models, it concentrates who can run them and who ultimately has control of them. This is what the market has been doing for about 3 years now. This is what DeepSeek has undone.

    The similarities to the Bitcoin and crypto bubbles are very obvious, in the sense that the mining network is controlled by whoever has the most compute. Ethereum specifically decided to cut out the “middle man” of who owns compute and basically says whoever pays the most into the network’s central bank controls the network.

    This is what “tech as assets” means practically: inflate your asset as much as possible regardless of its technical usefulness.



  • It’s a capitalist solution to a capitalist problem: the problem of accessing the wholesale pipeline. It’s obviously not just this anymore, but that’s what the membership is for. You’re buying at the “convenience store” level without the volume, and that’s what the membership makes up for. Using Costco as your distributor is still a thing in rural areas, especially in Washington. Last time I was near Mt. Rainier there was a convenience store in town that was clearly breaking up Costco packs, but they were like 1 of 3 places you could actually buy stuff.


  • The cool thing is that if you believe in Marxism there never was room for change in that system because the base (what we can possibly make and who decides what to make) determines the superstructure (culture).

    You’re simply complaining about the change in strategy of selling cultural commodities, something that you never had a say in.



  • Yeah, I love having to watch shows that basically called me the f-slur, because otherwise I would have no friends.

    This is such a nostalgia-level argument. Monoculture was supremely fucked up in the hierarchies it allowed people to form in schools and workplaces. If you think racism, homophobia, and transphobia are bad now, you aren’t ready for how they actually were in monoculture times. Not only that, monoculture was in reality often anti-poor/assimilationist, because monoculture is literally consumerism. So if you were poor (or an immigrant) and your parents couldn’t afford the cool new JNCO jeans or to take you to see a Star War (or didn’t understand the culture and why you needed these things), then you were fucked socially twice over: once for being poor (or “weird”) and once for not watching the latest thing.

    Videogames weren’t monoculture until, like, the Xbox 360. You were considered a massive dork for being a gamer for most of my childhood.

    Also, god forbid you had different taste than other people and had opinions. When the first Iron Man came out, halfway through the movie in the theater I felt like I was at a Nuremberg rally. Y’all don’t remember everyone pretending bad movies were good. The argument that there was “common ground to fight over” isn’t real. Monoculture meant that the battles were settled before they started; everyone looked at you like a redditor when you started doing anything close to AKSHUALLY LUKE SKYWALKER AND THE REBELS ARE THE VIETKONG. All these nostalgia media-analysis things feel cute and trite now, but socially they weren’t really welcome; they only became exposed to normie circles because the monoculture was dissolving.