• 0 Posts
  • 1.41K Comments
Joined 1 year ago
Cake day: July 7th, 2023




  • A single billion dollars - simply invested in GICs and bonds, earning a very, very conservative 1% interest - would earn you ten million dollars a year in interest alone.

    I would challenge you to even come up with a reasonable way to spend ten million dollars a year. By my back-of-the-napkin math you could vacation every single day, living in hotels and eating at fancy restaurants, and still not make a dent in that.

    Musk has an estimated net worth of $247 billion. You could fine him 99% of his current wealth, and he would still struggle to spend enough that he wouldn’t end up increasing his remaining wealth every year.
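    The back-of-the-napkin math above is easy to verify. A quick sketch (the 1% rate and the $247B figure come from the comment; everything is in whole dollars for simplicity):

```python
# Interest on one billion dollars at a very conservative 1% per year.
billion = 1_000_000_000
annual_interest = billion // 100          # 1% of $1B
print(annual_interest)                    # 10000000 -> $10M per year

# Fine Musk 99% of an estimated $247B net worth:
musk_net_worth = 247_000_000_000
remaining = musk_net_worth // 100         # the 1% he keeps: $2.47B
print(remaining // 100)                   # 24700000 -> still ~$24.7M/year at 1%
```

    Even after a 99% fine, the remaining 1% still throws off more interest every year than most people earn in a lifetime.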


  • In this particular case, I’m really not sure it’s a loophole.

    Antitrust laws exist to constrain companies so large and powerful that they have become, or are becoming, monopolistic forces.

    What Twitter successfully proved to the EU court is that Musk’s management of the company has been so spectacularly incompetent that Twitter/X no longer has enough reach or cultural relevance to be in any danger of being a monopoly.

    This is, objectively speaking, a serious L for Twitter. They just proved to a court that they’re no longer even close to being the best place to spend your advertising dollars. The major spenders will take note.





  • This is a long post and I’m not even going to try to address all of it, but I want to call out one point in particular: the idea that if we somehow made a quantum leap from the current generation of models to AGI (there is, for the record, zero evidence of any path to that happening), it would magically hand us the solutions to anthropogenic climate change.

    That is absolute nonsense. We know all the solutions to climate change. Very smart people have spent decades telling us what those solutions are. The problem is that those solutions ultimately boil down to “Stop fucking up the planet for the sake of a few rich people getting richer.” It’s not actually a complicated problem, from a technical perspective. The complications are entirely social and political. Solving climate change requires us to change how our global culture operates, and we lack the will to do that.

    Do you really think that if we created an AGI, and it told us to end capitalism in order to save the planet, that suddenly we’d drop all our objections and do it? Do you think that an AGI created by Google or Microsoft would even be capable of saying “Stop allowing your planet’s resources to be hoarded by a privileged few”?


  • Powered flight was an important goal, but that wouldn’t have justified throwing all the world’s resources at making Da Vinci’s flying machine work. Some ideas are just dead ends.

    Transformer-based generative models do not have any demonstrable path to becoming AGI, and we’re already hitting a hard ceiling of diminishing returns on the very limited set of things that they actually can do. Developing better versions of these models requires exponentially larger amounts of data, at exponentially scaling compute costs (yes, exponentially: to the point where current estimates are that there literally isn’t enough training data in the world to get past another generation or two of development on these things).

    Whether or not AGI is possible, it has become extremely apparent that this approach is not going to be the one that gets us there. So what is the benefit of continuing to pile more and more resources into it?
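    To make the “not enough training data” point concrete, here’s a purely illustrative toy model; the starting token count, per-generation multiplier, and total supply of usable text are all assumptions for the sake of the sketch, not measured figures:

```python
# Toy model: geometric growth in training-data requirements.
# All three constants below are illustrative assumptions.
current_tokens = 15e12        # assumed tokens to train a current frontier model
growth_per_generation = 10    # assumed data multiplier needed per generation
available_tokens = 500e12     # assumed total usable human-written text

generations = 0
needed = current_tokens
while needed <= available_tokens:
    generations += 1
    needed *= growth_per_generation
print(generations)            # 2 -> demand outruns supply within a couple generations
```

    Under any remotely similar assumptions, exponential data demands collide with a fixed supply of human-written text almost immediately.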


  • “it’s been incorporated into countless applications”

    I think the phrasing you were looking for there was “hastily bolted onto.” Was the world actually that desperate for tools to make bad summaries of data, and sometimes write short form emails for us? Does that really justify the billions upon billions of dollars that are being thrown at this technology?


  • 'You start out in 1954 by saying, “Nigger, nigger, nigger.” By 1968 you can’t say “nigger”—that hurts you, backfires. So you say stuff like, uh, forced busing, states’ rights, and all that stuff, and you’re getting so abstract. Now, you’re talking about cutting taxes, and all these things you’re talking about are totally economic things and a byproduct of them is, blacks get hurt worse than whites.… “We want to cut this,” is much more abstract than even the busing thing, uh, and a hell of a lot more abstract than “Nigger, nigger.” ’ - Republican strategist Lee Atwater.

    As you say, it’s never been possible to cleanly separate economics and social justice, as if there is somehow no moral dimension to how and where we choose to allocate our resources. Sometimes these things are straight up dogwhistles for more overtly prejudiced acts, and sometimes they reflect deeper and more subtle biases about the world. But there is always a moral dimension to everything we do.


  • Voroxpete@sh.itjust.works to Technology@lemmy.world · The most popular GenAI Tools · 10 days ago

    Maybe because we’re all getting really tired of industry propaganda designed to sell us on the “inevitability” of genAI when anyone who’s paying even a little attention can see that the only thing inevitable about this current genAI fad is it crashing and burning.

    (Even when content like this comes from a place of sincere interest, it becomes functionally indistinguishable from the industry propaganda, because the primary goal of the propagandists is to keep genAI in the public conversation, thus convincing their investors that it’s still the hottest thing around, and that they should keep shoveling money into it so that they don’t miss the boat.)

    OpenAI, the company behind that giant bubble in the middle there, loses two dollars and thirty-five cents for every dollar of revenue. Not profit. Revenue. Every interaction with ChatGPT costs them a ridiculous amount of money, and the percentage of users willing to actually pay for those interactions is unbelievably small. Their enterprise sales are even smaller. They are burning money at an absolutely staggering pace, and that’s with the deeply discounted rate they currently get on their compute costs.

    No one has proposed anything that will lower their backend costs to the point where this model is profitable, and even doubling prices (which is their current plan) will not make them profitable either. Literally not one person at OpenAI has put forth a concrete plan for the company to reach profitability. And that’s the biggest player in the game. If the most successful genAI company on the planet can’t figure out a way to actually make profit off this thing, it’s dead. Not just OpenAI; the whole idea.
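    As a sanity check on the “doubling prices won’t fix it” claim, here’s a toy unit-economics model; the $2.35-per-dollar figure is from the numbers above, and the simplifying assumption (costs scale with usage, not with price) is mine:

```python
# Toy model: losing $2.35 for every $1.00 of revenue
# implies roughly $3.35 of cost per dollar earned.
revenue = 1.00
loss_per_dollar = 2.35
cost = revenue + loss_per_dollar      # 3.35

# Double prices with usage (and therefore compute cost) unchanged:
doubled_revenue = 2 * revenue
margin = doubled_revenue - cost
print(round(margin, 2))               # -1.35 -> still underwater on every dollar
```

    And that’s the optimistic case, since doubling prices would almost certainly reduce usage too.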

    The numbers don’t lie; users, at best, find it moderately interesting and fun to play around with for a while. Barely anyone wants this, and absolutely nobody needs it. Not one single genAI product has created a meaningful use-case that would justify the staggering cost of building and running a transformer-based model. The entire industry is just a party trick that’s massively overstayed its welcome.



  • This game is literally perfect.

    I don’t say that lightly. And I’m not saying it’s the greatest game ever made or anything like that. What I’m saying is that everything it’s trying to do, it does perfectly.

    The writing is incredible. The voice performances absolutely nail it, every line read feeling like a mic drop. The art is gorgeous. The music is subtle and evocative. The design of the branching narrative is brilliant.

    There’s not a single thing I can find to criticise. Slay the Princess is an absolute gem and you owe it to yourself to try it.


  • This argument just dismisses all criticism of the rules and implies that the “game” portion of the role-playing game is irrelevant.

    If you truly think that, then I contend that you didn’t understand their argument.

    They are dismissing one specific criticism of the rules: that they can be “abused”.

    Roleplaying games are a collaborative social activity. The goal should be to collectively tell an enjoyable story. Under those circumstances, no one should have any incentive to abuse the rules or their fellow players.

    In other words, criticising the rules because they can be abused is like criticising the design of a hammer because it can potentially be used as a weapon. There is basically no way to design a functional, effective hammer that does not open up the possibility that a bad actor could use it as a weapon. That does not constitute a flaw in the design of the hammer, and trying to redesign the hammer to prevent such an abuse will result in a very bad hammer.

    There are bad rules and good rules, but good rules are good because they facilitate enjoyable play effectively. In other words, good rules should help the GM and the players do the things that are fun. The rules do not exist to create a perfectly balanced showdown between equally matched opponents, and they cannot ever exist to do that in a context where you have a GM/DM, because the overwhelming power afforded to someone with near total narrative authority makes it impossible to ever balance that dynamic. Rather, the rules exist to a) introduce an element of chaos to the narrative, and b) guide the game towards outcomes that tend to reflect the individual capabilities and circumstances of the characters involved.

    And within that context there are plenty of examples of good and bad rules design. You can absolutely find, make, or customize a better hammer. But if your criticism comes down to “You could hurt someone with this if you wanted to” then you have absolutely missed the point.

    There is no set of game rules that will ever prevent a toxic table from being toxic. Despite OP’s objections, the only solution to shitty people in your gaming group is to either remove the shitty people, or remove yourself. I get how much that sucks, but it really is the only solution.