• 1 Post
  • 834 Comments
Joined 1 year ago
Cake day: March 22nd, 2024


  • If anything I think this is pretty solid evidence that they aren’t actually using it. There was enough of a gap that the nuking of that PR was added as an edit to the original post, and if it had actually been used I can’t imagine we wouldn’t have seen another flurry of screenshots of bad output.

    I think it also suggests that the engineers at x.ai are treating the whole thing with a level of contempt that I’m having a hard time interpreting. On one hand, the public GitHub repo with what is allegedly grok’s actual prompt (at least at time of publishing) is probably a joke in terms of actual transparency and accountability. On the other hand, the fact that the original change that prompted all this could have gone through in the first place feels almost like either a cry for help or a stone-cold denial of how bad things are.


  • He sure fucking did and it’s great.

    These people are antithetical to what’s good in the world, and their power deprives us of happiness, the ability to thrive, and honestly any true innovation. The Business Idiot thrives on alienation — on distancing themselves from the customer and the thing they consume, and in many ways from society itself. Mark Zuckerberg wants us to have fake friends, Sam Altman wants us to have fake colleagues, and an increasingly loud group of executives salivate at the idea of replacing us with a fake version of us that will make a shittier version of what we make for a customer that said executive doesn’t fucking care about.

    No notes. Perfection. Also love the commentary on how much of the current political moment is driven by the same forces - running the country like a business isn’t just dumb because governments aren’t businesses. It’s dumb because the entire business ethos is cooked to begin with. Like, I cannot find a clearer description for the prevalence of dumbass fascism than the political ascendancy of the Business Idiot.


  • Heartwarming: the worst person you know just outed themselves as a fucking moron

    Even the people who are disagreeing are still kinda sneerable though. Like this guy:

    Even in the worst case, DOGE firing too many people is not a particularly serious danger. Aside from Skynet, you should be worried about people using AI to help engineer deadly viruses or nuclear weapons, not firing government employees.

    That’s still assuming that the AI is a valuable tool for the purpose of genetic engineering or nuclear weapons manufacturing or whatever! Like, the hard part of building a nuke is very much in acquiring the materials, engineering everything to go off at the right time, and actually building it without killing yourself. Very little of that is meaningfully assisted by LLMs even if they did work as advertised. And there are so many people in that very thread alone going into detail on how biological engineering is incredibly hard in ways that similarly aren’t bottlenecked by the kinds of things current AI architectures can do. The degree to which they comedically miss the point of the folks who keep trying to explain reality is off the charts.


  • I would be more inclined to agree if there were an actual better alternative waiting to fill the gap. Instead we’re probably going to see US soft power replaced by EU, Russian, and particularly Chinese soft power. I’m not sufficiently propagandized to say that’s strictly worse than being under US soft power, especially as practiced by the kinds of people who support EA. But it also isn’t really an improvement in terms of enabling autonomous development.


  • Yeah. I don’t think you need the full ideological framework and all its baggage to get to “medical interventions and direct cash transfers are consistently shown to have strong positive impacts relative to the resources invested.” That framework prevents you from adding on “they also avoid some of the negative impact that foreign aid can have on domestic institution-building processes,” which is a really important consideration. Of course, that assumes the goal is to mitigate and remediate the damage done by colonialism and imperialism rather than perpetuating the same structures in a way that lets the imperialists at the top feel good about themselves. And for a lot of the donor class that EA orgs are chasing I don’t think that’s actually the case.


  • I also think that some of the long-termism criticisms are not so easily severable from the questions he does address about epistemology and listening to the local people receiving aid. The long-termist nutjobs aren’t an aberration of EA-type utilitarianism. They are its logical conclusion. Even if this chapter ends with common sense prevailing over sci-fi nonsense, it’s worth noting that this kind of absurdity can’t arise if you define effectiveness as listening to people and helping them get what they need, rather than creating your own metrics that may or may not correlate outside of the most extreme cases.