scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 3 · 2 hours ago

promptfarmers, for the "researchers" trying to grow bigger and bigger models.
/r/singularity redditors who have gotten fed up with Sam Altman's bs often use "Scam Altman".
I've seen some name-calling using drug analogies: model pushers, prompt pushers, "just one more training run bro" (for the researchers); "just one more prompt" (for the users), etc.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 3 · 2 hours ago

I could imagine a lesswronger being delusional/optimistic enough to assume their lesswrong jargon concepts have more academic citations than a handful of arXiv preprints… but in this case they just admitted otherwise: their only sources are lesswrong and arXiv. Also, if they know Wikipedia's policies, they should know the No Original Research rule would block their idea, even overlooking the single-source and conflict-of-interest problems.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 5 · 1 day ago

Yeah, that article was one of the things I had in mind. It's the peak of centrist liberalism, where EAs and lesswrongers can think these people are literally going to cause mankind's extinction (or worse) and they can't even bring themselves to be rude to them. OTOH, if they actually acted coherently on their nominal doomer beliefs, they would be carrying out terrorism on a far greater scale than the Zizians, so maybe it is for the best that they are ideologically incapable of direct action.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 11 · 1 day ago

Y'all ready for another round of LessWrong edit wars on Wikipedia? This time with a wider list of topics!
On the very slightly merciful upside… the lesswronger recommends "If you want to work on a new page, discuss with the community first by going to the talk page of a related topic or meta-page." and "In general, you shouldn't post before you understand Wikipedia rules, norms, and guidelines.", so they are ahead of the previous calls made on Lesswrong for Wikipedia edit-wars.
On the downside, they've got a laundry list of lesswrong jargon they want Wikipedia articles for. Even one of the lesswrongers responding to them points out these terms are a bit on the under-defined side:
Speaking as a self-identified agent foundations researcher, I don't think agent foundations can be said to exist yet. It's more of an aspiration than a field. If someone wrote a wikipedia page for it, it would just be that person's opinion on what agent foundations should look like.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 5 · 2 days ago

They're cosplaying as activists, have no ideas about how to move the public image needle other than weird movie ideas and hope, and are literally marinated in SV technolibertarianism, which sees government regulation as Evil.
It is kind of sad. They are missing the ideological pieces that would let them carry out activism effectually, so instead they've gotten used as a free source of crit-hype in the LLM bubble. …except not that sad, because they would ignore real AI dangers in favor of their sci-fi scenarios, so I don't feel too bad for them.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 4 · 2 days ago

And why would a rich guy be against a "we are trying to convince rich guys to spend their money differently" organization?
Well, when the organization is just passively trying to convince the rich guys, the rich guys can use it to launder reputation or boost ideologies they are in favor of. When the organization actually tries to get regulations passed, even ineffectually, well, that is a threat to the likes of Thiel.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 6 · 2 days ago

The quirky eschatologist that you're looking for is René Girard, whom Thiel personally met at some point. For more details, check out the Behind the Bastards episode on him.
Thanks for the references. The quirky theology was so outside the range of even the weirder Fundamentalist Christian stuff that I didn't recognize it as such. (And I didn't trust the EA summary, because they try so hard to charitably make sense of Thiel.)
In this context, Thiel fears the spectre of AGI because it can't be influenced by his normal approach to power, which is to hide anything that can be hidden and outspend everybody else talking in the open.
Except the EAs are, on net, opposed to the creation of AGI (albeit they are ineffectual in their opposition). So going after the EAs doesn't make sense if Thiel is genuinely opposed to inventing AGI faster. So I still think Thiel is just going after the EAs because he's libertarian and EA has shifted in the direction of trying to get more government regulation. (As opposed to a coherent theological goal beyond libertarianism.) I'll check out the BtB podcast and see if it changes my mind as to his exact flavor of insanity.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 14 · 3 days ago

So… apparently Peter Thiel has taken to co-opting fundamentalist Christian terminology to go after Effective Altruism? At least it seems that way from this EA post (warning: I took psychic damage just skimming the lunacy). As far as I can tell, he's merely co-opting the terminology: Thiel's blather doesn't have any connection to any variant of Christian eschatology (whether mainstream or fundamentalist or even obscure wacky fundamentalist), but of course the majority of the EAs don't recognize that, or the fact that he is probably targeting them for their (kind of weak, to be honest) attempts at getting AI regulated at all, and instead they charitably try to steelman him and figure out if he has a legitimate point. …I wish they could put a tenth of this effort into understanding leftist thought.
Some of the comments are… okay actually, at least by EA standards, but there are still plenty of people willing to defend Thiel.
One comment notes some confusion:
I'm still confused about the overall shape of what Thiel believes.
He's concerned about the antichrist opposing Jesus during Armageddon. But afaik standard theology says that Jesus will win for certain. And Revelation says the world will be in disarray and moral decay when the Second Coming happens.
If chaos is inevitable and necessary for Jesus' return, why is expanding the pre-apocalyptic era with growth/prosperity so important to him?
Yeah, it's because he is simply borrowing Christian fundamentalist eschatological terminology… possibly to try to turn the Christofascists against EA?
I'm dubious Thiel is actually an ally to anyone worried about permanent dictatorship. He has connections to openly anti-democratic neoreactionaries like Curtis Yarvin, he quotes Nazi lawyer and democracy critic Carl Schmitt on how moments of greatness in politics are when you see your enemy as an enemy, and one of the most famous things he ever said is "I no longer believe that freedom and democracy are compatible". Rather, I think he is using "totalitarian" to refer to any situation where the government is less economically libertarian than he would like, or "woke" ideas are popular amongst elite tastemakers, even if the polity this is all occurring in is clearly a liberal democracy, not a totalitarian state.
Note this commenter still uses non-confrontational language ("I'm dubious") even when directly calling Thiel out.
The top comment, though, is just like the main post, extending charity to complete technofascist insanity. (Warning for psychic damage.)
Nice post! I am a pretty close follower of the Thiel Cinematic Universe (ie his various interviews, essays, etc)
I think Thiel is also personally quite motivated (understandably) by wanting to avoid death. This obviously relates to a kind of accelerationist take on AI that sets him against EA, but again, there's a deeper philosophical difference here. Classic Yudkowsky essays (and a memorable Bostrom short story, video adaptation here) share this strident anti-death, pro-medical-progress attitude (cryonics, etc), as do some philanthropists like Vitalik Buterin. But these days, you don't hear so much about "FDA delenda est" or anti-aging research from effective altruism. Perhaps there are valid reasons for this (low tractability, perhaps). But some of the arguments given by EAs against aging's importance are a little weak, IMO (more on this later) — in Thiel's view, maybe suspiciously weak. This is a weird thing to say, but I think to Thiel, EA looks like a fundamentally statist / fascist ideology, insofar as it is seeking to place the state in a position of central importance, with human individuality / agency / consciousness pushed aside.
As for my personal take on Thiel's views — I'm often disappointed at the sloppiness (blunt-ness? or low-decoupling-ness?) of his criticisms, which attack the EA for having a problematic "vibe" and political alignment, but without digging into any specific technical points of disagreement. But I do think some of his higher-level, vibe-based critiques have a point.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 17th August 2025 • English · 5 · 3 days ago

This was discussed last week, but I looked at the comments and noticed someone getting slammed for… checks notes… noting that Eliezer wasn't clear on which research paper he was actually responding to (multiple other comments are kind of confused, because they assume he means one paper, then other comments correct them that he obviously meant another). The commenter of course edits to back-pedal.
scruiser@awful.systems to SneerClub@awful.systems • Embryo selection, thinking about risk the wrong way: What we talk about when we talk about risk. • English · 8 · 3 days ago

One of the comments really annoyed me:
The "genetics is meaningless at the individual level" argument has always struck me as a bit of an ivory-tower oversimplification.
No, it's pushing back at eugenicists with completely fallacious ideas. See for example Genesmith's posts on Lesswrong. They are like concentrated genetics Dunning-Kruger, and the lesswrongers eat them up.
No one is promising perfect prediction.
Yes they are: see Kelsey Piper's comments about superbabies, or Eliezer's worldbuilding about dath Ilan's eugenics, or Genesmith's totally wacko ideas.
scruiser@awful.systems to SneerClub@awful.systems • Embryo selection, thinking about risk the wrong way: What we talk about when we talk about risk. • English · 7 · 3 days ago

The numbers that get thrown about don't mean what the people throwing them around think them to mean
That describes a common rationalist failure mode. They reach for a false sense of quantification by throwing lots of numbers at things, but the numbers are already approximations of much more nuanced, complex, and/or continuous things, so by overemphasizing the numbers they actually get further from a proper understanding. See for example… the fixation on IQ; slapping probabilities everywhere; extrapolating trend lines (METR task length); and prediction markets.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 4th August 2025 • English · 4 · 12 days ago

that couple
I hate that I know what is being talked about the instant I see it.
Also, they've appeared in 3 separate top posts in the Stubsack this week, so yeah, another PR blitz. I find it kind of funny/stupid that the news media can't even be bothered to find a local eugenicist couple to talk to. I guess having a "story" served up to you is enticing enough to utterly fail to provide pushback or question whether the story is even relevant to your audience in the first place.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 4th August 2025 • English · 6 · 13 days ago

They are going with the 50% success rate because the "time horizons" for something remotely reasonable like 99% or even just 95% are still so tiny they can't extrapolate a trend out of it, and it tears a massive hole in their whole "AGI agents soon" scenarios.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 4th August 2025 • English · 6 · 13 days ago

I would give it credit for being better than the absolutely worthless approach of "scoring well on a bunch of multiple choice question tests". And it is possibly vaguely relevant for the pipe-dream end goal of outright replacing programmers. But overall, yeah, it is really arbitrary.

Also, given how programming is perceived as one of the more in-demand "potential" killer apps for LLMs, and how it is also one of the applications it is relatively easy to churn out and verify synthetic training data for (write really precise, detailed test cases, and then you can automatically verify attempted solutions and synthetic data), even if LLMs are genuinely improving at programming it likely doesn't indicate general improvement in capabilities.
Eliezer's response is especially stupid given that he has cited his fictional worldbuilding project like it is evidence (we have examples saved over on the reddit sneerclub).
Saw this posted to the Reddit Sneerclub; this essay has some excellent zingers and a good overall understanding of rationalists. A few highlights…
Rationalism is the notion that the universe is a collection of true facts, but since the human brain is an instrument for detecting lions in the undergrowth, almost everyone is helplessly confused about the world, and if you want to believe as many true things and disbelieve as many false things as possible—and of course you do—you must use various special techniques to discipline your brain into functioning more like a computer. (In practice, these techniques mostly consist of calling your prejudices "Bayesian priors," but that's not important right now.)
We're all very familiar with this phenomenon, but this author has a pithy way of summarizing it.
The story is not a case study in how rationality will help you understand the world, it's a case study in how rationality will give you power over other people. It might have been overtly signposted as fiction, with all the necessary content warnings in place. That doesn't mean it's not believed. Despite being genuinely horrible, this story does have one important use: it makes sense out of the rationalist fixation on the danger of a superhuman AI. According to HPMOR, raw intelligence gives you direct power over other people; a recursively self-improving artificial general intelligence is just our name for the theoretical point where infinite intelligence transforms into infinite power.
Yep, the author nails the warped view Rationalists have about intelligence.
We're supposedly dealing with a group of idiosyncratic weirdos, all of them trying to independently reconstruct the entirety of human knowledge from scratch. Their politics run all the way from the furthest fringes of the far right to the furthest fringes of the liberal centre.
That is a concise summary of their warped Overton Window, yeah.
scruiser@awful.systems to SneerClub@awful.systems • look AI doom is all very well, but we have extremely important message board drama to be getting on with • English · 4 · 20 days ago

And some people are crediting Eliezer as if he predicted this devastating damage and not something completely different. Or they compare LLMs spewing shit to his scenarios of agentically and intelligently dangerous and manipulative AGIs.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 27th July 2025 • English · 10 · 20 days ago

Should we give up on all altruist causes because the AGI God is nearly here? The answer may surprise you!
tl;dr: actually, you shouldn't give up, because the AGI God might not be quite omnipotent and thus would still benefit from your help, and maybe there will be multiple Gods, some used for Good and some for Evil, so your efforts are still needed. Shrimp are getting their eyeballs cut off right now!
I know like half the facts I would need to estimate it… if you know the GPU VRAM required for the video generation, and how long it takes, then assuming no latency, you could get a ballpark number by looking at Nvidia GPU specs on power usage. For instance, if a short clip of video generation needs 90 GB VRAM, then maybe they are using an RTX 6000 Pro… https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ , take the amount of time it takes in off-hours, which shouldn't have a queue time… and you can guesstimate a number of watt-hours. Like if it takes 20 minutes to generate, then at 300-600 watts of power usage that would be 100-200 watt-hours. I can find an estimate of $0.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/ ), so it would only be costing roughly $0.03 to $0.07.
IDK how much GPU time you actually need though, I'm just wildly guessing. Like if they use many server-grade GPUs in parallel, that would multiply the cost up even if it only takes them minutes per video generation.
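For what it's worth, here is a minimal sketch of that back-of-envelope math in Python. Every input (wattage, generation time, electricity price, GPU count) is an assumption carried over from the guesses above, not a measured value:

```python
# Back-of-envelope electricity cost for one AI video generation.
# All defaults are guesses from the comment above, not measurements.

def generation_cost_usd(watts=600.0, minutes=20.0, price_per_kwh=0.33, num_gpus=1):
    """Electricity cost in USD for one generation run."""
    kwh = (watts * num_gpus) * (minutes / 60.0) / 1000.0  # energy used in kWh
    return kwh * price_per_kwh

if __name__ == "__main__":
    for w in (300, 600):
        print(f"1 GPU at {w} W for 20 min: ${generation_cost_usd(watts=w):.3f}")
    # hypothetical: 8 server-grade GPUs in parallel at 600 W each
    print(f"8 GPUs at 600 W for 20 min: ${generation_cost_usd(watts=600, num_gpus=8):.2f}")
```

Bumping num_gpus up is the "many server-grade GPUs in parallel" case from the previous paragraph; it only scales the electricity figure, the rest stays guesswork.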