It's not always easy to distinguish between existentialism and a bad mood.
- 17 Posts
- 476 Comments
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025 • English · 5 · 2 days ago
Nice. Here's the bluesky account as well.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025 • English · 7 · 2 days ago
Some quality wordsmithing found in the wild:
transcript
@MosesSternstein (quote-tweeted): AI-Capex is the everything cycle, now.
Just under 50% of GDP growth is attributable to AI Capex
@bigblackjacobin: Almost certainly the greatest misallocation of capital you or I will ever see. There's no justification for this however you cut it but the beatings will continue until a stillborn god is born.
Architeuthis@awful.systems to SneerClub@awful.systems • New Scientist reviews the Yudkowsky/Soares book. They don't recommend it. • English · 10 · 2 days ago
Remember, when your code doesn't compile, it might mean you made a mistake in coding, or your code is about to become self-aware.
Good analogy actually.
Architeuthis@awful.systems to SneerClub@awful.systems • New Scientist reviews the Yudkowsky/Soares book. They don't recommend it. • English · 8 · 2 days ago
The arguments made against the book in the review are that it doesn't make the case for LLMs being capable of independent agency, that it reduces all material concerns of an AI takeover to broad claims of ASI being indistinguishable from magic, and that its proposed solutions are dumb and unenforceable (again with the global GPU prohibition and the unilateral bombing of rogue datacenters).
That they note towards the end that the x-risk framing is a cognitive short-circuit, one that causes the faithful to ignore more pressing concerns like the impending climate catastrophe in favor of a mostly fictitious problem like AI doom, isn't really part of their core thesis against the book.
Architeuthis@awful.systems to SneerClub@awful.systems • New Scientist reviews the Yudkowsky/Soares book. They don't recommend it. • English · 9 · 2 days ago
They also seem to broadly agree with the "hey, humans are pretty shit at thinking too, you know" line of LLM apologetics.
"LLMs and humans are both sentence-producing machines, but they were shaped by different processes to do different work," say the pair; again, I'm in full agreement.
But judging from the rest of the review I can see how you kind of have to be at least somewhat rationalist-adjacent to have a chance of actually reading the thing to the end.
Architeuthis@awful.systems to SneerClub@awful.systems • New Scientist reviews the Yudkowsky/Soares book. They don't recommend it. • English · 12 · 2 days ago
The pair also suggest that signs of AI plateauing, as seems to be the case with OpenAI's latest GPT-5 model, could actually be the result of a clandestine superintelligent AI sabotaging its competitors.
copium-intubation.tiff
Also, this seems like the natural progression of that time Yud embarrassed himself by cautioning actual ML researchers to be wary of "sudden drops in loss function during training", which was just an insanely uninformed thing to say out loud.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025 • English · 1 · 2 days ago
the only people who like prediction markets […]
Apparently Donald Trump Jr. has found his way onto the payroll of a couple of the bigger prediction markets, so they seem to be doing their darndest to change that.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025 • English · 2 · 2 days ago
assuming prediction markets are magic
Bet it's more like assuming it will incentivize people with magical predicting genes to reproduce more, so we can get a kwisatz haderach to fight AI down the line.
It's always dumber than expected.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 14th September 2025 • English · 14 · 3 days ago
Apparently the hacker who publicized a copy of the no-fly list was leaked an article containing Yarvin's home address, which she promptly posted on bluesky. Won't link because I don't think we've had the doxxing discussion, but it's easily findable now.
I'm mostly posting this because the article featured this photo:
Architeuthis@awful.systems to SneerClub@awful.systems • Oliver Habryka truly understands the soul of SneerClub • English · 1 · 3 days ago
I figure eventually some proprietary work would make it into the wild via autocomplete. Copilot used to be cool with inserting other programmers' names and emails in author notes, for instance, though they seem to have started filtering that out in the meantime.
Copilot licenses let you specifically opt out of your prompts and your code being used to train new models, so it would be a big deal.
Architeuthis@awful.systems to SneerClub@awful.systems • Oliver Habryka truly understands the soul of SneerClub • English · 1 · 4 days ago
We should be so lucky; the ensuing barrage of lawsuits about illegally cribbing company IP would probably make the book authors' class-action damages pale in comparison.
Architeuthis@awful.systems to TechTakes@awful.systems • Google quietly vanishes its net zero carbon pledge • English · 101 · 5 days ago
This is too corny and overdramatic for my tastes. It reads a bit like satire, complete with piling on the religious undertones there at the end.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 7th September 2025 • English · 4 · 7 days ago
Getting love-bombed at that rationalist con he went to recently probably didn't help matters.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 7th September 2025 • English · 16 · 7 days ago
The common clay of the new west:
transcript
ChatGPT has become worthless
[Business & Professional]
I'm a paid member and asked it to help me research a topic and write a guide and it said it needed days to complete it. That's a first. Usually it could do this task on the spot.
It missed the first deadline and missed 5 more. 3 weeks went by and it couldn't get the task done. Went to Claude and it did it in 10 minutes. No idea what is going on with ChatGpt but I cancelled the pay plan.
Anyone else having this kind of issue?
Architeuthis@awful.systems to SneerClub@awful.systems • a detailed examination of the leaked Scott Alexander emails, for your reference • English · 6 · 7 days ago
if one person came out and spilled the beans, it'd suggest that there might be more people who didn't
I mean, after his full-throated defense of Lynn's IQ map (featuring disgraced nazi college dropout Cremieux/TP0 as a subject matter expert), what other beans might be interesting enough to spill? Did he lie about becoming a kidney donor?
I think the emails are important because a) they make a case that, for all his performative high-mindedness and deference to science and whinging about polygenic selection, he came to his current views through the same white supremacist/great replacement milieu as every other pretentious gutter racist out there, and b) he is so consistently disingenuous that the previous statement might not even matter much… he might honestly believe that priming impressionable well-off techies towards blood-and-soil fascism precursors was worth it if we end up allowing unchecked human genetic experimentation to come up with 260-IQ babies that might have a fighting chance against shAItan.
I guess it could come out that despite his habit of including conflict of interest disclosures, his public views may be way more for sale than is generally perceived.
Architeuthis@awful.systems to SneerClub@awful.systems • a detailed examination of the leaked Scott Alexander emails, for your reference • English · 11 · 8 days ago
I wonder if this is just a really clumsy attempt to invent stretching the Overton window from first principles, or if he really is so terminally rationalist that he thinks a political ideology is a sliding scale of fungible points and being 23.17% ancap can be a meaningful statement.
That the exchange of ideas between friends is supposed to work a bit like the principle of communicating vessels is a pretty weird assumption, too. Also, if he thinks it's ok to admit that he straight up tries to manipulate friends in this way, imagine how he approaches non-friends.
Between this and him casually admitting that he keeps "culture war" topics alive on the Substack because they get a ton of clicks, it's a safe bet that he can't be thinking too highly of his readership, although I suspect there is an esoteric/exoteric teachings divide that is mostly non-obvious from the online perspective.
Architeuthis@awful.systems to SneerClub@awful.systems • a detailed examination of the leaked Scott Alexander emails, for your reference • English · 9 · 8 days ago
In his early blog posts, Scott Alexander talked about how he was not leaping through higher education in a single bound
He starts his recent article on AI psychosis by mixing up psychosis with schizophrenia (he calls psychosis a biological disease), so that tracks.
Other than that, I think it's ok in principle to be ideologically opposed to something even if you and yours happened to benefit from it. Of course, it immediately becomes iffy if it's a mechanism for social mobility that you don't plan on replacing, since in that case you are basically advocating for pulling up the ladder behind you.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 7th September 2025 • English · 15 · 9 days ago
Shamelessly reproduced from the other place:
A quick summary of his last three posts:
"Here's a thought experiment I came up with to try to justify the murder of tens of thousands of children."
"Lots of people got mad at me for my last post; have you considered that being mad at me makes me the victim and you a Nazi?"
"I'm actually winning so much right now: it's very normal that people keep worriedly speculating that I've suffered some sort of mental breakdown."
All the stuff about ASI is basically theology, or trying to do armchair psychology on Yog-Sothoth. If autonomous ASI ever happens, it's kind of definitionally impossible to know what it'll do; it's beyond us.
The "simulating synapses is hard" stuff I can take or leave. To argue by analogy, it's not like getting an artificial feather exactly right was ever a bottleneck to developing air travel once we got the basics of aerodynamics down.