Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful youāll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cutānāpaste it into its own post ā thereās no quota for posting and the bar really isnāt that high.
The post Xitter web has spawned soo many āesotericā right wing freaks, but thereās no appropriate sneer-space for them. Iām talking redscare-ish, reality challenged āculture criticsā who write about everything but understand nothing. Iām talking about reply-guys who make the same 6 tweets about the same 3 subjects. Theyāre inescapable at this point, yet I donāt see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldnāt be surgeons because they didnāt believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I canāt escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I had to attend a presentation from one of these guys, trying to tell a room full of journalists that LLMs could replace us & we needed to adapt by using it and I couldnāt stop thinking that an LLM could never be a trans journalist, but it could probably replace the guy giving the presentation.
Copy/pasting a post I made in the DSP driver subreddit that I might expand over at morewrite because itās a case study in how machine learning algorithms can create massive problems even when they actually work pretty well.
Itās a machine learning system, not an actual human boss. The system is set up to try and find the breaking point, where if you finish your route on time it assumes you can handle a little bit more and if you donāt it backs off.
The real problem is that everything else in the organization is set up so that finishing your routes on time is a minimum standard while the algorithm that creates the routes is designed to make doing so just barely possible. Because itās not fully individualized, this means that doing things like skipping breaks and waiving your lunch (which the system doesnāt appear to recognize as options) effectively push the edge of what the system thinks is possible out a full extra hour, and then the rest of the organization (including the decision-makers about who gets to keep their job) turn that edge into the standard. And thatās how you end up where we are now, where actually taking your legally-protected breaks is at best a luxury for top performers or people who get an easy route for the day, rather than a fundamental part of keeping everyone doing the job sane and healthy.
Part of that organizational problem is also in the DSP setup itself, since it allows Amazon to avoid taking responsibility or accountability for those decisions. All they have to do is make sure their instructions to the DSP donāt explicitly call for anything illegal and they get to deflect all criticism (or LNI inquiries) away from themselves and towards the individual DSP, and if anyone becomes too much of a problem they can pretend to address it by cutting that DSP.
If anyone else is wondering: DSP here I think stands for āDelivery Service Partnerā, and driver for someone driving a vehicle. (I assumed the context āDigital Signal Processingā and driver as in ādevice driverā at first and was quite confused :P)
On-topic: I think regulation needs to come down hard on the delivery industry in general, be it parcels or food or whatever, working conditions there have been terrible for a long time.
Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: āsend Gemini a prompt in white-on-white/0px textā):
Iāve got time, so Iāll fire off a sidenote:
In the immediate term, this bubbleās gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots canāt really resist being jailbroken.
In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminalsā jobs a lot easier for at least a few years.
Daniel Kokoās trying to figure out how to stop the AGI apocalypse.
How might this work? Install TTRPG afficionados at the chip fabs and tell them to roll a saving throw.
Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that donāt roll a 1 are destroyed on the spot.
And if that doesnāt work? Koko ultimately ends up pretty much where Big Yud did: bombing the fuck out of the fabs and the data centers.
āFor example, if a country turns out to have a hidden datacenter somewhere, the datacenter gets hit by ballistic missiles and the country gets heavy sanctions and demands to allow inspectors to pore over other suspicious locations, which if refused will lead to more missile strikes.ā
Suppose further that enough powerful people are concerned about the poverty in Ireland, anti-catholic discrimination, food insecurity, and/or loss of rental revenue, that thereās significant political will to Do Something. Should we ban starvation? Should we decolonise? Should we export their produce harder to finally starve Ireland? Should we sign some kind of treaty? Should we have a national megaproject to replace the population with the British? Many of these options are seriously considered.
Enter the baseline option: Let the Irish sell their babies as a delicacy.
Funnily enough, there are a lot of data centres in Ireland. Maybe there will be a missile strike and Irelandās population will shrink back to 19th century numbers
The sanctions and inspections idea is so silly esp after what the USA/Trump did to Iran. (I mean the deciding that Iran wasnt keeping their end of the bargain and still making Uranium. So after the end Iran started to make more Uranium for real. Gg everyone).
Also ācull the gpusā: [angry gamer noises]
Iām not gonna advocate for it to happen but Iām pretty sure the world would be overall in a much healthier place geopolitically if someone actually started yeeting missiles into major American cities and landmarks. Itās too easy to not really understand the human impact of even a successful precision strike when the last times you were meaningfully on the other end of the airstrike were ~20 and ~80 years ago, respectively.
Similarly, at the chip production facilities, a committee of representatives stands at the end of the production line basically and rolls a ten-sided die for each chip; chips that donāt roll a 1 are destroyed on the spot.
Ah, yes, artificially kneecap chip fabsā yields, Iām sure that will go over well with the capitalist overlords who own them
Someone didnāt get the memo about nVidiaās stock price, and how is Jensen supposed to sign more boobs if suddenly his customers all get missileād?
Ian Lance Taylor (of GOLD, Go, and other tech fame) had a take on chatbots being AGI that I liked to see from an influential person of computing. https://www.airs.com/blog/archives/673
The summary is that chatbots are not AGI, using the current AI wave as the usher to AGI is not it, and all around dislikes in a very polite way that chatbot LLMs are seen as AI.
Apologies if this was posted when published.
Nikhilās guest post at Zitron just went up - https://www.wheresyoured.at/the-remarkable-incompetence-at-the-heart-of-tech/
EDIT: the intro was strong enough I threw in $7. Second half is just as good.
You thought CrĆ©mieux (Jordan Lasker) was bad. You were wrong. Heās even worse. https://www.motherjones.com/politics/2025/07/cremieux-jordan-lasker-mamdani-nyt-nazi-faliceer-reddit/
The whole internet loves ĆspĆØrature Trouvement, the grumpy old racist! 5 seconds later We regret to inform you the racist is not that old and actually has a pretty normal name. Also donāt look up his runescape username.
Fucking hell. Not the most important part of the story, but his elaborate lies about being Jewish are very very weird. Kind of like white Americans pretending that theyāre Cherokee I guess?
Itās not that weird when you understand the sharks he swims with. Race pseudoscientists routinely peddle the idea that Ashkenazi Jews have higher IQs than any other ethnic or racial group. Scoot Alexander and Big Yud have made this claim numerous times. Lasker pretending to be a Jew makes more sense once you realize this.
Iām aware of the idea, but itās still very weird for someone to pretend to be Jewish and also be a Nazi!
also here https://awful.systems/post/4995759
The long and short of it is motherjones discovered TPOs openly nazi alt.
https://www.profgalloway.com/ice-age/ Good post until I hit the below:
Instead of militarizing immigration enforcement, we should be investing against the real challenge: AI. The World Economic Forum says 9 million jobs globally may be displaced in the next five years. Anthropicās CEO warns AI could eliminate half of all entry-level white-collar jobs. Imagine the population of Greece storming the shores of America and taking jobs (even jobs Americans actually want), as theyāre willing to work 24/7 for free. Youāve already met them. Their names are GPT, Claude, and Gemini.
Having a hard time imagining 300 but AI myself, Scott. Could we like, not shoehorn AI into every other discussion?
Iirc Galloway was a pro cryptocurrency guy. So this tracks
E: imagine if the 3d printer people had the hype machine behind them like this. āChina better watch out, soon all manufacturing of products will be done by people at homeā. Meanwhile China: [Laughs in 大č·čæ].
I think that 3D printing never picked up, because itās one of those things that empower the people, i.e. to repair stuff or build their own things, so the number of opportunities to grift seems to be smaller (although Iām probably underestimating it).
Most of the recently hyped technologies had goals that were exact opposites of empowering the masses.
Tangential: Iāve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didnāt pollute the spaces they operate in. The example Iāve heard of is artist alleys at conventions- a 3D printer person will set up a stall and sell plastic models of dragons or pokemon or whatever. Everything is terrible!
Tangential: Iāve heard that there are 3D printer people that print junk and sell them. This would not be much of a problem if they didnāt pollute the spaces they operate in.
So, essentially AI slop, but with more microplastics. Given the 3D printer bros are much more limited in their ability to pollute their spaces (they have to pay for filament/resin, theyāre physically limited in where they can pollute, and they produce slop much slower than an LLM), theyāre hopefully easier to deal with.
I think that is it tbh. There was no big centralized profit, so no need to hype it up.
I liked his stuff on wework back in the day. Funny how he could see one tech grift really clearly and fall for another. Then again, WeWork is in the black these days. Anyway I think Galloway pivoted (apologies) to Mens Rights lately; and he also gave some money to UCLA Extension (ie not the main campus) which is a bit hard to interpret.
yeah lol ez just 3dprint polypropylene polymerization reactor. what the fuck is hastelloy?
Yeah, but we never got that massive hype cycle for 3d printers. Which in a way is a bit odd, as it could have happend. Nanomachine! Star trek replicators! (Getting a bit offtopic from Galloway being a cryptobro).
I can imagine it clear⦠a chart showing minimum feature size decreasing over time (using cherry picked data points) with a dotted line projection of when 3d printers would get down nanotech scale. 3d printer related companies would warn of dangers of future nanotech and ask for legislation regulating it (with the language of the legislation completely failing to effect current 3d printing technology). Everyone would be buying 3d printers at home, and lots of shitty startups would be selling crappy 3d printed junk.
Hereās an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Donāt Get Why Normies Donāt Freak Out:
For quite a while, Iāve been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being⦠at best lukewarm against the idea of humanity going extinct.
(Dude then goes on to try to game-theorize this, I didnāt bother to poke holes in it)
The thing is, genocides have happened, and people around the world are perfectly happy to advocate for it in diverse situations. Probability wise, the risk of genocide somewhere is very close to 1, while the risk of āomnicideā is much closer to zero. If you want to advocate for eliminating something, working to eliminating the risk of genocide is much more rational than working to eliminate the risk of everyone dying.
At least on commenter gets it:
Most people distinguish between intentional acts and shit that happens.
(source)
Edit never read the comments (again). The commenter referenced above obviously didnāt feel like a pithy one liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than traffic fatalities. Nice āsaveā, dipshit.
Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?
One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?
(I donāt know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)
You know, I hadnāt actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascalās wager. Only instead of Heaven being infinitely good if you convert thereās some infinitely bad thing that happens if you donāt do whatever Eliezer asks of you.
reverse dust specks: how many LWers would we need to permanently deprive of access to internet to see rationalist discourse dying out?
Whatās your P(that question has been asked at a US three letter agency)
it either was, or wasnāt, so 50%
Recently, Iāve realized that there is a decent explanation for why so many people believe that - if we model them as operating under a strict zero-sum game model of the world⦠āeveryone losesā is basically an incoherent statement - as a best approximation it would either denote no change and therefore be morally neutral, or an equal outcome, and would therefore be preferable to some.
Yes, this is why people think that. This is a normal thought to think others have.
Hereās my unified theory of human psychology, based on the assumption most people believe in the Tooth Fairy and absolutely no other unstated bizarre and incorrect assumptions no siree!
Why do these guys all sound like deathnote, but stupid?
because they cribbed their ideas from deathnote, and theyāre stupid
I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the āintentional actsā and āshit happensā distinction, in a perverse Rationalist way. ^^
Thats fair, if you want to be generous, if you are not going to be Id say there are still conceptually large differences between the quote and āshit happensā. But yes, you are right. If only they had listened to Scott when he said ātalk less like robotsā
Somebody found a relevant reddit post:
Dr. Casey Fiesler āŖ@cfiesler.bsky.social⬠(who has clippy earrings in a video!) writes: " This is fascinating: reddit link
Someone āworked on a book with ChatGPTā for weeks and then sought help on Reddit when they couldnāt download the file. Redditors helped them realized ChatGPT had just been roleplaying/lying and there was no file/bookā¦"
After understanding a lot of things itās clear that it didnāt. And it fooled me for two weeks.
I have learned my lesson and now I am using it to generate one page at a time.
qu1j0t3 replies:
thatās, uh, not really the ideal takeaway from this lesson
you have to scroll through the personās comments to find it, but it does look they did author the body of the text and uploaded it as a docx into ChatGPT. so points for actually creating something unlike the AI bros
it looks like they tried to use ChatGPT to improve narration. to what degree the token smusher has decided to rewrite their work in the smooth, recycled plastic feel weāve all come to know and despise remains unknown
they did say they are trying to get it to generate illustrations for all 700 pages, and moreover appear[ed] to believe it can āwork in the backgroundā on individual chapters with no prompting. they do seem to have been educated on the folly of expecting this to work, but as blakestaceyās other reply pointed out, they appear to now be just manually prompting one page at a time. godspeed
They now deleted their post and I assume a lot of others, but they also claim they have no time to really write and just wanted a collection of stories for their kid(s). Which doesnt make sense, creating 700 pages of kids stories is a lot of work, even if you let a bot improve the flow. Unless they just stole a book of childrenās stories from somewhere. (I know these books exist, as a child from one of my brothers tricked me into reading two stories from one).
looks like thereās either downvote brigade keeping critical comments at +1 or 0, or reddit brigading countermeasures went on in defense of wittle promprfondler
New post from Matthew Hughes: People Are The Point, effectively a manifesto against gen-AI as a concept.
Better Offline was rough this morning in some places. Props to Ed for keeping his cool with the guests.
Oof, that Hollywood guest (Brian Koppelman) is a dunderhead. āThese AI layoffs actually make sense because of complexity theoryā. āYou gotta take Eliezer Yudkowsky seriously. He predicted everything perfectly.ā
I looked up his background, and it turns out heās the guy behind the TV show āBillionsā. That immediately made him make sense to me. The show attempts to lionize billionaires and is ultimately undermined not just by its offensive premise but by the worldās most block-headed and cringe-inducing dialog.
Terrible choice of guest, Ed.
I study complexity theory and Iād like to know what circuit lower bound assumption he uses to prove that the AI layoffs make sense. Seriously, it is sad that the people in the VC techbro sphere are thought to have technical competence. At the same time, they do their best to erode scientific institutions.
My hot take has always been that current Boolean-SAT/MIP solvers are probably pretty close to theoretical optimality for problems that are interesting to humans & AI no matter how āintelligentā will struggle to meaningfully improve them. Ofc I doubt that Mr. Hollywood (or Yud for that matter) has actually spent enough time with classical optimization lore to understand this. Computer go FOOM ofc.
Only way I can make the link between complexity theory and laying off people is thinking about putting people in ācan solve up to this level of problemā style complexity classes (which regulars here should realize gets iffy fast). So hope he explained it more than that.
The only complexity theory I know of is the one which tries to work out how resource-intensive certain problems are for computers, so this whole thing sounds iffy right from the get-go.
Yeah but those resource-intensive problems can be fitted into specific classes of problems (P, NP, PSPACE etc), which is what I was talking about, so we are talking about the same thing.
So under my imagined theory you can classify people as ācan solve: [ P, NP, PSPACE, ⦠]ā. Wonder what they will do with the P class. (Wait, what did Yarvin want to do with them again?)
solve this sokoban or youāre fired
Thereās really no good way to make any statements about what problems LLMs can solve in terms of complexity theory. To this day, LLMs, even the newfangled āreasoningā models, have not demonstrated that they can reliably solve computational problems in the first place. For example, LLMs cannot reliably make legal moves in chess and cannot reliably solve puzzles even when given the algorithm. LLM hypesters are in no position to make any claims about complexity theory.
Even if we have AIs that can reliably solve computational tasks (or, you know, just use computers properly), it still doesnāt change anything in terms of complexity theory, because complexity theory concerns itself with all possible algorithms, and any AI is just another algorithm in the end. If P != NP, it doesnāt matter how āintelligentā your AI is, itās not solving NP-hard problems in polynomial time. And if some particularly bold hypester wants to claim that AI can efficiently solve all problems in NP, letās just say that extraordinary claims require extraordinary evidence.
Koppelman is only saying ācomplexity theoryā because he likes dropping buzzwords that sound good and doesnāt realize that some of them have actual meanings.
I heard him say āquantumā and immediately came here looking for fresh-baked sneers
Yeah but I was trying to combine complexity theory as a loose theory misused by tech people in relation to āpeople who get firedā. (Not that I donāt appreciate your post btw, I sadly have not seen any pro-AI people be real complexity theory cranks re the capabilities. I have seen an anti be a complexity theory crank, but that is only when I reread my own posts ;) ).
Yeah, that guy was a real piece of work, and if I had actually bothered to watch The Bear before, I would stop doing so in favor of sending ChatGPT a video of me yelling in my kitchen and ask it if what is depicted was the plot of the latest episode.
I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).
We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a level of intrinsic motivation/altruism/idealism/vanity), it still used to be viable to provide information online without charging users for every bit of it. Same appears to be true for the tools to discover said information (search engines).
Compare this to the estimated true cost of running AI chatbots, which (according to the numbers Iām familiar with) may be tens or even hundreds of dollars a month for each user. For this price, users would get unreliable slop, and this slop could only be produced from the (mostly free) information that is already available online while disincentivizing creators from producing more of it (because search engine driven traffic is dying down).
I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.
Somehow reminds me of the astronomical cost of each bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).
The big shift in per-action cost is what always seems to be missing from the conversation. Like, in a lot of my experience the per-request cost is basically negligible compared to the overhead of running the service in general. With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that donāt appear to be getting priced in or planned for in discussions of the glorious AI technocapital future
With LLMs not only do we see massive increases in overhead costs due to the training process necessary to build a usable model, each request that gets sent has a higher cost. This changes the scaling logic in ways that donāt appear to be getting priced in or planned for in discussions of the glorious AI technocapital future
This is a very important point, I believe. I find it particularly ironic that the ātraditionalā Internet was fairly efficient in particular because many people were shown more or less the same content, and this fact also made it easier to carry out a certain degree of quality assurance. Now with chatbots, all this is being thrown overboard and extreme inefficiencies are being created, and apparently, the AI hypemongers are largely ignoring that.
Iāve done some of the numbers here, but donāt stand by them enough to share. I do estimate that products like Cursor or Claude are being sold at roughly an 80-90% discount compared to whatās sustainable, which is roughly in line with what Zitron has been saying, but itās not precise enough for serious predictions.
Your last paragraph makes me think. We often idealize blockchains with VMs, e.g. Ethereum, as a global distributed computer, if the computer were an old Raspberry Pi. But it is Byzantine distributed; the (IMO excessive) cost goes towards establishing a useful property. If I pick another old computer with a useful property, like a radiation-hardened chipset comparable to a Gamecube or G3 Mac, then we have a spectrum of computers to think about. One end of the spectrum is fast, one end is cheap, one end is Byzantine, one end is rad-hardened, etc. Even GPUs are part of this; theyāre not that fast, but can act in parallel over very wide data. In remarkably stark contrast, the cost of Transformers on GPUs doesnāt actually go towards any useful property! Anything Transformers can do, a cheaper more specialized algorithm could have also done.
Sex pest billionaire Travis Kalanick says AI is great for more than just vibe coding. Itās also great for vibe physics.
@TinyTimmyTokyo He has more dollars than sense, as they say. (Funnier if you say it out loud)
@blakestaceyMy guess is that vibe-physics involves bruteforcing a problem until you find a solution. That method sorta works, but is wholly inefficient and rarely robust/general enough to be useful.
Nah, heās just talking to an LLM.
āIāll go down this thread with [Chat]GPT or Grok and Iāll start to get to the edge of whatās known in quantum physics and then Iām doing the equivalent of vibe coding, except itās vibe physics,ā Kalanick explained. āAnd weāre approaching whatās known. And Iām trying to poke and see if thereās breakthroughs to be had. And Iāve gotten pretty damn close to some interesting breakthroughs just doing that.ā
And I donāt think you can brute force physics in general, having to experimentally confirm or disprove every random-ass intermediary hypothesis the brute force generator comes up with seems like quite the bottle neck.
For sure. Thereās an infinite amount of ways to get things wrong in math and physics. Without a fundamental understanding, all they can do is prompt-fondle and roll dice.
They are not even rolling the dice. The bot is just humoring them, it apparently just defaults to eventually going āyou are close to the edge of what is, known, well done keep goingā.
If infinite monkeys with typewriters can compose Shakespeare, then infinite monkeys with slop machines can produce Einstein (but you need to pump in infinite amounts of money first into my CodeMonkeyfy startup, just in case).
Remember last week when that study on AIās impact on development speed dropped?
A lot of peeps take away on this little graphic was āsee, impacts of AI on sw development are a net negative!ā I think the real take away is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science and their experimental set up is garbage.
https://substack.com/home/post/p-168077291
āFirst, I donāt like calling this study an āRCT.ā There is no control group! There are 16 people and they receive both treatments. Weāre supposed to believe that the ātreated unitsā here are the coding assignments. Weāll see in a second that this characterization isnāt so simple.ā
(I am once again shilling Ben Rechtās substack. )
While I also fully expect the conclusion to check out, itās also worth acknowledging that the actual goal for these systems isnāt to supplement skilled developers who can operate effectively without them, itās to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.
True. They arenāt building city sized data centers and offering people 9 figure salaries for no reason. They are trying to front load the cost of paying for labour for the rest of time.
When you look at METRās web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.
oh yeah that was obvious when you see who they are and what they do. also, one of the large opensource projects was the lesswrong site lololol
iām surprised itās as well constructed a study as it is even given that