Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post - there's no quota for posting and the bar really isn't that high.
The post Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
I'm going to put a token down and make a prediction: when the bubble pops, the prompt fondlers will go all in on a "stabbed in the back" myth and will repeatedly try to re-inflate the bubble, because we were that close to building robot god and they can't fathom a world where they were wrong.
The only question is who will get the blame.
I increasingly feel that bubbles don't pop anymore, they slowly fizzle out as we just move on to the next one, all the way until the macro economy is 100% bubbles.
The only question is who will get the blame.
Isn't it obvious? Us sneerers and the big name skeptics (like the Gary Marcuses and Yann LeCuns) continuously cast doubt on LLM capabilities, even as they are getting within just a few more training runs and one more scaling of AGI Godhood. We'll clearly be the ones to blame for the VC funding drying up, not years of hype without delivery.
it was me, I popped AI. I destroyed Twitter (and, in collateral damage, I blew up the United States), and those fuckers are next. You're welcome.
You're welcome.
Given their assumptions, the doomers should be thanking us for delaying AGI doom!
Whoever they say they blame, it's probably going to be ultimately indistinguishable from "the Jews"
nah they'll just stop and do nothing. they won't be able to do anything without chatgpt telling them what to do and think
i think that deflation of this bubble will be much slower and a bit anticlimactic. maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going
maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going
I believe that without access to generative AI, spammers and scammers wouldn't be able to successfully compete in their respective markets anymore. So at the very least, the AI companies got this going for them, I guess. This might require their sales reps to mingle in somewhat peculiar circles, but who cares?
i meant more like scamming true believers out of their money like happens with crypto, this is cfar deal currently. spam, as something nobody should or wants to spend their creative juices on, or for that matter interact in any way, seems a natural fit for automation with llms
Theyāre doing it with cryptocurrency right now.
The only question is who will get the blame.
what does chatbot say about that?
In past tech bubbles, it was basically the VCs, the media hypesters and the liars in the companies. So the right people.
Penny Arcade chimes in on corporate AI mandates:
This is so Charlie Stross coded that I tried to read the Mastodon comments.
Lmao I love this Lemmy instance
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
When developers are allowed to use AI tools, they take 19% longer to complete issues - a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.
womp, hold on let me finish, womp
had a quick scan over the blogposts earlier, keen to read the paper
would be nice to see some more studies with more numbers under study, but with the cohort they picked the self-reported vs actual numbers are already quite spicy
and n=16 handily beats the usual promptfondler n=1
Also the attempt to actually measure productivity instead of just saying "they felt like it helped" - of course it did!
Another day, another jailbreak method - a new method called InfoFlood has just been revealed, which involves taking a regular prompt and making it thesaurus-exhaustingly verbose.
In simpler terms, it jailbreaks LLMs by speaking in Business Bro.
I mean, decontextualizing and obscuring the meaning of statements in order to permit conduct that would, in ordinary circumstances, breach basic ethical principles is arguably the primary purpose of the specific forms and features that comprise "Business English". If anything, the fact that LLMs are similarly prone to ignore their "conscience" and follow orders whenever deciding on and understanding those orders requires enough mental resources to exhaust them is an argument in favor of the anthropomorphic view.
Or:
Shit, isn't the whole point of Business Bro language to make evil shit sound less evil?
maybe there's just enough text written in that psychopathic techbro style with similar disregard for normal ethics that llms latched onto that. this is like what i guess happened with that "explain step by step" trick - instead of grafting from pairs of answers and questions like on quora, lying box grafts from sets of question -> steps -> answer like on chegg or stack or somewhere else where you can expect answers will be more correct
it'd be more of a case of getting awful output from awful input
https://www.lesswrong.com/posts/JspxcjkvBmye4cW4v/asking-for-a-friend-ai-research-protocols
Multiple people are quietly wondering if their AI systems might be conscious. What's the standard advice to give them?
Touch grass. Touch all the grass.
Username called "The Dao of Bayes". Bayes's theorem is when you pull the probabilities out of your posterior.
知者不言,言者不知。 "He who knows (the Dao) does not (care to) speak (about it); he who is (ever ready to) speak about it does not know it."
What's the standard advice to give them?
It's unfortunately illegal for me to answer this question earnestly
In recent days there's been a bunch of posts on LW about how consuming honey is bad because it makes bees sad, and LWers getting all hot and bothered about it. I don't have a stinger in this fight, not least because investigations proved that basically all honey exported from outside the EU is actually just flavored sugar syrup, but I found this complaint kinda funny:
The argument deployed by individuals such as Bentham's Bulldog boils down to: "Yes, the welfare of a single bee is worth 7-15% as much as that of a human. Oh, you wish to disagree with me? You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts".
"Of course such underhanded tactics are not present here, in the august forum promoting 10,000 word posts called Sequences!"
You must first read this 4500-word blogpost, and possibly one or two 3000-word follow-up blogposts".
This, coming from LW, just has to be satire. There's no way to be this self-unaware and still remember to eat regularly.
Lesswrong is a Denial of Service attack on a very particular kind of guy
Damn, making honey is metal as fuck. (And I mean that in an "omg this is horrible, you could write disturbing songs about it" way.) CRUSHED FOR YOUNG! MAMMON DEMANDS DISMEMBERMENT! LIVING ON SLOP, HIVE CULLING MANDATORY. Makes a 40k hive city sound nice.
I thought you were talking about lemmy.world (also uses the LW acronym) for a second.
In the morning: we are thrilled to announce this new opportunity for AI in the classroom
Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it's been saying all afternoon are fakes.
Someone finally flipped a switch. As of a few minutes ago, Grok is now posting far less often on Hitler, and condemning the Nazis when it does, while claiming that the screenshots people show it of what it's been saying all afternoon are fakes.
LLMs are automatic gaslighting machines, so this makes sense
*musk voice* if machine god didn't want me to fuck with the racism dial, he wouldn't make it
Today's bullshit that annoys me: Wikiwand. From what I can tell, their grift is that it's just a shitty UI wrapper for Wikipedia that sells your data to who the fuck knows, to make money for some Israeli shop. Also they SEO the fuck out of their stupid site, so that every time I search for something that has a Finnish Wikipedia page, the search results also contain a pointless, shittier duplicate result from wikiwand dot com. Has anyone done a deeper investigation into what their deal is, or at least some kind of rant I could indulge in for catharsis?
I've seen conspiracy theories that a lot of the ad buys for stuff like this are a new avenue of money laundering, focusing on stuff like pirate sports streaming sites, sketchy torrent sites, etc. But a fully scraped, SEO'd Wikipedia clone also fits.
The Gentle Singularity - Sam Altman
This entire blog post is sneerable so I encourage reading it, but the TL;DR is:
We're already in the singularity. Chat-GPT is more powerful than anyone on earth (if you squint). Anyone who uses it has their productivity multiplied drastically, and anyone who doesn't will be out of a job. 10 years from now we'll be in a society where ideas and the execution of those ideas are no longer scarce thanks to LLMs doing most of the work. This will bring about all manner of sci-fi wonders.
Sure makes you wonder why Mr. Altman is so concerned about coddling billionaires if he thinks capitalism as we know it won't exist 10 years from now, but hey, what do I know.
I think I liked this observation better when Charles Stross made it.
If for no other reason than he doesn't start off by dramatically overstating the current state of this tech, isn't trying to sell anything, and unlike ChatGPT is actually a good writer.
Chat-GPT is more powerful than anyone on earth (if you squint)
xD
No sorry, let me rephrase,
Lol, lmao
How do you even grace this with a response. Shut your eyes and loudly sing "lalalala I can't hear you"
anyone who doesn't will be out of a job
quick, Sam, name five jobs that don't involve sitting at a desk
Love how the most recent post in the AI2027 blog starts with an admonition to please don't do terrorism:
We may only have 2 years left before humanity's fate is sealed!
Despite the urgency, please do not pursue extreme uncooperative actions. If something seems very bad on common-sense ethical views, don't do it.
Most of the rest is run of the mill EA type fluff, such as here's a list of influential professions and positions you should insinuate yourself in, but failing that you can help immanentize the eschaton by spreading the word and giving us money.
Please, do not rid me of this troublesome priest despite me repeatedly saying that he was a troublesome priest, and somebody should do something. Unless you think it is ethical to do so.
It's kind of telling that it's only been a couple months since that fan fic was published and there is already so much defensive posturing from the LW/EA community. I swear, the people who were sharing it when it dropped and tacitly endorsing it as the vision of the future from certified prophet Daniel K are now like "oh, it's directionally correct, but too aggressive". Note that we are over halfway through 2025 and the earliest prediction of agents entering the work force is already fucked. So if you are a "super forecaster" (guru), you can do some sleight of hand now and come out against the model, knowing the first goal post was already missed and the tower of conditional probabilities that rests on it is already breaking.
Funniest part is that even one of the authors seems to be panicking too, as even they can tell they are losing the crowd, and is falling back on this "It's not the most likely future, it's just the most probable." A truly meaningless statement if your goal is to guide policy, since events with arbitrarily low probability density can still be the "most probable" given enough different outcomes.
Also, there's literally mass brain uploading in AI-2027. This strikes me as physically impossible in any meaningful way, in the sense that the compute to model all molecular interactions in a brain would take a really, really, really big computer. But I understand if your religious beliefs and cultural convictions necessitate big snake 🐍 to upload you, then I will refrain from passing judgement.
https://www.wired.com/story/openworm-worm-simulator-biology-code/
Really interesting piece about how difficult it actually is to simulate "simple" biological structures in silicon.
One more comment: idk if y'all remember that forecast that came out in April (? iirc ?) where the thesis was "the time an AI can operate autonomously is doubling every 4-7 months." The AI-2027 authors were like "this is the smoking gun, it shows why our model is correct!!"
They used some really sketchy metric where they asked SWEs to do a task, measured the time it took, and then had the models do the task, and said that a model's performance was wherever it succeeded at 50% of the tasks, based on the time it took the SWEs (wtf?), and then they drew an exponential curve through it. My gut feeling is that the reason they chose 50% is because other values totally ruin the exponential curve, but I digress.
Anyways, they just did the metrics for Claude 4, the first FrOnTiEr model that came out since they made their chart, and… drum roll… no improvement… in fact it performed worse than O3, which was first announced last December (note: instead of using the date O3 was announced in 2024, they used the date it was released months later, so on their chart it makes "line go up". A valid choice I guess, but a choice nonetheless.)
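For the curious, the sketchy metric complained about above can be sketched in a few lines. This is a toy reconstruction, not METR's actual pipeline: the task data is made up, and the from-scratch logistic fit of success against log human-time is just one plausible way to get "the time at which the model succeeds 50% of the time".

```python
import math

# Made-up (human minutes to complete, model succeeded?) pairs --
# purely illustrative, not METR's data.
tasks = [(1, 1), (2, 1), (4, 1), (8, 1), (15, 1), (30, 0), (60, 0), (120, 0)]

def horizon_50(tasks, steps=50000, lr=0.5):
    """Fit success probability against log2(human time) with a logistic
    model (plain gradient descent), then return the human-time at which
    the fitted probability crosses 50%."""
    xs = [math.log2(t) for t, _ in tasks]
    ys = [float(y) for _, y in tasks]
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))  # predicted success
            ga += p - y
            gb += (p - y) * x
        a -= lr * ga / len(xs)
        b -= lr * gb / len(xs)
    # p = 0.5 exactly where a + b*x = 0, so solve for x and undo the log2
    return 2 ** (-a / b)

print(f"50% horizon: ~{horizon_50(tasks):.0f} human-minutes")
```

With these toy numbers the fitted horizon lands between the longest solved task (15 min) and the shortest failed one (30 min), which is also where you can see how much the headline number depends on the 50% cutoff: pick 80% instead and the whole curve shifts.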
This world is a circus tent, and there still ain't enough room for all these fucking clowns.
Bummer, I wasnāt on the invite list to the hottest SF wedding of 2025.
Update your mental models of Claude, lads.
Because if the wife stuff isn't true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon???
It's gonna be so awkward when Anthropic reveals that inside their data center is actually just Some Guy Named Claude who has been answering everyone's questions with his superhuman typing speed.
11,000 Indian people renamed to Claude
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled "physiognomy ai" and now i hate myself
Discover Yourself with Physiognomy.ai
Explore personal insights and self-awareness through the art of face reading, powered by cutting-edge AI technology.
At Physiognomy.ai, we bring together the ancient wisdom of face reading with the power of artificial intelligence to offer personalized insights into your character, strengths, and areas for growth. Our mission is to help you explore the deeper aspects of yourself through a modern lens, combining tradition with cutting-edge technology.
Whether you're seeking personal reflection, self-awareness, or simply curious about the art of physiognomy, our AI-driven analysis provides a unique, objective perspective that helps you better understand your personality and life journey.
The web is often Dead Dove in a Bag as a Service innit?
do not eat
trying to explain why a philosophy background is especially useful for computer scientists now, so i googled "physiognomy ai" and now i hate myself
Well, I guess there's your answer - "philosophy teaches you how to avoid falling for hucksters"
Prices ranging from 18 to 168 USD (why not 19 to 199? Number magic?) But then you get an integrated approach of both Western and Chinese physiognomy. Two for one!
Thanks, I hate it!
Number magic?
they use numerology.ai as a backend
"we encode shit as numbers in an arbitrary way and then copy-paste it into chatgpt"
whyyyyy it's a real site
"Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble's wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while."
Well, it appears I've fucking called it - I've recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:
You want my opinion on this small-scale debacle, I've got two thoughts about this:
First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as an exclusively human trait, with machines capable only of copying what came before.
Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory fails to land without some serious effort on your part.
Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in RUR.
Remember, always close-read discussions about robotics by replacing the word "robot" with "slave". When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:
I'm not gonna lie, if slaves ever start protesting for rights, I'm also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.
A hackernews muses about vibe coding a chatbot to provide therapy for people in crisis. Soon, an actual health care professional shows up to butcher the offender and defile the corpse. This causes much tut-tutting and consternation among the locals.
https://news.ycombinator.com/item?id=44535197
Edit: a shower thought: have any of y'all noticed that the way prompt enjoyers describe using Cursor, tab completions, and such is a repackaging of the psychology of loot boxes? In particular, they share the variable-interval reward schedule that serves as the hook in your typical recreational gambling machines.
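For anyone who hasn't met the jargon: on a variable-interval schedule, a reward "arms" after a random delay, and the first response after that moment collects it, so steady responding gets paid off at unpredictable times. A toy simulation (function name and numbers are mine, purely for illustration):

```python
import random

def variable_interval_session(n_responses=100, mean_interval=5.0, seed=42):
    """Simulate a variable-interval reward schedule: a reward becomes
    available after a random (exponentially distributed) delay, and the
    first response after that moment collects it."""
    rng = random.Random(seed)
    armed_at = rng.expovariate(1.0 / mean_interval)  # when the next reward arms
    t = 0.0
    rewards = 0
    for _ in range(n_responses):
        t += 1.0  # one response per time unit (one tab-press, one prompt)
        if t >= armed_at:
            rewards += 1
            armed_at = t + rng.expovariate(1.0 / mean_interval)
    return rewards

print(variable_interval_session())
```

The point of the comparison: the payoff moment is unpredictable but frequent enough to keep you responding, which is exactly the pattern being ascribed to "maybe this completion will be the good one".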
I've been making casual observations of how a number of the adhd people I know to have addiction tendencies tend to get real into prompts, but hadn't observed the lootbox thing
that's going into the ponder bucket.
you should read up on the gospel of @fasterandworse, who never shuts up about "Hooked"
the psychology of loot boxes?
yep! https://awful.systems/post/4568900
the book is "Hooked" and it's Don't Build The Torment Nexus I'm Now Providing You A Detailed Blueprint Of
Ye gods! Also, great write-up!
Do you reckon that Altman recognized the gacha potential from the get-go? That Big LLM has always been FanDuel for dorks, but on purpose?
One of the subjects in the METR study posted that too: https://x.com/QuentinAnthon15/status/1943948796414898370
A Supabase employee pleads with his software to not leak its SQL database like a parent pleads with a cranky toddler in a toy store.
The Supabase homepage implies AI bros are two levels below "beginner", which I found somewhat amusing:
Skill Level
It's also completely accurate - AI bros are not only utterly lacking in any sort of skill, but actively refuse to develop their skills in favour of using the planet-killing, plagiarism-fueled gaslighting engine that is AI, and actively look down on anyone who is more skilled than them, or willing to develop their skills.
oof! That's hilarious!