Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
New Zitron dropped, and, fuck, I feel this one in my bones.
This is the heart of recognizing so much of the bullshit in the tech field. I also want to make sure that our friends in the Ratsphere get theirs for their role in enabling everyone to pretend there's a coherent path between the current state of LLMs and that hypothetical future where they can actually do things.
But the Ratspace doesn't just expect them to actually do things; it also expects them to self-improve. That's another step above mere human-level intelligence: it assumes self-improvement is even possible (and, at the highest level of nuttiness, unbounded), something we have never seen demonstrated. It certainly doesn't seem to be happening, as the gaps between each newer, better version of ChatGPT keep getting longer (a new interface around it doesn't count). So imho, given ChatGPT/LLMs and the lack of fast improvements we have seen recently (some even say performance has decreased, so we are not even getting incremental innovations), the "could lead to AGI-foom" possibility space has actually shrunk, as LLMs will not take us there. And everything including the kitchen sink has been thrown at the idea. To use some AI-weirdo lingo: with the decels not in play(*), why are the accels not delivering?
*: And let's face it, on the fronts that matter, we have lost the battle so far.
E: full disclosure, I have not read Zitron's article; they are a bit long at times. Look at it this way: you could read 1/4th of an SSC article in the same time.
Can confirm that about Zitron's writing. He even leaves you with a sense of righteous fury instead of smug self-satisfaction.
And I think that the whole bullshit "foom" argument is part of the problem. For the most prominent "thinkers" in spaces related to or overlapping with where these LLM products are coming from, the narrative was never about whether or not these models were actually capable of what they were being advertised for. Even the stochastic parrot argument, arguably the strongest and most well-formulated anti-AI case while the actual data was still coming in, was dismissed basically out of hand. "Something something emergent something." Meanwhile they just keep throwing more money and energy into this goddamn pit and the real material harms keep stacking up.