Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
So, I've been spending too much time on subreddits with heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].
Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper, using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component only replaces a set of heuristics that suggest proof approaches, while the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
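For anyone who wants the distinction spelled out, here's a rough sketch (my own hypothetical Python, not DeepMind's actual code or API; every name in it is made up for illustration) of what that division of labour looks like. The symbolic engine does all the actual deduction; the LLM only gets consulted as a suggestion heuristic when the engine stalls.

```python
# Hypothetical sketch of an AlphaGeometry-style hybrid loop. The objects and
# methods (symbolic_engine, llm, etc.) are invented for illustration only.

def prove(problem, symbolic_engine, llm, max_suggestions=16):
    state = symbolic_engine.init(problem)
    for _ in range(max_suggestions):
        # Exhaustive forward deduction in a rigid formal proof system.
        # This is where the proof actually gets built -- no LLM involved.
        state = symbolic_engine.saturate(state)
        if symbolic_engine.goal_reached(state):
            return symbolic_engine.extract_proof(state)
        # Only when deduction stalls does the LLM act as a heuristic,
        # proposing an auxiliary construction (a new point/line) to try.
        suggestion = llm.suggest_construction(state)
        state = symbolic_engine.add_construction(state, suggestion)
    return None  # gave up within the suggestion budget
```

Swap the LLM for any other suggestion heuristic and the architecture is unchanged, which is exactly why crediting the results to "LLMs" is the conflation I'm complaining about.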
I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.
Yes, thank you, I'm also annoyed about this. Even classic "AI" approaches for simple pattern detection (what used to be called "ML" a few hype waves ago, although it's much older than even that) are now conflated with the capabilities of LLMs. People are led to believe that ChatGPT is the latest and best and greatest evolution of "AI" in general, with all the capabilities that have ever been in anything. And it's difficult to explain how wrong this is without getting too technical.
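If anyone wants a concrete picture of what "simple pattern detection" means here, the snippet below (plain scikit-learn, my own toy example) is the sort of thing that used to get called ML: fit a small classifier to labelled feature vectors, score it, done. No transformer, no prompt, no chatbot anywhere near it.

```python
# Toy example of pre-LLM "pattern detection": a linear classifier on a
# classic tabular dataset. This is roughly what "ML" mostly looked like.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```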
Related, this fun article: ChatGPT "Absolutely Wrecked" at Chess by Atari 2600 Console From 1977
Relatedly, the lumping of computer vision and machine learning (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) in with LLMs under the umbrella of "AI" is something I find particularly galling.
The eventual collapse of the AI bubble and the subsequent second AI winter are going to take a lot of useful technology with them that had the misfortune of standing a bit too close to LLMs.