Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • scruiser@awful.systems · 23 points · 5 days ago

    So, I’ve been spending too much time on subreddits with heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I’ve noted is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don’t involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can’t do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

    Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper, using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all: the LLM component just replaces a set of heuristics that suggest proof approaches, and the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
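
    For what it’s worth, here’s a minimal toy sketch of that division of labour, in Python. Everything in it is invented for illustration (the toy “facts”, rules and function names are not AlphaGeometry’s actual code or API): a symbolic forward-chaining engine does all the deduction, and a learned or heuristic “suggester” is only consulted to propose an auxiliary construction when deduction stalls.

    ```python
    # Toy sketch of a neuro-symbolic proof loop (illustrative only).
    # The forward-chaining engine does the real proving; the "suggester"
    # (a stand-in for hand-written heuristics or a trained model) only
    # proposes auxiliary facts when deduction gets stuck.

    def forward_chain(facts, rules):
        """Exhaustively apply rules of the form (premises -> conclusion)."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    def prove(goal, facts, rules, suggest, max_hints=5):
        for _ in range(max_hints + 1):
            facts = forward_chain(facts, rules)  # symbolic engine does the work
            if goal in facts:
                return True                      # proof closed symbolically
            hint = suggest(facts)                # learned/heuristic component
            if hint is None:
                return False                     # no usable suggestion; give up
            facts = facts | {hint}               # add the suggested construction
        return False

    # Stand-in suggester: proposes one auxiliary "construction", then gives up.
    def dumb_suggester(facts):
        return "aux_point" if "aux_point" not in facts else None

    rules = [
        (frozenset({"A", "aux_point"}), "B"),
        (frozenset({"B"}), "goal"),
    ]
    print(prove("goal", {"A"}, rules, dumb_suggester))  # True
    ```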

    I don’t really have anywhere I’m going with this; it’s just something I noticed that I don’t want to waste the energy repeatedly re-explaining on reddit, so I’m letting out a primal scream here to get it out of my system.

    • nightsky@awful.systems · 7 points · 2 days ago

      Yes, thank you, I’m also annoyed about this. Even classic “AI” approaches for simple pattern detection (what used to be called “ML” a few hype waves ago, although it’s much older than even that) are now conflated with the capabilities of LLMs. People are led to believe that ChatGPT is the latest and best and greatest evolution of “AI” in general, with all the capabilities that have ever been in anything. And it’s difficult to explain how wrong this is without getting too technical.
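
      To make “simple pattern detection” concrete, here’s a tiny pure-Python sketch of a nearest-centroid classifier, the kind of thing that counted as “AI”/“ML” a few hype waves ago (the data and names are made up for illustration): a bit of arithmetic over feature vectors, trained on a handful of measurements, with no language model anywhere near it.

      ```python
      # "Classic ML" pattern detection in miniature: a nearest-centroid
      # classifier. The training data is invented; the point is that this
      # kind of "AI" is just arithmetic over feature vectors, no LLM involved.

      def centroid(points):
          """Component-wise mean of equal-length feature vectors."""
          n = len(points)
          return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

      def train(samples):
          """samples: {label: [feature_vector, ...]} -> {label: centroid}"""
          return {label: centroid(vectors) for label, vectors in samples.items()}

      def predict(model, x):
          """Return the label whose centroid is closest (squared distance) to x."""
          def dist2(c):
              return sum((a - b) ** 2 for a, b in zip(x, c))
          return min(model, key=lambda label: dist2(model[label]))

      # Made-up 2-D measurements for two classes of widget.
      training_data = {
          "small": [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]],
          "large": [[3.0, 3.1], [2.8, 3.3], [3.2, 2.9]],
      }
      model = train(training_data)
      print(predict(model, [1.0, 1.1]))  # -> small
      print(predict(model, [3.1, 3.0]))  # -> large
      ```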

      Related, this fun article: ChatGPT “Absolutely Wrecked” at Chess by Atari 2600 Console From 1977

    • rook@awful.systems · 16 points · 4 days ago

      Relatedly, the gathering of (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) computer vision and machine learning together with LLMs under the umbrella of “AI” is something I find particularly galling.

      The eventual collapse of the AI bubble and the subsequent second AI winter are going to take down a lot of useful technology that had the misfortune to be standing a bit too close to LLMs.