Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned soo many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

Last weekā€™s thread

(Semi-obligatory thanks to @dgerard for starting this)

  • o7___o7@awful.systems
    1 month ago

    Update on LLM reviewer situation:

    PM is down to let us pitch them our argument. Good news: PM seems like a cool person, is open minded, and is being pretty frank about the forces at work here. Bad news: taking action on this will open a whole can of worms, so any proof has to be ironclad. After conferring with our local grant wizards, the battle plan is to crank out a 15 minute pitch consisting of:

    • a 2 min elevator pitch of our tech, highlighting what the reviews mangled
    • intro to LLMs for people who know what glycosylation is
    • intro to semiotics for the same
    • show how transformer architectures transform symbols into symbols to produce text-shaped objects without actual intent, ideas, or context (and why "automated AI detection" is also bullshit).
    • show a few examples of plausible-at-first-glance gen-AI slop (the nonexistent Turkish fortress, mouse dck, etc)
    • highlight how our weird reviews (both good and bad) fit exactly into this bin (absolutely misinterpreting a table, inventing a bacterial species we didn't use and talking shit about it, miscounting our team members, etc)

    We'll be leaning on the Stochastic Parrot paper pretty hard, because it's a good entry into the field on the skeptical side and is just well constructed in general. I'm also on the hunt for a simplified diagram of how LLMs convert tokens to arrays to tokens, from the original transformer literature. Unfortunately, so much of the literature is obscurantist on purpose, and I want to avoid falling into the "It can't be that stupid" trap. Any pointers in that direction are most welcome!
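    For anyone who wants the tokens-to-arrays-to-tokens loop in the most stripped-down form possible, here's a toy sketch (random weights and a made-up five-word vocabulary, so it's a stand-in for the shape of the computation, not a real trained transformer — the attention/MLP stack is replaced by a plain average):

    ```python
    import numpy as np

    # Made-up vocabulary and random weights -- purely illustrative.
    vocab = ["the", "cat", "sat", "on", "mat"]
    rng = np.random.default_rng(0)
    d_model = 8
    embed = rng.normal(size=(len(vocab), d_model))  # token id -> vector
    unembed = embed.T                               # vector -> logits (tied weights)

    def next_token(token_ids):
        # 1. look up an array (embedding) for each context token
        x = embed[token_ids]          # shape (seq, d_model)
        # 2. stand-in for the attention/MLP stack: average the context
        h = x.mean(axis=0)            # shape (d_model,)
        # 3. project back to vocabulary-sized logits, then softmax
        logits = h @ unembed          # shape (len(vocab),)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # 4. emit the most probable symbol -- no intent, just arithmetic
        return int(probs.argmax())

    ctx = [vocab.index("the"), vocab.index("cat")]
    print(vocab[next_token(ctx)])
    ```

    The point the sketch makes is the one in the pitch: every step is symbol-in, array-arithmetic, symbol-out, so "meaning" never enters the pipeline anywhere.
    
    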

    Wish us luck, heh!

    • self@awful.systems
      1 month ago

      good luck! it sounds like you're coming in remarkably well-prepared, so unless they're gonna go fingers-in-ears (and it sounds like the PM's better than that), you're at least likely to make an impact

      Unfortunately, so much of the literature is obscurantist on purpose

      between this and all the SEO on OpenAI's marketing horseshit and breathlessly parroted press releases, it's exhausting to find good sources for how any of this stuff actually works in reality. shit, I've had old primary sources on things like Sora get buried after OpenAI's promises didn't pan out. I'm hoping you can find what you need — our back archives might have a few links if you haven't searched through here yet.