Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post — there's no quota for posting and the bar really isn't that high.

The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can't escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this…)

  • BigMuffin69@awful.systems
    9 days ago

:( looked in my old CS dept's discord, recruitment posts for the "Existential Risk Laboratory" running an intro fellowship for AI Safety.

    Looks inside at materials, fkn Bostrom and Kelsey Piper and whole slew of BS about alignment faking. Ofc the founder is an effective altruist getting a graduate degree in public policy.

        • scruiser@awful.systems
          9 days ago

Center For Applied Rationality. They hosted "workshops" where people could learn to be more rational. Except their methods weren't really tested. And pretty culty. And reaching the correct conclusions (on topics such as AI doom) was treated as proof of rationality.

Edit: still host, present tense. I had misremembered some news of some other rationality-adjacent institution as them shutting down; nope, they are still going strong, offering regular 4-day brainwashing sessions workshops.

      • scruiser@awful.systems
        9 days ago

Mesa-optimization? I'm not sure who in the lesswrong sphere coined it… but yeah, it's one of their "technical" terms that don't actually have academic publishing behind them, so jargon.

Instrumental convergence… I think Bostrom coined that one?

The AI alignment forum has a claimed origin here. Is anyone on the article here from CFAR?

        • Architeuthis@awful.systems
          9 days ago

          Mesa-optimization

Why use the perfectly fine 'inner optimizer' mentioned in the references when you can just ask google translate to give you the clunkiest, most pedestrian and also wrong-part-of-speech Greek term to use in place of 'in' instead?

          Also natural selection is totally like gradient descent brah, even though evolutionary algorithms actually modeled after natural selection used to be their own subcategory of AI before the term just came to mean lying chatbot.

        • istewart@awful.systems
          9 days ago

Mesa-optimization… that must be when you rail some crushed-up Adderall XRs, boof some modafinil for good measure, and spend the night making sure your kitchen table surface is perfectly flat with no defects abrasions deviations contusions…