I don’t have a quip, just a sorrowful head shake that I somehow ended up in the shitty timeline.

  • PeepinGoodArgs@reddthat.com · 1 year ago

    Researchers ultimately concluded that “a constructive strategy for identifying the violation of social norms is to focus on a limited set of social emotions signaling the violation,” namely guilt and shame. In other words, the scientists wanted to use AI to understand when a mobile user might be feeling bad about something they’ve done. To do this, they generated their own “synthetic data” via GPT-3, then leveraged zero-shot text classification to train predictive models that could “automatically identify social emotions” in that data. The hope, they say, is that this model of analysis can be pivoted to automatically scan text histories for signs of misbehavior.

    Lemme get this straight: DARPA researchers fabricated a series of words that signaled emotional states. Then they, the same DARPA researchers, labeled those series of words with the emotional states for the AI to train on (zero-shot classification). And now they hope to leverage the trained AI to identify “social emotions”?
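    For reference, the “zero-shot text classification” step looks roughly like this in practice. This is a minimal sketch using the Hugging Face transformers pipeline; the model and label set here are my guesses, since the article doesn’t say what the researchers actually used:

    ```python
    # Minimal zero-shot classification sketch -- assumed model and labels,
    # not the researchers' actual setup.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-classification",
        model="facebook/bart-large-mnli",  # assumption: any NLI-tuned model works here
    )

    text = "I shouldn't have said that to her. I feel terrible about it."
    labels = ["guilt", "shame", "pride", "neutral"]  # hypothetical label set

    result = classifier(text, labels)
    # Labels come back sorted by score, highest first.
    print(result["labels"][0], round(result["scores"][0], 3))
    ```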

    Everything about this is fucking stupid.

    The GPT-3 prompt could’ve been: “What are some sentences a shameful socialist/conservative/anarchist/terrorist/etc. protestor/litterer/murderer/liar/etc. might use?”, implicitly connecting shame to a particular ideology. As such, the “social emotion” signals encode more than emotion; they carry whatever biases the method of generation and classification baked in.
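    To make the point concrete, here’s a hypothetical sketch of that synthetic-data step. The prompt wording and the OpenAI client usage are my assumptions, not DARPA’s actual method:

    ```python
    # Hypothetical synthetic-data generation: the ideology is baked into the
    # prompt, so every generated "shame" sentence inherits it.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    ideology = "protestor"  # the implicit association being smuggled in

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the article only says "GPT-3"
        messages=[{
            "role": "user",
            "content": f"Write 5 sentences a shameful {ideology} might say.",
        }],
    )

    # Every generated line gets labeled "shame" -- emotion and ideology
    # are now entangled in the training data.
    for sentence in response.choices[0].message.content.splitlines():
        print("shame\t" + sentence)
    ```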

    Suddenly, some random person is being targeted for having fucked up and they’re like, “Wtf did I do? Yes, I did shoplift from Target, but it was like a $20 shirt because my job at Wal-Mart makes me use food stamps to make ends meet. Fuck off!”

    The AI automatically detects another violation of social norms.

    And you’re like, “That’s an edge case…”. Yeah, sure, but it’s DARPA we’re talking about here. That should be enough said.

  • Dizzy Devil Ducky@lemm.ee · 1 year ago

    Have you texted someone lately to express guilt over…something? The government probably wants to know about it.

    Good thing I’m one of those people who doesn’t feel guilty about the things I do, much less express it. Any guilt fades quickly for me anyway, so I think I’m safe until they find a way to actually start reading our minds and invading our privacy of thought.

  • dewritoninja@pawb.social · 1 year ago

    Great, another tool that can be used against women and minorities. With far-right authoritarianism on the rise, I can’t wait for the AI to flag me as a raging homosexual so I can end up in a labor camp or dead.

  • nxfsi@lemmy.world · 1 year ago

    Lemmings will support this as long as it is against:

    • Republicans
    • pedophiles
    • Tr*mp supporters
    • anti-vax
    • climate change deniers
    • racists
    • N***s