It’s brief, around 25:15

https://youtube.com/watch?v=nf7XHR3EVHo


If you’ve been sitting on making a post about your favorite instance, this could be a good opportunity to do so.

Going by our registration applications, a lot of people are learning about the fediverse for the first time and they’re excited about the idea. I’ve really enjoyed reading through them :)

  • mke@programming.dev · 12 hours ago

    Content moderation primarily serves advertisers

    I’m lost, here. Do you not think fighting toxicity and hate speech is a valid and important function of moderation that’s just as much or more for the sake of the people as it might be for advertisers?

    • lmmarsano@lemmynsfw.com · 48 minutes ago

      I think it’s just words & images on a screen that we could easily ignore like people did before, and people are indulging a grandiose conceit by thinking that moderation is that important or serves any greater cause than the interests of moderators. On social media, that cause seems to be serving the consumers, by which I mean the advertisers & commercial interests who pay for the attention of users. The old internet approach of ignoring, gawking at the freakshow, or ridiculing/flaming toxic & hateful shit worked fine then, resulting in many people disengaging, ragequitting, or going outside to do something better. But that’s not great for advertisers protecting their brand & wanting to keep people pliant & unchallenged as they stay engaged in their uncritical filter bubbles & echo chambers.

      With the old internet, safety wasn’t internet-nanny, thought-police shit and “stop burning my virgin eyes & ears.” It was an anonymous handle, not revealing personally identifying information (a/s/l?), not falling for scams & giving out payment information (unless you’re into that kinky shit). Glad to see newer social media returning to some of that.

      • mke@programming.dev · 27 minutes ago

        Toxicity doesn’t “work fine”; it’s contagious and destructive. For projects, it slows progress. For communities in general, it reinforces bad behavior and pushes out newcomers, leading to more negative spaces, isolation, and stagnation, just off the top of my head. These were issues in older communities just as they are in modern ones.

        I don’t see why we should abandon moderation for your benefit, at the expense of people who care.

    • Excrubulent@slrpnk.net · 10 hours ago

      I think the rise of hate speech on centralised platforms relies very heavily on their centralised moderation and on curation via algorithms.

      They have all known for a long time that their algorithms promote hate speech, but they know that curbing that behaviour negatively affects their revenue, so they don’t do it. They chase the fast buck, and they appease advertisers who have a naturally conservative bent, and that means rage bait and conventional values.

      That’s quite apart from when platform owners explicitly support that hate speech and actively suppress left leaning voices.

      I think what we have on decentralised systems, where we curate/moderate for ourselves, works well because most of that open hate speech is siloed, which I think is the best thing you can do with it.