It’s brief, around 25:15

https://youtube.com/watch?v=nf7XHR3EVHo


If you’ve been sitting on making a post about your favorite instance, this could be a good opportunity to do so.

Going by our registration applications, a lot of people are learning about the fediverse for the first time and they’re excited about the idea. I’ve really enjoyed reading through them :)

  • lmmarsano@lemmynsfw.com · 19 hours ago · +39 / −2

    Not Friendica, which seems like an obvious Facebook alternative.

    Also, I think they’re onto something with their fuck-it approach, one every social media platform would benefit from. The internet was mostly that before. Content moderation primarily serves advertisers; it was never really for the people. Old internet anarchy was chaotic fun.

    • mke@programming.dev · 12 hours ago · +12

      “Content moderation primarily serves advertisers”

      I’m lost, here. Do you not think fighting toxicity and hate speech is a valid and important function of moderation that’s just as much or more for the sake of the people as it might be for advertisers?

      • lmmarsano@lemmynsfw.com · 2 hours ago · +1 / −1

        I think it’s just words & images on a screen that we could easily ignore, like people did before, and it’s a grandiose conceit to think moderation is that important or serves any cause greater than the interests of the moderators. On social media, that cause seems to be serving the real customers: the advertisers & commercial interests who pay for the attention of users. The old internet approach to toxic & hateful shit — ignoring it, gawking at the freakshow, or ridiculing/flaming it — worked fine, and often ended with people disengaging, ragequitting, or going outside to do something better. That’s not great for advertisers, who want to protect their brands & keep people pliant & unchallenged as they stay engaged in their uncritical filter bubbles & echo chambers.

        With the old internet, safety wasn’t internet-nanny, thought-police shit or “stop burning my virgin eyes & ears”. It was an anonymous handle, not revealing personally identifying information (a/s/l?), and not falling for scams or giving out payment information (unless you’re into that kinky shit). Glad to see newer social media returning to some of that.

        • mke@programming.dev · 1 hour ago · +2

          Toxicity doesn’t “work fine”; it’s contagious and destructive. For projects, it slows progress. For communities in general, it reinforces bad behavior and pushes out newcomers, leading to more negative spaces, isolation, and stagnation — and that’s just off the top of my head. These were issues in older communities just as they are in modern ones.

          I don’t see why we should abandon moderation for your benefit, at the expense of people who care.

      • Excrubulent@slrpnk.net · 10 hours ago · +4

        I think the rise of hate speech on centralised platforms relies very heavily on their centralised moderation and curation via algorithms.

        They’ve all known for a long time that their algorithms promote hate speech, but curbing that behaviour would hurt their revenue, so they don’t. They chase the fast buck and appease advertisers, who have a naturally conservative bent, and that means rage bait and conventional values.

        That’s quite apart from when platform owners explicitly support that hate speech and actively suppress left leaning voices.

        I think what we have on decentralised systems, where we curate/moderate for ourselves, works well because most of that open hate speech gets siloed, which is the best thing you can do with it.

    • qaz@lemmy.world · 5 hours ago · +4 / −5

      Lemmy has also adopted advertiser-focused moderation patterns. A great example is NSFW. What exactly is NSFW? “Not safe for work”? Why is only that relevant?
      NSFW is just used to mark advertiser-unfriendly content. Why else would nakedness, violence, sexual content, and death be grouped in the same category?
      It’s far too vague to be useful: you have no idea whether you’re about to see a nipple or a murder.

      Content warnings like on Mastodon are better, but don’t provide a way to reliably filter out categories. I personally think it would be way better to have specific nested tags for certain types of material.

      • commander@lemmings.world · 7 hours ago · +7

        Are you new to the internet? NSFW literally means what it says: it’s content that would not be safe for you to be viewing at work.

        Advertising has nothing to do with it, which is why you still get ads on NSFW boards on 4chan; they’re just NSFW ads.

      • MortUS@lemmy.world · 16 hours ago · +3

        Imagine traveling down a liminal space of tubes and the only signs are nondescript TLDs.