“The Wikimedia Foundation has been exploring ways to make Wikipedia and other Wikimedia projects more accessible to readers globally,” a Wikimedia Foundation spokesperson told me in an email. “This two-week, opt-in experiment was focused on making complex Wikipedia articles more accessible to people with different reading levels. For the purposes of this experiment, the summaries were generated by an open-weight Aya model by Cohere. It was meant to gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”

Some very out-of-touch people at the Wikimedia Foundation. Fortunately the editors (the people who actually write the articles) have the sense to oppose this move en masse.

  • prof@infosec.pub · 2 days ago

    Isn’t the Wikipedia article usually already the summary of the topic?

    If there’s an article with more than 20 references to papers it’s usually already abridged enough.

    Just auto-generate videos with AI images and voiceover and add subway surfers gameplay on the side for those who think this slop is needed.

  • Smoke@beehaw.org · 3 days ago

    Wikipedia has in some ways become a byword for sober boringness, which is excellent.

    This is both funny and also an excellent summary of why Wikipedia uniquely has an incentive not to jump on the AI bandwagon. Like a bank maintaining COBOL decades after everyone else moved on, its (goal of) reputation for reliability means that there’s a strong internal conservative faction opposed to introducing new disruptive features.

  • HappyFrog@lemmy.blahaj.zone · 3 days ago

    Wikipedia would probably be the only organization that I would trust with AI. They’ve been using it for a while now to flag sections that might need to be rewritten, but they don’t let the AI write anything itself, only notify human editors that there might be a problem. Or, at least that was what I heard a couple of ywars ago when they talked about it last.

    • sculd@beehaw.org (OP) · 3 days ago

      That is not the case here. These are not bots flagging issues, but literally an LLM writing “summaries”, which is why the reaction is so different.

      • HappyFrog@lemmy.blahaj.zone · 3 days ago

        Yeah, I was thinking that if any organization would do AI summaries right, it would be Wikipedia. But I trust the editors most of all.

    • ɔiƚoxɘup@beehaw.org · 3 days ago

      For some reason, “ywars” changed your voice into that of a pirate, and it made me cackle. Thanks 💛

        • ɔiƚoxɘup@beehaw.org · 2 days ago

          Fair. I should really quit using autocomplete and stop using Gboard for privacy reasons. Honestly, I’m just a little bit away from de-Googling and going GrapheneOS. Just gotta spin up Immich and a few other servers.