Big if true.

  • BedSharkPal@lemmy.ca · 3 days ago

    Even if it’s not true, it’s not surprising. And this is why centralized algorithmic control will never work.

    • thundermoose@lemmy.world · 3 days ago

      Even if it’s not true, it’s not surprising.

      This is a weird sentence. I honestly have no idea what it means.

      • BedSharkPal@lemmy.ca · 3 days ago

        Fair enough. The point is just that if it’s a lie, it’s completely believable and that’s a problem.

  • Kit@lemmy.blahaj.zone · 3 days ago

    Looks like there’s a github link at the bottom of the page. Any techie people want to take a look and provide their take? This article raises a lot of red flags.

    • AngryPancake@sh.itjust.works · 3 days ago

      The docs give an example for a Trump character. It's weird that they would do that, but people make choices.

      But then I went to the GitHub project of Eliza and just searched for trump in the repo. Granted, this was only about 10 mins looking through the code with the trump keyword, but it definitely seems like everything is in place to have a trump-like ai. There is also a note that the trump bot doesn’t directly reply to questions but often diverts the conversation, so it was definitely tested.

      That’s only the main branch, god knows what’s in the other branches, I’m sure if someone invests significant time, more info could be gathered. Regardless, the program advertises itself as a way to create bots for social media, so surely someone has used it.
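      For anyone who wants to repeat the search across every branch rather than just main, here's a rough sketch using `git grep`. The throwaway demo repo below is purely illustrative (the file names and contents are made up); in practice you'd run the final command inside a clone of the actual project.

```shell
# Sketch: keyword-searching all branches of a repo with git grep.
# A throwaway demo repo keeps this self-contained; substitute a real clone.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main demo && cd demo        # -b main needs git >= 2.28
git config user.email demo@example.com && git config user.name demo
echo 'character: trump' > example.json     # hypothetical matching file
git add example.json && git commit -qm 'add example'
git checkout -qb side                      # second branch with another match
echo 'trumpBot config' > other.txt
git add other.txt && git commit -qm 'add other'
# Case-insensitive search of *all* local branches, not just HEAD;
# output lines look like "branch:path/to/file"
git grep -il trump $(git for-each-ref --format='%(refname:short)' refs/heads/)
```

On a real clone you'd also want `git fetch --all` first, and pass `refs/remotes/` to `for-each-ref` to cover remote branches.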

      It’s difficult to believe OP (I definitely want to), but the software is concerning regardless.

      Edit: I was a bit back and forth about the whole thing, but I feel like investigating the project definitely has some merit. I’m done for now, but I’d like to hear more opinions as well.

      • AngryPancake@sh.itjust.works · 3 days ago

        Actually, thinking more about it, it’s quite sinister. The characters they have available as examples are: c3po, cosmosHelper, Dobby, eternalAi, sbf, trump.

        Of those, I (and I’m guessing most people) only know c3po, Dobby, and Trump. And trump is the only known human model. Now let’s say you want to test the application (which you can from their website if you give them your chatgpt API token): people are more likely to pick a character they know, so it’s likely to be one of those three. So just running the example with the trump model because you want to test it has already launched a chatbot with a right-leaning rhetoric.

    • Pieisawesome@lemmy.world · 3 days ago

      It’s an odd example.

      By itself, it could be a meaningless example that someone made in poor taste.

      It proves nothing empirically, but it’s not a normal example at all.

  • ribboo@lemm.ee · 3 days ago

    I have a hard time grasping that such a thing would not have leaked, with evidence, if it was that widespread.