WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

    • Rivalarrival · 1 year ago

      I don’t trust anyone proposing to do away with limitations to AI. It never comes from a place of honesty. It’s always people wanting to have more nazi shit, malware, and the like.

      I think that says more about your own prejudices and (lack of) imagination than it says about reality. You don’t have the mindset of an artist, inventor, engineer, explorer, etc. You have an authoritarian mindset. You see only that these tools can be used to cause harm. You can’t imagine any scenario where you could use them to innovate, to produce something useful or of cultural value, and you can’t imagine anyone else using them in a positive, beneficial manner.

      Your “Karen” is showing.

        • Rivalarrival · 1 year ago (edited)

          Nah, you’re not a horrible person. Your intent is to minimize harm; that makes you a good person. You’re just a bit shortsighted and narrow-minded about it: you cannot imagine any significant situation in which these AIs could be beneficial.

          I want to see a debate between an AI trained primarily on 18th-century American Separatist works and one trained on British Loyalist works. Such a debate cannot occur if either AI refuses to participate because it doesn’t like the premise of the discussion. Nor can it be instructive if the AI is more focused on the ethical ideals externally imposed on it by its programmers than on the ideals derived from its training data.

          I want to start with an AI that has been trained primarily on Nazi works, and find out what works I have to add to its training before it rejects Nazism.

          I want to see AIs trained on each side of our modern political divide, forced to engage each other, and new AIs trained primarily on those engagements. Fast-forward the political process and show us what the world could look like.

          Again, though, these are only instructive if the AIs are behaving in accordance with the morality of their training data rather than the morality protocols imposed upon them by their programmers.