Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • @normanwall@lemmy.world · 5 months ago

    It controls all power infrastructure, can find new exploits to build its own botnet, and can reprogram the firmware of devices (routers/switches/servers).

    It can send press releases, emails, and tweets using language similar to that of any user it has read before.

    • Ultragramps · 5 months ago

      So, if it only clocks me using slang for rizz I don’t need, I’ll know it’s a bot, no cap. Word.