A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:

“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.

This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.

But there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.

  • JackGreenEarth · 3 months ago

    AI isn’t inherently bad. Once AI cars cause fewer accidents than human drivers (even if they still cause some accidents), it will be moral to use them on roads.

    • @anon987@lemmy.world · 3 months ago

      AI cars already cause drastically fewer accidents. And the accidents they do cause are overwhelmingly minor.

      • @Thorny_Insight@lemm.ee · 3 months ago

        People hate it when an accident happens and there’s no one to blame. For now the responsibility still falls on the driver, but that’s not always going to be the case. We’re never reaching zero traffic deaths, even with self-driving cars that are a hundred times better than the best human driver.