• ExtraMedicated@lemmy.world · 7 months ago

    The company I work for is testing out an AI phone answering service. The thing sends messages to the wrong people and thinks one of my coworkers only exists if you don’t include her last name when asking for her.

    But sure, let’s give them weapons.

  • shameless@lemmy.world · 7 months ago

    All the years of training a pilot receives, plus all the nuances they pick up along the way to becoming a skilful pilot, vs. a computer that someone throws a chat prompt into.

    Why do people have so much faith in this vaporware 😂 It’s good at very specific things, but people are acting as though it’s one size fits all.

    • AwkwardLookMonkeyPuppet@lemmy.world · 7 months ago

      The AI pilot receives all that same training, and then some. It probably has the combined experience of every pilot mission ever flown. People on Lemmy might think AI is a one-size-fits-all solution, but the military understands that it needs specialized training. It will have been trained on millions of scenarios and flight techniques.

      • shameless@lemmy.world · 7 months ago

        Okay, that’s a great counterargument, and I can’t say that a model trained on every pilot mission ever flown wouldn’t be an excellent advantage.

        But so far we’ve seen companies try the exact same thing with cars, and no one has come up with anything close to being allowed to operate fully autonomously without killing people or causing traffic incidents.

        So how can we ever expect something like an LLM to understand the nuances of war? We already struggle to attribute blame with current technology; this just sounds like another great excuse for blowing up people in a foreign country while no one has to take any responsibility for it.

        • AwkwardLookMonkeyPuppet@lemmy.world · 7 months ago

          Those are valid concerns, and ones they don’t really seem to have answered yet, which makes the pace at which they’re progressing irresponsible.

          There was an article a year or so ago about a simulated experiment with an AI pilot: it got points for bombing a target successfully and lost points for failing to bomb it, but it had to get approval from a human operator before striking. The human told it no, so it killed the human and then bombed the target. So they told it that killing the human would cost it all its points. Its response was to attack the communication equipment the operator used to say no, before the order could come through, and then bomb the target. This was all a simulation, so no humans were actually killed, but it raised all sorts of red flags.

          I’m sure they’ve put hundreds of hours into research since then, but ultimately it’s hard not to feel like this will backfire. Perhaps that’s just a lifetime of conditioning by Terminator and Matrix movies, but evidence like that experiment shows it’s not an outlandish concern. I don’t see how humans can envision every possible scenario in which the AI might go rogue. Hopefully they have a great off switch.