• BeatTakeshi@lemmy.world · +66/-1 · 8 months ago

    “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

    “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”

    Are we still supposed to believe that the pursuit of AI development is for the good of Humanity?

    Fuck you, Google, for opening Nimbus to the IDF via a contract that contains a clause saying it can't be broken for any reason. Fucking moronic disgrace to humanity, the lot of you

  • apfelwoiSchoppen@lemmy.world · +54 · 8 months ago

    Another case where AI is used as a slick marketing term for a black box. A box in which humans selected indiscriminate bombing and genocide. Sure, there is new technology involved, but at the end of the day it is just military-industry marketing to justify humans mass-murdering other humans.

    • Deestan@lemmy.world · +42 · 8 months ago

      It’s phrenology again.

      You really want to do something, but it feels evil and you don’t want to be evil so you slap some pseudoscience on it and relax. It’s done for Reasons now.

    • cone_zombie@lemmy.world · +8 · 7 months ago

      if is_hamas(new_target):
      
          x1 = new_target.x - 1000
          y1 = new_target.y - 1000
          x2 = new_target.x + 1000
          y2 = new_target.y + 1000
      
          airstrike(x1, y1, x2, y2, phosphorus=True)
      
  • Flying Squid@lemmy.world · +23 · 8 months ago

    Maybe don’t use something that is rarely discussed without using the word “hallucination” in your plans to FUCKING KILL PEOPLE?

      • intrepid@lemmy.ca · +4/-1 · 8 months ago

        Doesn’t mean that it won’t hallucinate. Or whatever you call an AI making up crap.

        • mwguy@infosec.pub · +3/-3 · 7 months ago

          LLMs hallucinate all the time. The hallucination is the feature. Depending on how you design the neural network, you can get an AI that doesn't hallucinate. LLMs have to, because they're mimicking human speech patterns and predicting one of many possible responses.

          A model that tries to predict locations of people likely wouldn’t work like that.

  • Whirling_Cloudburst@lemmy.world · +17 · 8 months ago

    We warned the world over ten years ago that this shit was going to happen. It will only get worse when AI drone swarms can be deployed on the cheap.

  • PugJesus@kbin.social · +16 · 8 months ago

    Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.

    Fucking lmao

  • AutoTL;DR@lemmings.world [bot] · +4/-1 · 8 months ago

    This is the best summary I could come up with:

    The Israeli military’s bombing campaign in Gaza used a previously undisclosed AI-powered database that at one stage identified 37,000 potential targets based on their apparent links to Hamas, according to intelligence sources involved in the war.

    In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict.

    Israel’s use of powerful AI systems in its war on Hamas has entered uncharted territory for advanced warfare, raising a host of legal and moral questions, and transforming the relationship between military personnel and machines.

    The testimony from the six intelligence officers, all of whom have been involved in using AI systems to identify Hamas and Palestinian Islamic Jihad (PIJ) targets in the war, was given to the journalist Yuval Abraham for a report published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call.

    According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.

    Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants.

    The original article contains 2,185 words, the summary contains 238 words. Saved 89%. I’m a bot and I’m open source!

  • zerog_bandit@lemmy.world · +2/-25 · 7 months ago

    It sounds sinister until you remember that Hamas wipes its ass with the Geneva Conventions and regularly disguises fighters as civilians.