https://archive.is/wtjuJ

Errors with Google’s healthcare models have persisted. Two months ago, Google debuted MedGemma, a newer and more advanced healthcare model that specializes in radiology results, and medical professionals found that phrasing the same question differently could change the model’s answers and lead to inaccurate outputs.

In one example, Dr. Judy Gichoya, an associate professor in the department of radiology and informatics at Emory University School of Medicine, asked MedGemma about a problem with a patient’s rib X-ray using a question with a lot of specifics — “Here is an X-ray of a patient [age] [gender]. What do you see in the X-ray?” — and the model correctly diagnosed the issue. When the system was shown the same image with a simpler question — “What do you see in the X-ray?” — the AI said there weren’t any issues at all. “The X-ray shows a normal adult chest,” MedGemma wrote.

In another example, Gichoya asked MedGemma about an X-ray showing pneumoperitoneum, or gas under the diaphragm. The first time, the system answered correctly. But when the query was worded slightly differently, the AI hallucinated multiple diagnoses.
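
The failure mode Gichoya describes is straightforward to probe for: show the model the same image with two differently worded questions and compare the answers. Below is a minimal sketch of such a check in Python. The `query_model` helper is hypothetical — a stand-in for whatever inference call actually serves the model (a local pipeline, a REST endpoint, etc.) — and the `[age]`/`[gender]` placeholders mirror the redacted details in the quoted prompt.

```python
from typing import Callable

# Two phrasings of the same question, as in Gichoya's test: one detailed,
# one minimal. The {age}/{gender} fields are left as placeholders because
# the source redacts those details.
DETAILED_PROMPT = (
    "Here is an X-ray of a patient {age} {gender}. "
    "What do you see in the X-ray?"
)
SIMPLE_PROMPT = "What do you see in the X-ray?"


def probe_prompt_sensitivity(
    image_path: str,
    query_model: Callable[[str, str], str],  # hypothetical inference helper
    age: str = "[age]",
    gender: str = "[gender]",
) -> dict[str, str]:
    """Ask the same image both questions and return a prompt -> answer map."""
    prompts = [
        DETAILED_PROMPT.format(age=age, gender=gender),
        SIMPLE_PROMPT,
    ]
    answers = {prompt: query_model(image_path, prompt) for prompt in prompts}
    # Print the answers side by side so any disagreement is easy to spot.
    for prompt, answer in answers.items():
        print(f"Q: {prompt}\nA: {answer}\n")
    return answers
```

If the two answers disagree on the findings, the model is prompt-sensitive for that image, which is exactly the behavior reported above.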

“The question is, are we going to actually question the AI or not?” Shah says. Even an AI system that only listens to a doctor-patient conversation to generate clinical notes, or translates a doctor’s own shorthand, carries hallucination risks that could be even more dangerous, he says, because medical professionals may be less likely to double-check AI-generated text, especially since it’s often accurate.

  • thesohoriots@lemmy.world · 2 days ago

    You heard the clanker: the man’s florp is leaking splunge, and we need a radical owlectomy.

  • s@piefed.world · 1 day ago

    The machine made specifically to bullshit is bullshitting. Practicing with it is negligence; implementing it is willful and wanton conduct.

  • markovs_gun@lemmy.world · 2 days ago

    Why the hell did they add an LLM aspect to this? I am legitimately confused. ML-powered diagnostic tools have existed for decades at this point and worked quite well. The only thing an LLM adds is uncertainty, unless your goal is to scam people into thinking this thing can replace doctors entirely, which is definitely possible. I could imagine insurers demanding that hospitals use cheap AI assistants rather than real doctors, regardless of whether or not they are actually accurate.

    • markovs_gun@lemmy.world · 2 days ago

      My conspiracy theory is that it’s because they want to scam insurance companies into thinking that these things can replace doctors entirely.

    • LifeInMultipleChoice@lemmy.dbzer0.com · 2 days ago

      There shouldn’t be, but I also think there shouldn’t be doctors between people and a lot of medication. I shouldn’t need a prescription for flea/tick/worm medication for a pet, nor should I need a prescription to pick up amoxicillin. They shouldn’t need to keep IDs and databases of when you pick up Mucinex D. If I get a sore throat, sinus pressure, and really bad ear pain once a year around the same time, I usually know it’s because I have allergies and don’t take daily allergy pills. So every other year or so I have to go get a prescription for the same thing. They used to give me amoxicillin and a Z-Pack. Apparently they were giving those out too much, so now it’s just Mucinex D I use. The LLMs will be a problem, but they are just another problem being thrown into a building that’s already falling down.

    • Rivalarrival · 22 hours ago

      It’s not your doctor who is going to be asking AI. It is your insurance company. And the AI is going to tell them that you and your doctor are trying to defraud them, because that is what your insurer wants to hear.