Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

    • scarabic@lemmy.world · 11 days ago

      It’s actually interesting. They found the LLMs gave the correct diagnosis high-90-something percent of the time if they had access to the notes doctors wrote about the patients’ symptoms. But when thrust into the room cold with patients, the LLMs couldn’t gather that symptom info themselves.

      • Hacksaw@lemmy.ca · 11 days ago

        LLM gives correct answer when doctor writes it down first… Wowoweewow very nice!

        • scarabic@lemmy.world · 10 days ago

          If you think there’s no work between symptoms and diagnosis, you’re dumber than you think LLMs are.