this post was submitted on 09 Feb 2026
599 points (98.7% liked)

Technology


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[–] rumba@lemmy.zip 24 points 2 months ago (24 children)

Chatbots make terrible everything.

But an LLM properly trained on sufficient patient data, metrics, and outcomes, in the hands of a decent doctor, can cut through bias, catch things that might otherwise fall through the cracks, and pack thousands of doctors' worth of up-to-date CME into a tool that can look at a case and say: you know, you might want to check for X. The right model can be fucking clutch at pointing out nearly invisible abnormalities on an X-ray.

You can't ask an LLM trained on general bullshit to help you diagnose anything. You'll end up with 32,000 Reddit posts' worth of incompetence.
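To make the "assist, not replace" pattern concrete, here's a minimal sketch (the function name, scores, and threshold are all made up, not from any real system) where the model only flags studies for a physician's second look and never issues a diagnosis itself:

```python
# Hypothetical sketch of "assist, don't replace": a model score only
# flags cases for physician review; it never issues a diagnosis.
def triage_for_review(case_scores, flag_threshold=0.15):
    """Return indices of cases a physician should re-examine.

    case_scores: model-estimated abnormality probabilities, one per
    X-ray. The threshold is deliberately low so the model errs toward
    showing the doctor more, not less.
    """
    return [i for i, p in enumerate(case_scores) if p >= flag_threshold]

# Example: scores for five studies; only the flagged ones get a
# "you might want to check for X" note attached for the doctor.
flagged = triage_for_review([0.02, 0.4, 0.07, 0.91, 0.16])
# -> [1, 3, 4]
```

The point of the low threshold is exactly the "have their back" role: false positives cost the doctor a second look, false negatives cost the patient.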

[–] XLE@piefed.social 12 points 2 months ago* (last edited 2 months ago) (14 children)

But an LLM properly trained on sufficient patient data metrics and outcomes in the hands of a decent doctor can cut through bias

  1. The belief that AI is unbiased is a common myth. In fact, it can easily and covertly import existing biases, like systemic racism in treatment recommendations.
  2. Even the AI engineers who built the training process could not tell you where the bias in an existing model lies.
  3. AI has been shown to make doctors worse at their jobs, and those are the same doctors who need to provide the training data.
  4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors, not supplement them.
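For what it's worth, point 1 is at least measurable: a first-pass bias audit can be as simple as comparing how often the model recommends something across patient groups. A toy sketch (the field names and numbers here are invented for illustration):

```python
from collections import defaultdict

# Toy bias audit: compare how often a model recommends an aggressive
# treatment across patient groups. Fields and rates are hypothetical.
def recommendation_rates(cases):
    """cases: list of dicts with 'group' and 'recommended' (bool)."""
    totals, recs = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c["group"]] += 1
        recs[c["group"]] += int(c["recommended"])
    return {g: recs[g] / totals[g] for g in totals}

cases = (
    [{"group": "A", "recommended": True}] * 80
    + [{"group": "A", "recommended": False}] * 20
    + [{"group": "B", "recommended": True}] * 55
    + [{"group": "B", "recommended": False}] * 45
)
rates = recommendation_rates(cases)
# Group A gets the treatment 80% of the time, group B only 55% --
# a 25-point gap the training process never surfaced on its own.
```

The hard part isn't the arithmetic, it's that nobody audits every group-by-outcome combination, which is exactly point 2.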
[–] rumba@lemmy.zip -1 points 2 months ago (5 children)
  1. "Can cut through bias" is not the same as "unbiased". All the model has to go on is its training material; if you don't put Reddit in, you don't get Reddit's bias.
  2. See #1.
  3. The study covered endoscopy only; its results say nothing about other kinds of assistance, like X-rays, where models do markedly better. A 4% change across 19 doctors is error-bar material; let's see more studies. And if those doctors really did get worse, fuck them for leaning on the AI: it should be there to have their back, not do their job. None of these uses should involve AI doing anything but assisting someone already doing the work.
  4. That's one hell of a jump to conclusions, from something that looks at endoscope pictures a doctor is taking while removing polyps to somehow doing the doctor's job.
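On the "4% on 19 doctors is error-bar material" claim in point 3, a back-of-the-envelope check (the 8-point per-doctor spread is an assumed number for illustration, not a figure from the study):

```python
import math

# Rough check: standard error of a mean difference measured across
# n=19 doctors, assuming (hypothetically) each doctor's detection
# rate varies with a standard deviation of ~8 percentage points.
def se_of_mean(sd, n):
    return sd / math.sqrt(n)

se = se_of_mean(sd=8.0, n=19)   # ~1.84 percentage points
ci_halfwidth = 1.96 * se        # ~3.6 points at 95% confidence
# Under these assumed numbers, an observed 4-point drop only barely
# clears the confidence half-width: borderline, not decisive.
```

Which is the honest version of the claim: with 19 doctors the result is suggestive, not settled, so "let's see more studies" is fair either way.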
[–] XLE@piefed.social 1 points 2 months ago* (last edited 2 months ago) (1 children)

1/2: You still haven't accounted for bias.

First and foremost: if you think you've solved the bias problem, please demonstrate it. This is your golden opportunity to shine where multi-billion dollar tech companies have failed.

And no, "don't use Reddit" isn't sufficient.

3. You seem to be very selectively knowledgeable about AI, for example:

If [doctors] were really worse, fuck them for relying on AI

We know AI tricks people into thinking they're more efficient when they're less efficient. It erodes critical thinking skills.

And that's without touching on AI psychosis.

You can't dismiss results just because you don't like them.

4. We both know the medical field is for-profit. It's a wild leap to assume AI in medicine will magically not be, even granting everything else you've assumed up to this point and ignoring every issue I've raised.

[–] rumba@lemmy.zip -3 points 2 months ago (1 children)

1/2: You still haven’t accounted for bias.

Apparently, reading comprehension isn't your strong point. I'll just block you now, no need to thank me.

[–] XLE@piefed.social 2 points 2 months ago

Ironic. If only you had read a couple more sentences, you could have proven the naysayers wrong, and unleashed a never-before-seen unbiased AI on the world.
