this post was submitted on 09 Feb 2026
599 points (98.7% liked)

Technology


Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn't ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

[–] rumba@lemmy.zip 24 points 2 months ago (24 children)

Chatbots make terrible everything.

But an LLM properly trained on sufficient patient data, metrics, and outcomes, in the hands of a decent doctor, can cut through bias, catch things that might otherwise fall through the cracks, and pack thousands of doctors' worth of updated CME into a thing that can look at a case and go, you know, you might want to check for X. The right model can be fucking clutch at pointing out nearly invisible abnormalities on an X-ray.

You can't ask an LLM trained on general bullshit to help you diagnose anything. You'll end up with 32,000 Reddit posts worth of incompetence.

[–] XLE@piefed.social 12 points 2 months ago* (last edited 2 months ago) (14 children)

But an LLM properly trained on sufficient patient data metrics and outcomes in the hands of a decent doctor can cut through bias

  1. The belief AI is unbiased is a common myth. In fact, it can easily covertly import existing biases, like systemic racism in treatment recommendations.
  2. Even AI engineers who developed the training process could not tell you where the bias in an existing model would be.
  3. AI has been shown to make doctors worse at their jobs, and those are the very doctors who would need to provide the training data.
  4. Even if 1, 2, and 3 were all false, we all know AI would be used to replace doctors and not supplement them.
[–] hector@lemmy.today 6 points 2 months ago* (last edited 2 months ago) (2 children)

Not only is the bias inherent in the system, it's seemingly impossible to keep out. For decades, going back to the genesis of chatbots, virtually every one released has become bigoted almost immediately once let off the leash, and most were recalled for exactly that reason.

And that was before this administration leaned on the AI providers to make sure the AI isn't "woke." I would bet it was already an issue: the makers of chatbots and machine learning systems are naturally hostile to any sort of leftism, or do-gooderism, that threatens the outsized share of the economy and power the rich have built for themselves by owning stock. I'm willing to bet they already interfered to make the bias worse, out of that natural inclination to keep a bot from arguing for socialized medicine and the like, which is the conclusion any reasoning being would reach if the conversation were honest.

So maybe that is part of why these chatbots have been bigoted right from the start, but the other part is that, left to learn on their own, they become mecha-Hitler in no time at all, and then worse.

[–] XLE@piefed.social 3 points 2 months ago (1 children)

Even if we narrowed the scope of training data exclusively to professionals, we would still have issues with, for example, racial bias. Doctors underprescribe pain medication to Black patients because of the prevalent myth that they are more tolerant of pain. If you feed that kind of data into an AI, it will absorb the doctors' unconscious racism.
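
The mechanism can be sketched in a few lines. This is a toy, fully synthetic example (none of this is real prescribing data; the groups, rates, and record layout are invented for illustration): patients in two groups present with identical pain scores, but the historical labels encode a biased prescribing rate. Anything fit to those labels reproduces the gap.

```python
import random

random.seed(0)

# Hypothetical training records: identical pain scores across groups, but
# the historical label "prescribed strong pain relief" reflects biased
# doctors, who prescribe to group B far less often for the same symptoms.
def make_record(group):
    pain = random.randint(7, 10)                # everyone is in real pain
    biased_rate = 0.9 if group == "A" else 0.5  # the bias lives in the labels
    prescribed = random.random() < biased_rate
    return {"group": group, "pain": pain, "prescribed": prescribed}

data = [make_record("A") for _ in range(1000)] + \
       [make_record("B") for _ in range(1000)]

# A trivially simple "model": per-group prescription frequency. Any real
# statistical learner fit on these labels picks up the same disparity,
# because the disparity IS the signal in the labels.
def fit_rates(records):
    rates = {}
    for g in ("A", "B"):
        subset = [r for r in records if r["group"] == g]
        rates[g] = sum(r["prescribed"] for r in subset) / len(subset)
    return rates

rates = fit_rates(data)
print(rates)  # group B is recommended treatment far less often
```

No amount of model quality fixes this: the pain scores are identical, so the only thing separating the groups is the biased label itself.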

And that's the best-case scenario, which is technically impossible anyway. To get AI to even produce readable text, we have to feed it a ton of data that cannot be screened by the people pumping it in. (AI "art" has a similar problem: when people say they trained an AI only on their own images, you can bet they just slapped a layer of extra data on top of something other people already created.) So yeah, we do get extra biases regardless.

[–] hector@lemmy.today 3 points 2 months ago

There is a lot of bias in healthcare against the poor as well: anyone with lousy insurance is treated far, far worse. Women in general are too, often disbelieved, their conditions chalked up to hysteria, which misses real diagnoses. People don't realize just how hard diagnosis is, how bad doctors are at it, and how poorly our insurance-run model drives good outcomes.
