this post was submitted on 27 Feb 2026
166 points (97.7% liked)


cross-posted from: https://lemmy.world/post/43640522

If ChatGPT wants to replace health professionals, it should be held liable for the "advice" it gives.

all 22 comments
[–] thebestaquaman@lemmy.world 41 points 2 days ago* (last edited 2 days ago) (4 children)

In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment

So it performs slightly worse than a coin flip...

In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see

Holy shit! That's a lot worse than a coin flip.

Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care

And there are real people out there who actually trust this tech to make real decisions for them. It performs significantly worse than a coin flip on both false negatives and false positives. You are literally better off flipping a coin or rolling a die than asking this thing what to do.
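The coin-flip comparison checks out on simple arithmetic: a fair coin making a binary triage call would be wrong about 50% of the time, and both error rates quoted above exceed that. A quick sketch (the labels are paraphrases of the article's figures, not its exact wording):

```python
# Compare the reported ChatGPT Health error rates against a fair coin,
# which would be wrong ~50% of the time on a binary stay-home/seek-care call.

coin_flip_error = 0.50

reported_error_rates = {
    "urgent cases told to stay home (false negatives)": 0.516,
    "safe cases told to seek immediate care (false positives)": 0.648,
}

for label, rate in reported_error_rates.items():
    verdict = "worse" if rate > coin_flip_error else "not worse"
    print(f"{label}: {rate:.1%} vs. coin flip at {coin_flip_error:.0%} -> {verdict}")
```

Both rates land above 50%, so on these two measures the coin really does beat the platform.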

[–] Dave@lemmy.nz 11 points 2 days ago

Even better than a coin flip is asking this what to do then doing the opposite!

[–] FallenWalnut@lemmy.world 11 points 2 days ago

It is truly horrifying when you drill into the numbers.

I can see that it MIGHT be useful as a tool for medical professionals, but exposing it to the public is an insane risk.

[–] U7826391786239@piefed.zip 6 points 2 days ago

They'll never be regulated, because fascists love mass surveillance. Who cares about false positives? The number of people bagged goes up either way.

[–] Atherel@lemmy.dbzer0.com 4 points 2 days ago

You're even better off by doing the opposite of what chatgpt tells you to do.

[–] artyom@piefed.social 31 points 2 days ago (1 children)

Holy shit, TIL there's a ChatGPT Health!? How is this not unauthorized practice of medicine?

[–] Wammityblam@lemmy.world 20 points 2 days ago (1 children)

Past that, how is it HIPAA compliant?

There is no fucking way I believe that Open AI is not skimming these interactions for training.

[–] CompactFlax@discuss.tchncs.de 11 points 2 days ago (1 children)

HIPAA governs how the data is held, but individuals can consent to sharing. Even with OpenAI.

[–] expr@piefed.social 4 points 2 days ago

You can also revoke that consent, and HIPAA requires that data can be completely destroyed. There's no way they are compliant.

[–] konomi@piefed.blahaj.zone 21 points 2 days ago

For the love of gawd stop putting the bullshit machine in everything.

"Doctor, I have severe chest pain. Do I have a heart attack?"

"Computer says no."

[–] SaharaMaleikuhm@feddit.org 7 points 2 days ago

Trusting the lying machine now gets you a Darwin award. Nice

[–] floquant@lemmy.dbzer0.com 11 points 2 days ago

If ChatGPT wants to replace health professionals, it should be held liable for the "advice" it gives.

Not should, it's fucking mental that it isn't.

[–] panda_abyss@lemmy.ca 16 points 2 days ago

This is a product that should not and should never have existed.

[–] SaraTonin@lemmy.world 7 points 2 days ago (1 children)

I honestly don’t get why OpenAI and Apple seem to be trying to explicitly market LLMs as being capable of giving medical advice. It’s so obviously a lawsuit waiting to happen.

[–] brynden_rivers_esq@lemmy.ca 4 points 1 day ago

It’s because they think they’ll win those lawsuits. They may be right. They’re gonna pull an Alex Jones: “Oh come on, it’s a bit! Everyone knows it’s bullshit!”

[–] kescusay@lemmy.world 11 points 2 days ago (1 children)

How about a $10 billion fine for OpenAI for every mistake? Make it hurt. Make them pull the plug on this travesty.

[–] whotookkarl@lemmy.dbzer0.com 6 points 2 days ago

Not likely under fascism or oligarchy, you need a functional legislature and judiciary for that sort of justice.