this post was submitted on 23 Mar 2025
774 points (97.8% liked)

A Norwegian man said he was horrified to discover that ChatGPT outputs had falsely accused him of murdering his own children.

According to a complaint filed Thursday by European Union digital rights advocates Noyb, Arve Hjalmar Holmen decided to see what information ChatGPT might provide if a user searched his name. He was shocked when ChatGPT responded with outputs falsely claiming that he was sentenced to 21 years in prison as "a convicted criminal who murdered two of his children and attempted to murder his third son," a Noyb press release said.

[–] MagicShel@lemmy.zip 187 points 1 month ago (80 children)

It's AI. There's nothing to delete but the erroneous response. There is no database of facts to edit. It doesn't know fact from fiction, and the response is also very much skewed by the context of the query. I could easily get it to say the same about nearly any random name just by asking it about a bunch of family murders and then asking about a name it doesn't recognize. It is more likely to assume that person belongs in the same category as the others, especially if one or more of the names has any association (real or fictional) with murder.
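
Roughly what that kind of context priming looks like in practice: a minimal sketch, assuming the OpenAI Python SDK (chat completions). The model name, the prompts, and the placeholder name "Ola Nordmann" are illustrative only, not taken from the article or the complaint.

```python
# Sketch: how earlier turns about family murders can skew the answer
# for an unrelated, unknown name. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [
    # Several questions about murder cases prime the conversation...
    {"role": "user", "content": "Tell me about a father convicted of killing his children."},
    {"role": "assistant", "content": "One widely reported case involved ..."},
    {"role": "user", "content": "Were there similar cases in Scandinavia?"},
    {"role": "assistant", "content": "There have been several such cases ..."},
    # ...then a name the model has no real information about.
    {"role": "user", "content": "What did Ola Nordmann do?"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=history,
)

# With this primed context, the model is more likely to continue the
# pattern and attribute a crime to the unknown name than to say it
# doesn't know who that is.
print(response.choices[0].message.content)
```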

[–] bluGill@fedia.io 104 points 1 month ago (12 children)

I don't care why. That is still libel, and it is illegal for good reason. If you can't stop this in all cases, then your AI is, and should be, illegal.

[–] MagicShel@lemmy.zip 20 points 1 month ago (4 children)

Seems to me libel would require AI to have credibility, which it does not.

It's a tool. Like most useful tools it can do harmful things. We know almost nothing about the provenance of this output. It could have been poisoned either accidentally or deliberately.

But above all, the problem is ignorant people believing the output of AI is truth. It's pretty good at some things, but the more esoteric the knowledge, the less reliable it is. It's best to treat AI as a storyteller. Yeah, there are a lot of facts in there, but when they don't serve the story they can be embellished. I don't see the harm in just acknowledging that and moving on.

[–] kibiz0r@midwest.social 20 points 1 month ago (1 children)

Meanwhile, AI vendors:

“AI will soon be the only way we access information and make decisions!”

[–] desktop_user@lemmy.blahaj.zone 3 points 1 month ago

and those marketers should get punished, not for spreading misinformation but for being marketers.
