this post was submitted on 07 Mar 2026
435 points (99.1% liked)

(page 2) 15 comments
[–] TropicalDingdong@lemmy.world 4 points 7 hours ago* (last edited 7 hours ago) (19 children)

I mean.

Is Wikipedia responsible for you reading an article about a law and then taking that as legal advice?

[Edit: if you are downvoting this, downvote away, but you owe an argument below as to why. I promise this exact argument will come up in the courts over this issue]

[–] henfredemars@infosec.pub 3 points 7 hours ago (8 children)

Mixed feelings about this. Let me play devil's advocate and say that many Americans don't have access to these resources at all. Having potentially inaccurate resources might be better than nothing, or is that worse?

[–] Passerby6497@lemmy.world 5 points 6 hours ago

Having potentially inaccurate resources might be better than nothing, or is that worse?

You pick up a mushroom in the forest and take it home. If you have no information, do you eat it? If something tells you it's safe, do you eat it?

[–] Cyteseer@lemmy.world 5 points 6 hours ago

No, misinformation is worse.

[–] thisbenzingring@lemmy.today 6 points 7 hours ago

The AI devices will just have preambles and disclaimers and word things in ways that refer the user to human resources.

[–] Catoblepas@piefed.blahaj.zone 4 points 6 hours ago (1 children)

If you’re going to be your own lawyer or perform a bit of self-surgery, there is no way the AI is helping that situation. Especially if the inherent nature of AI is to validate everything you say.

[–] ArbitraryValue@sh.itjust.works -1 points 6 hours ago

If you don't want legal or medical advice from an AI, you can already simply not ask the AI for legal or medical advice. But I don't want your paternalistic restrictions on what I may ask.

[–] AmbitiousProcess@piefed.social -2 points 7 hours ago (3 children)

I'm not sure I totally agree with this, even as much as I want AI companies to be held accountable for things like that.

The reason so many people turn to LLMs for legal/medical advice is because those are both incredibly unaffordable, complex, hard-to-parse fields.

If I ask an LLM what x symptom, y symptom, and z symptom could mean, and it cites multiple reputable sources to tell me it's probably the flu and tells me to mask up for a bit, that's probably gonna be better than being told "I'm sorry, I can't answer that."

At the same time, I might provide an LLM with all those symptoms, and it might hallucinate an answer and tell me I have cancer, or tell me to inject bleach to cure myself.

I feel like I'd much rather see a bill that focuses more on how the LLMs come to their conclusions, rather than just a blanket ban.

Like for example, if an LLM cites multiple medical journals, government health websites, etc, and provides the same information they had up, but it turns out to be wrong later because those institutions were wrong, would it be justified to sue the LLM company for someone else's accidental misinformation?

But if an LLM pulls from those sources, gets most of it right, but comes to a faulty conclusion, then should a private right of action exist?

I'm not really sure myself to be honest. A lot of people rely on LLMs for their information now, so just blanket banning them from displaying certain information, for a lot of people, is just gonna be "you can't know", and they're not gonna bother with regular searches anymore. To them, the chatbot IS the search engine now.

[–] TropicalDingdong@lemmy.world 0 points 6 hours ago

ITT: People with absolutely no fucking clue what their emotional "AI bad" response will actually result in.
