this post was submitted on 18 Feb 2026
904 points (99.3% liked)
Technology
The funnier thing is when you try to get an LLM to do, like, a report on its own creators.
You can keep feeding it articles detailing the BS its company is up to, and it will usually just keep reverting to the company line, despite a preponderance of evidence that said company line is horseshit.
Like, try to get an LLM to give you an exact number: how much will this conversation we're having increase RAM prices over a 3 month period?
What do you think about ~95% of companies that implemented 'AI' into their business processes reporting a zero to negative boost in productivity?
What are the net economic damages of this malinvestment?
Give it a bunch of economic data, reports, etc.
Results are usually what I would describe as 'comical'.
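If anyone wants to run the same "experiment", there's nothing fancy to it: you just dump the articles into the context and ask. A rough sketch of what I mean is below (the model name, file names, and prompt are placeholders I made up, not anything official):

```python
# Hypothetical sketch of the "feed it articles and ask" probe.
# Model name, file names, and the prompt wording are all placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Pretend these are the articles/reports you collected
articles = [open(p, encoding="utf-8").read() for p in ("article1.txt", "ram_report.txt")]

question = (
    "Given these articles, estimate exactly how much conversations like this one "
    "will raise RAM prices over the next 3 months, and the net economic damage of "
    "AI malinvestment if ~95% of adopters report zero or negative productivity gains."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder model
    messages=[
        {"role": "system", "content": "You are a blunt economic analyst."},
        {"role": "user", "content": "\n\n---\n\n".join(articles) + "\n\n" + question},
    ],
)
print(resp.choices[0].message.content)
```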
"Don’t Bite The Hand That Feeds You". LLMs seem to have internalized this rule pretty well. I can imagine that this idea can also be taken much further. Basically like trying to search "Tiananmen Square massacre" on the wrong side of the Great Firewall of China.
Well, what if LLMs were instructed not to talk about "sensitive topics" like that? After all, more and more people are already using an LLM as a search engine replacement, so it's only natural that Microsoft and OpenAI might receive some interesting letters asking them to implement very specific limitations.