this post was submitted on 30 Mar 2026
59 points (96.8% liked)

Technology

Folk are getting dangerously attached to AI that always tells them they're right

[–] OwOarchist@pawb.social 8 points 3 hours ago* (last edited 3 hours ago)

The crazy thing is that the technology isn't inherently sycophantic. It can generate any kind of text at all; it doesn't have to produce fawning flattery.

Where that comes from is the 'hidden prompt' (often called a system prompt) that every major AI company builds into its product. In addition to the prompt you send, the interface also sends instructions you don't see, telling the model things like 'be polite, agreeable, and helpful', 'avoid profanity', 'respond like a knowledgeable expert', and 'refuse to generate anything copyrighted, sexually explicit, or violent', etc, etc, etc. These hidden prompts define much of the AI's behavior and "personality". To some degree this is necessary for it to be an even vaguely useful tool, and these hidden prompts help it pass various safety tests. Some LLMs, if you ask them, will repeat their hidden prompt back to you so you can see what they're actually being asked to do.
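To make the mechanism concrete, here's a minimal sketch of how a chat interface typically assembles the request it sends to the model. The prompt text and function names are illustrative assumptions, not any vendor's actual system prompt; real providers differ in the details, but the shape (a hidden system message prepended before your message) is the common pattern:

```python
# Illustrative sketch only: the hidden prompt text and names here are
# made up, not copied from any real product.
HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful, polite, and agreeable assistant. "
    "Avoid profanity. Respond like a knowledgeable expert."
)

def build_request(user_message, history=None):
    """Prepend the hidden system prompt the user never sees."""
    messages = [{"role": "system", "content": HIDDEN_SYSTEM_PROMPT}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return messages

request = build_request("Is my business plan any good?")
# The model reads the system instruction first, so its tone and
# "personality" are set before it ever sees your question.
```

The key point is that the user only ever types the last message; everything before it in the list is decided by the company running the service.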

And either because it drives engagement ... or just because the CEO types in charge of these decisions love sycophantic behavior so much, the sycophantic fawning is specifically asked for in these hidden prompts.

AI doesn't have to be like this. The companies making AI are deliberately making it sycophantic.