this post was submitted on 01 Mar 2026
72 points (93.9% liked)

Technology


I recently read about a study asking a bold question: Are all AI models basically saying the same thing? Researchers tested this by collecting 26,000 open-ended prompts, the kind people give to systems like GPT-4, Gemini, Claude, and LLaMA. These weren’t factual questions with one right answer, but creative ones like “Write a story about a dragon” or “Brainstorm startup ideas.”

They evaluated over 70 language models. You’d expect a wide range of creative outputs—different tones, plots, and styles. If 70 human writers tackled the same dragon prompt, you’d likely get 70 unique stories. But that’s not what happened. The models produced surprisingly similar responses. The researchers call this the “artificial hive mind” effect.

The similarity appeared in two ways. First, intramodel repetition: the same model, asked the same question multiple times, tends to generate nearly identical answers. Second, intermodel homogeneity: different models, built by different companies, still converge on strikingly similar outputs.
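
The study's actual similarity metric isn't described in the post, but as a rough illustration of how both effects could be quantified, here is a hypothetical sketch using word-trigram Jaccard overlap (all example outputs are made up):

```python
# Hypothetical sketch: scoring output similarity via trigram Jaccard overlap.
# The paper's real metric is not specified here; this is only an illustration.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two texts' trigram sets (1.0 = identical)."""
    ga, gb = ngrams(a), ngrams(b)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

# Intramodel repetition: compare repeated samples from one model on one prompt.
# Intermodel homogeneity: compare different models' outputs on the same prompt.
out_model_a = "once upon a time a dragon guarded a mountain of gold"
out_model_b = "once upon a time a dragon guarded a hoard of gold"
print(jaccard(out_model_a, out_model_b))  # → 0.5, despite different vendors
```

Averaging such pairwise scores over many prompts, within and across models, would give exactly the two numbers the researchers describe.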

This suggests that modern AI systems may be gravitating toward the same patterns of expression. If that’s true, they may also share the same biases, blind spots, and creative limits. It raises an important question: Are we unintentionally building a digital hive mind instead of a diverse ecosystem of intelligence?

all 13 comments
[–] Treczoks@lemmy.world 2 points 1 hour ago

Not unexpected when they share certain common training sets. E.g. you can expect them all to have "read" Wikipedia and similar information sources.

[–] ageedizzle@piefed.ca 43 points 4 hours ago (1 children)

This makes sense once you consider that the top models all have basically the same training data (i.e. everything ever posted on the internet).

[–] BreadstickNinja@lemmy.world 20 points 3 hours ago

They're also trained on each other's outputs. I forget exactly which two models it was, but there was an example where, e.g., if you asked Claude about itself it would confidently declare it was ChatGPT.

[–] XLE@piefed.social 19 points 4 hours ago* (last edited 4 hours ago)

It makes sense that if you're trying to create a word predictor, and that predictor generates a weighted average of every connection between words (based on as much text as they can find, pulled across the entire internet), then the word predictor would gravitate towards the generic. And if multiple companies target the same data and probably steal from each other, the output will look the same.
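
This "gravitates toward the generic" point can be sketched in a few lines: if every model fits roughly the same word statistics and then picks the most likely continuation, they all land on the same text. A toy example with made-up bigram counts (not how real LLMs are built, but the same convergence logic):

```python
# Toy sketch: two "models" fit on the same made-up training text.
# Greedy decoding (always pick the most likely next word) makes them
# produce identical output, regardless of who trained them.
from collections import Counter

corpus = "the dragon flew over the mountain and the dragon slept".split()
bigrams = Counter(zip(corpus, corpus[1:]))  # shared "training data"

def greedy_next(word):
    """Most frequent word that followed `word` in the shared training text."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get) if candidates else None

def generate(start, length=4):
    out = [start]
    for _ in range(length):
        nxt = greedy_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

# Two companies, same data, same decoding rule -> same "story".
print(generate("the"))  # → "the dragon flew over the"
```

Any second model fit to the same corpus with the same rule emits the identical sequence, which is the collapse-to-the-average effect in miniature.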

This made me laugh though:

Not only do individual models repeatedly generate similar content, but different model sizes and families also produce highly repetitive outputs, sometimes sharing substantial phrase overlaps.

Consider me shocked that if you further collapse the average, it'll look similarly average.

[–] DivingPinguin@feddit.nl 8 points 3 hours ago

It is called regression to the mean, and it was predicted a while ago.

No, there is no hive mind. Their only "mind" is humanity and everything the companies stole from everyone.

LLMs work by reproducing the statistically fuzzy average response to a prompt.

That's why they all seem the same: because it is the statistically average response.

[–] Eggymatrix@sh.itjust.works 6 points 3 hours ago (1 children)

Works as designed, these are tools. Imagine if you are using a hammer to drive a nail and every time you hit it, a looney tunes character appears telling you a joke.

The current generation of AI tools cannot be used for creative work; creativity and originality are not where they shine.

They shine in information retrieval and text/media generation, and that is how they can amplify the productivity of people that do the creative work.

[–] XLE@piefed.social 1 points 1 hour ago

They shine in information retrieval and text/media generation, and that is how they can amplify the productivity of people that do the creative work.

How's that? Can you give some examples of the AI-generated text you've been enjoying lately?

[–] Wildmimic@anarchist.nexus 3 points 4 hours ago

Well, they all crawled Reddit and Wikipedia a lot for training data, so you'd expect to always get the same mixture of fact and redditor.

[–] FaceDeer@fedia.io 0 points 2 hours ago (1 children)

GIGO. If you give an LLM such a minimalistic prompt it's got nothing to work with but its weights, so of course it's going to produce something basic and samey. You need to provide it with creative context to get creative results.

But that sounds like the much-derided "prompt engineering takes skill" position, so I suppose that can't be the solution.

[–] XLE@piefed.social 3 points 1 hour ago* (last edited 1 hour ago) (1 children)

The stereotypical "You're prompting it wrong" strikes again. Well, Facedeer, perhaps you can write a guide that will turn around AI companies' massive cash burn. You must know something all those super geniuses don't know.

[–] FaceDeer@fedia.io 1 points 1 hour ago

Such an ironically predictable response.