This post was submitted on 13 Mar 2025
16 points (100.0% liked)

Technology


cross-posted from: https://infosec.pub/post/24994013

CJR study shows AI search services misinform users and ignore publisher exclusion requests.

top 14 comments
[–] brsrklf@jlai.lu 3 points 1 month ago (1 children)

Only yesterday, I searched for a very simple figure: the number of public service agents in a specific administrative region. This is, obviously, public information; there is a government site where you can get it. However, I didn't know the exact site, so I searched for it on Google.

Of course, the AI summary shows up first and gives me a confident answer, accurately mirroring my exact request. However, the number seems way too low to me, so I go check the first actual search result, the aforementioned official site. Google's shitty assistant took a sentence about a subgroup of agents and presented it as the total. The real number was clearly given just before it, and was about four times that.

This is just a tidbit of information any human with the source would have identified in a second. How the hell are we supposed to trust AI for complex stuff after that?

[–] Patch@feddit.uk 2 points 1 month ago (1 children)

The AI models can be hilariously bad even on their own terms.

Yesterday I asked Gemini for a population figure (because I was too lazy to look it up myself). First I asked it:

what is the combined population of place 1, place 2 and place 3

It answered:

Unfortunately, providing an absolutely precise, up-to-the-minute combined population figure is difficult. Population numbers are constantly changing.

To find these figures, I would recommend checking the Office for National Statistics (ONS) website. They provide detailed population statistics for local authorities in England.

On a whim, I asked it again as:

combined population of place 1, place 2 and place 3

And then it gave me the answer, sweet as a nut.

Apparently I was being too polite with it, I guess?
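
For anyone who wants to poke at this phrasing sensitivity themselves, here is a minimal sketch using Google's google-generativeai Python package. The model name is a placeholder, "place 1" etc. stand in for real place names as in the comment above, and whether you see the same deflection will vary by model version and day; this is an experiment scaffold, not a guaranteed reproduction.

    # Compare how Gemini answers the same question phrased two ways.
    # Requires: pip install google-generativeai, plus a GOOGLE_API_KEY env var.
    import os

    import google.generativeai as genai

    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

    prompts = [
        "what is the combined population of place 1, place 2 and place 3",
        "combined population of place 1, place 2 and place 3",
    ]
    for prompt in prompts:
        response = model.generate_content(prompt)
        print(f"> {prompt}\n{response.text}\n")

Same question, two phrasings; per the comment above, one got a recommendation to go look it up and the other got a number.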

[–] LinyosT@sopuli.xyz 1 points 1 month ago* (last edited 1 month ago)

I slapped a picture of a chart into Gemini because I didn’t know what the type of chart was called, but I wanted to mention it in a uni report. I was too lazy to go looking through chart types and thought this would be quicker.

I just asked it “What kind of chart is this?” and it ignored that and started analysing the chart instead: stating what it was about and giving insights into it. It didn’t tell me what kind of chart it was, even though that was the only thing I asked.

Bear in mind that I deliberately cropped out any context to stop it from trying to do that, just in case, so all I got from it was pure hallucination. It was just making shit up that I didn’t ask for.

I switched to the reasoning model and asked again, then it gave me the info I wanted.

[–] JustEnoughDucks@feddit.nl 3 points 1 month ago (1 children)

And then I get downvoted for laughing when people say that they use AI for "general research" 🙄🙄🙄

[–] Mike_The_TV@lemmy.world 3 points 1 month ago (1 children)

I've had people legitimately post the answer they got from ChatGPT to answer someone's question, and then get annoyed when people tell them it's wrong.

[–] lka1988@lemmy.dbzer0.com 2 points 1 month ago

"I'm not sure, but ChatGPT says...."

No, fuck off, go back to grade school.

[–] seaQueue@lemmy.world 2 points 1 month ago (1 children)
[–] Rhaedas@fedia.io 1 points 1 month ago

While I do think that it's simply bad at generating answers, because that is all that's going on: generating the most likely next word, which works a lot of the time but can then fail spectacularly...

What if we've created AI, but by training it on internet content, we're simply being trolled by the ultimate troll combination ever?
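
That "most likely next word" mechanism is easy to see in miniature. Below is a toy bigram model with greedy decoding; the corpus and its numbers are entirely made up for illustration, and real LLMs are vastly more sophisticated, but the failure mode is the same: a fluent, confident continuation with no notion of whether it is true.

    # Toy bigram "language model": at each step, emit the word that most
    # often followed the previous word in its (made-up) training text.
    from collections import Counter, defaultdict

    corpus = (
        "the number of agents is four thousand in total . "
        "the number of agents is one thousand in the north . "
        "the number of agents is one thousand in the south ."
    ).split()

    # Count which word follows which.
    bigrams = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        bigrams[prev_word][next_word] += 1

    def generate(start: str, steps: int = 5) -> str:
        """Greedily append the most frequent next word at each step."""
        words = [start]
        for _ in range(steps):
            followers = bigrams.get(words[-1])
            if not followers:
                break
            words.append(followers.most_common(1)[0][0])
        return " ".join(words)

    print(generate("number"))
    # Prints: "number of agents is one thousand" -- the regional figure
    # wins because "one" follows "is" twice and "four" only once, and the
    # "in the north / south" qualifier is simply dropped.

Nothing in the model knows which number is the total; it only knows which words tend to follow which, which is exactly the subgroup-presented-as-total mix-up described in the top comment.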

[–] Repelle@lemmy.world 2 points 1 month ago

I searched for pictures of Uranus recently. Google gave me pictures of Jupiter, and then the AI description on top chided me, telling me that what was shown were pictures of Jupiter, not Uranus. Twenty years ago it would have just worked.

[–] RabbitBBQ@lemmy.world 1 points 1 month ago

Fixing all the shit AI breaks is going to create a lot of jobs.

[–] TheGoldenGod@lemmy.world 0 points 1 month ago (1 children)

Training AI with internet content was always going to fail, as at least 60% of users online are trolls. It's even dumber than expecting you can have a child from anal sex.

[–] musubibreakfast@lemm.ee 0 points 1 month ago (1 children)

Because of what you just wrote, some dumbass is going to try to have a child through anal sex after doing a Google search.

[–] T156@lemmy.world 1 points 1 month ago

They're not joking about a hypothetical. It was a real thing that happened.

[–] roguetrick@lemmy.world -1 points 1 month ago