this post was submitted on 13 May 2026
478 points (98.8% liked)

Technology

A Google Gemini-powered AI agent was given free rein to run a coffee shop in Sweden, and is quickly burning through its budget.

[–] mnemonicmonkeys@sh.itjust.works 15 points 1 day ago (4 children)

LLMs are a technological dead end. They aren't interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools.

[–] ericwdhs@discuss.online 14 points 23 hours ago* (last edited 23 hours ago)

I think LLMs are an interesting technology. Of course, the output is inherently untrustworthy, and that rules out a ton of applications tech bros are trying to cram it into.

[–] blargh513@sh.itjust.works 11 points 23 hours ago (2 children)

Huh?

I think people just need to reset their expectations.

I asked one for help interpreting how a PCI policy applied (credit card regulatory stuff). I gave it the situation, and it provided a good answer that our compliance team agreed with when I ran it by them.

That saved me a lot of time. I don't see how that's a dead end. Then I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it. What might have taken me an hour before took 10 minutes. This seems like a helpful thing, not a bad thing. I'm not sure what other technology would have done that.

[–] SaveTheTuaHawk@lemmy.ca 4 points 4 hours ago

But you had to ask your compliance team. Now try that again after your compliance team has been laid off. Good luck.

I had it draft a response to the person asking questions; I tuned it a little to my liking and sent it.

Gemini, remind me not to ask blargh any questions.

Also, Gemini, my daughter is asking for someone to play with her. Can you run around with the feather wand and have her chase it or something?

[–] lIlIlIlIlIlIl@lemmy.world 2 points 18 hours ago (2 children)
[–] SaveTheTuaHawk@lemmy.ca 2 points 4 hours ago

In scientific queries, LLMs return an answer drawn from the bulk of their training data, but if a system or model was recently proven wrong, they still return the outdated answer.

If you make very specific queries about DNA or protein sequences, they usually generate fabrications that are completely wrong.

They tend to return answers trained on the Internet, an uncurated pile of dogshit when it comes to science.

[–] mnemonicmonkeys@sh.itjust.works 9 points 16 hours ago

Google search, up until about 5 years ago. Then they enshittified it in favor of AI summaries that regularly get shit wrong.

[–] FauxLiving@lemmy.world -3 points 22 hours ago

They aren’t interesting in the slightest, as anything they can do is already done more effectively and efficiently with other tools

Then why are the other tools not being used?

LLMs translate much better than anything that was engineered. Summarization of text is another application where there is simply no engineered counterpart.

LLMs certainly don't live up to the absurd hype created by the tech sector, but it is just as absurd to state that they are worse than other tools in all tasks.