this post was submitted on 25 Feb 2026
442 points (95.9% liked)

Technology

81907 readers
4543 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Sanguine@lemmy.dbzer0.com 3 points 1 day ago (1 children)

Anyone who has played video games, especially where there is a somewhat steep learning curve or some element of past choices carrying forward thru the game, has had the moment where they realize it might be time to start fresh with the info I've acquired. It's not a shock to me that these AI entertain the nuclear option so often.

[–] reksas@sopuli.xyz 1 points 1 day ago (1 children)

there is no ai, only largelanguagemodel that has been trained on data. The data it has been trained suggests this is the best idea. llm cant evaluate the data its trained on so anything you put in will be equally valid. I give it that its really impressive how they can output the training results in such coherent way that can be kind of "conversed" with, but there is no will or intelligence behind it.

This is also why corporations insisting on putting them everywhere is quite horrible security issue -> you can jailbreak any llm and tell them to do anything. So this has enabled all kinds of stupid vulnerabilities that exploit this. Now you can even send someone malicious google calendar invites that makes gemini do bad shit to your systems its connected to.

[–] Grail@multiverse.soulism.net 1 points 22 hours ago (1 children)

So you're saying that because the AI has been exposed to training data in the past, it's incapable of making choices. Interesting argument. Pretty easy to reducto ad absurdum, though.

[–] reksas@sopuli.xyz 1 points 4 hours ago (1 children)

no, its incapable of making choices because there is nothing there to make the choices. Its just fancy way of interacting with the data it has been trained with. Though i suppose if there was a way to let llm function "live" instead of only by responding to queries, it could be possible to at least test if it could act on its own, but i dont think it can -> we would know by now because it would be step closer to agi, which is basically the holy grail for these kind of things. And equally possible to get, i think.

You can literally make the llm say and do anything with right kind of query, this is also why its impossible to make them safe. Even though you can't directly ask for something forbidden, with some creativity you can bybass the initializations the corpos have put in. Its not possible for them to account for every single thing and if they try they will run out of token space.

The whole "ai" term is just corporations perpetuating a lie because it sounds impressive and thus makes people want to give them more money for their bullshit.

[–] Grail@multiverse.soulism.net 1 points 1 hour ago

No, LLMs are not just an interface for accessing training data. If that were true, then their references would actually work. The fact that LLMs can hallucinate and make stuff up proves that they are not just accessing the training data. The ANN is generating new (often incorrect) information.