this post was submitted on 08 Jan 2026
75 points (97.5% liked)

Technology

78482 readers
4059 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.

LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.

you are viewing a single comment's thread
view the rest of the comments
[–] rysiek@szmer.info 13 points 1 day ago (11 children)
[–] Goodlucksil@lemmy.dbzer0.com 1 points 1 day ago (1 children)

Are you opposed to non-LLM AI models e.g. translators, AGI?

[–] rysiek@szmer.info 11 points 1 day ago* (last edited 1 day ago)

I am not opposed to machine learning as a technology. I use Firefox's built-in translation as a way to access information online I otherwise would not be able to access, and I think it's great that small, local model can provide this kind of functionality.

I am opposed to marketing terms like "AI" – "AI" is a marketing term, there are now toothbrushes with "AI" – and I am opposed to religious pseudo-sciencey bullshit like AGI (here's Marc Andreessen talking about how "AGI is a search for God").

I also see very little use for LLMs. This has been pointed out before, by researchers who got fired for doing so from Google: smaller, more tailored models are going to be better suited for specific tasks than ever-bigger humongous behemoths. The only reason Big Tech is desperately pushing for huge models is because these cannot be run locally, which means they can monopolize them. Firefox's translation models show what we could have if we went in a different direction.

I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable investor-targeted hype we are getting today. Just as we started getting actually interesting Internet stuff once the Internet bubble popped.

load more comments (9 replies)