this post was submitted on 10 May 2026
143 points (91.3% liked)

Technology


Both Ubuntu and Fedora have made it official: support is coming soon for running local generative AI instances.

An epic and still-growing thread in the Fedora forums describes one of the goals for the next release: the Fedora AI Developer Desktop Objective. The proposal is causing some discontent, and at least one Fedora contributor, SUSE's Fernando Mancera, has resigned.

luciferofastora@feddit.org | 1 point | 2 days ago

Mind: I'm not the person running the local model.

> but if there are dramatically more efficient tools that cover the same needs, you could be way more efficient (less wasteful) than you are

I did say that the efficiency would be a different question neither of us can answer in this case, but I fully agree with you. I merely pointed out that a local model wouldn't be a permanent waste of electricity.

> occupying (wasting) space

That's relative to how much space you have. I also have games on my disk that I haven't played in a while, so they're more or less wasted space. But they're not particularly large, so I can spare a few GB for them, and if I do want to play them, I can jump in spontaneously.

Well, I usually use these models to prettify MATLAB figures. It's something I could do myself, but it would take a while; with an LLM it takes a minute or two and gives a good-enough result. The LLM's regression to the mean is also exactly what I want: I want a legible figure. I'd say the electricity the LLM uses is less than the energy I'd burn in food calories to write the thing myself.
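As an aside, the kind of "legible figure" defaults the commenter describes can also be captured once as a style preset. The comment is about MATLAB, but a rough analogue in Python/matplotlib terms might look like this; the keys below are standard matplotlib rcParams, while the specific values are illustrative assumptions, not anything from the thread:

```python
# Illustrative legibility preset for matplotlib (hypothetical values).
# Apply once per session with: plt.rcParams.update(LEGIBLE_RCPARAMS)
LEGIBLE_RCPARAMS = {
    "font.size": 12,          # base font size readable at print scale
    "axes.labelsize": 14,     # slightly larger axis labels
    "axes.grid": True,        # faint grid aids reading off values
    "grid.alpha": 0.3,
    "lines.linewidth": 1.5,   # thicker lines survive downscaling
    "figure.dpi": 150,        # sharper on-screen rendering
    "savefig.bbox": "tight",  # trim whitespace when exporting
}
```

A preset like this covers the mechanical part of "prettifying"; what an LLM adds on top is tailoring labels, legends, and layout to the specific figure.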