Yeah, I'm not sure the way we allocate resources is justified either, in general. I guess ultimately the problem with AI is that it gives capital access to skills it would otherwise have to go through laborers to get.
I think people are too caught up in the current situation centered on LLMs: the massive capital bubble and the secondary effects of the datacenter build-out (power, water, etc.).
You're right that they do allow for the disruption of labor markets in fields that weren't expecting computers to be able to do their jobs (to be fair to those fields, humanity spent hundreds of millions of dollars on hand-engineered language processing software and never got it to work this well).
I think that when people say 'AI' they usually mean ChatGPT or LLMs in general. The reason LLMs got big first is that neural networks need enormous amounts of training data, and the largest data repository we have (the Internet) is mostly text, images and video... so it makes sense that the first impressive models were trained on text and images/video.
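To make that concrete, next-token prediction on text is simple enough to sketch in a few lines. This is just my toy illustration (a bigram word counter, nothing like a real transformer), but it shows the objective those models are trained on and why it only starts working well once you have a lot of text to learn from:

```python
# Toy sketch of next-token prediction: given the context, guess what comes next.
# Real LLMs use transformers with billions of parameters, but the objective is the same.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow which word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start="the", length=8):
    word, out = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed `word`.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate())
```

With a 14-word corpus the output is nonsense; the whole trick is that the Internet gives you trillions of words to count over.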
The field of robotics hasn't had access to a large public dataset to train big models on, so we don't see large robotics models yet, but they're coming. You can already see it: compare robotic motion from 4 years ago, driven by a human-engineered feedback control loop, where the motions are accurate but jerky and mechanical, with the same company's newer robot that uses a neural network trained on human kinematic data, whose motion looks so natural that it breaks through the uncanny valley for me.
This is just one company generating data using human models (which is very expensive), but this kind of data collection will become ubiquitous and cheap given enough time.
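To illustrate the contrast I mean between the two generations (purely a caricature with made-up numbers, not any real robot's code): the old approach hand-tunes a feedback law, while the new one fits a model to recorded human motion and asks it to reproduce it.

```python
import numpy as np

def pid_step(error, integral, prev_error, dt, kp=2.0, ki=0.1, kd=0.5):
    """Hand-engineered feedback control: accurate, but tends to look mechanical."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

# Learned control, in caricature: fit a model that maps the robot's state to
# the torques a human would have produced, using recorded kinematic data.
# (`states` and `human_torques` are fake random stand-ins for mocap recordings,
# and a linear least-squares fit stands in for the neural network.)
rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 6))                 # joint angles + velocities
human_torques = states @ rng.normal(size=(6, 3))    # pretend human-derived targets

weights, *_ = np.linalg.lstsq(states, human_torques, rcond=None)

def learned_policy(state):
    """Predict 'human-like' torques from the current state."""
    return state @ weights
```

The first function encodes an engineer's tuning; the second just imitates whatever the data contains, which is why its motion inherits the smoothness of the humans it was trained on.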
This isn't to mention AlphaFold, which learned to predict protein structures better than anything human-engineered. Then, using a diffusion model (the same kind used to make pictures of shrimp jesus), another group was able to generate novel proteins that fit a specific receptor, along with the RNA that would manufacture them. Proteins matter because essentially every medication we use has to interact with a protein-based receptor, so the ability to create, visualize and test custom proteins, combined with the ability to write arbitrary mRNA (see the mRNA COVID vaccines), is huge for computational protein design (the field behind experimental HIV vaccine candidates).
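The "write the RNA" step at the end is conceptually just reverse translation: pick a codon for each amino acid in the designed protein. The snippet below is my toy version with an arbitrary, incomplete codon table (real pipelines cover all 20 amino acids and optimize codon choice for expression):

```python
# One arbitrary codon per amino acid, plus a stop codon (toy table, not optimized).
CODON = {
    "M": "AUG", "A": "GCU", "G": "GGU", "K": "AAA", "L": "CUG",
    "S": "UCU", "T": "ACU", "V": "GUU", "*": "UAA",
}

def peptide_to_mrna(peptide: str) -> str:
    """Reverse-translate a peptide (one-letter codes) into one candidate mRNA."""
    return "".join(CODON[aa] for aa in peptide + "*")

print(peptide_to_mrna("MKVLAST"))  # -> AUGAAAGUUCUGGCUUCUACUUAA
```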
LLMs and the capitalist bubble surrounding them are certainly an important topic, but framing the issue as being 'against AI' creates the impression that AI technology has nothing positive to offer. That discourages people from studying the topic or majoring in it in college, so in 10 years we'll have fewer machine learning specialists than countries that aren't drowning in this 'AI bad' meme.