this post was submitted on 10 Apr 2026
142 points (97.3% liked)
Technology
The problem is that you saw AI and thought LLM.
Machine learning is a big field; neural networks are a subset of it, and LLMs are only a single application of a specific type of neural network (the Transformer) to a specific task (next-token prediction).
The only reason LLMs and image-generation models are the most visible is that training neural networks requires a large amount of data, and the largest repository of public data, the Internet, is primarily text and images. So text and image models were the first large models to be trained.
The most exciting and potentially impactful uses of AI are not LLMs. Things like protein folding and robotics will have more of an impact on the world than chatbots.
In this case, generating fast approximations for physical modeling can save a ton of compute time for engineering work.
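To make that concrete, here's a minimal toy sketch of a surrogate model (everything in it, including the function being approximated, is invented for illustration and not taken from the article): run the expensive solver a limited number of times, fit a cheap approximation to those samples, then query the approximation instead.

```python
import numpy as np

# Toy surrogate-model sketch. The "solver" and the polynomial fit are
# illustrative stand-ins; a real engineering surrogate would train a
# neural network on actual simulation outputs.

def expensive_simulation(x):
    """Stand-in for a costly physics solver: a smooth damped oscillation."""
    return np.sin(3 * x) * np.exp(-0.5 * x)

# 1. Run the "real" solver a limited number of times to collect training data.
x_train = np.linspace(0.0, 2.0, 50)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate to the samples (least-squares polynomial here;
#    this is the slot a neural network would fill in practice).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=8))

# 3. Query the surrogate instead of the solver for new inputs.
x_new = np.linspace(0.0, 2.0, 500)
max_err = np.max(np.abs(surrogate(x_new) - expensive_simulation(x_new)))
print(f"max surrogate error: {max_err:.5f}")
```

The surrogate is only "close enough" inside the input range it was trained on, which is exactly the trade-off being described: you pay for a limited number of full simulations up front, then get fast approximate answers afterwards.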
Watson beat Ken Jennings over a decade ago. Protein folding was already done too, the people who did it even won a Nobel prize for it a couple years ago.
LLMs being the most visible part of AI, after more than 75 years of the field, isn't because they're the biggest or latest or greatest or whatever. It's marketing. Plain and simple marketing.
Probably more that it's the only AI normal people will interact with regularly. Your average person isn't going to run a protein-folding application, but they will probably talk to ChatGPT or use Google AI summaries.
Protein folding is far from "done" lol. Models have gotten a lot better but there is still more to do.
I didn’t say it was finished. I said people had won a Nobel prize for having done it. It takes decades to win a Nobel prize. My point was that it had been done years and years ago, not recently.
Anfinsen won the Nobel in 1972 for showing that the amino acid sequence is what is responsible for the 3D structure of proteins.
Since then we've been able to take images of proteins' structures using X-ray crystallography, but that is a painstaking process. The ability to accurately predict a protein's structure from an amino acid sequence was an unsolved problem until very recently.
It wasn't until 2024 that Hassabis, Jumper and Baker won the Nobel for their work in predicting protein structure (using an AI called AlphaFold) and computationally designing new proteins.
The ability to create arbitrary proteins is new and will revolutionize some fields of medicine (like cancer treatment) and, to me, is a much more impressive use of AI.
LLMs are interesting but they are incredibly over-hyped as far as 'changing the world' goes, imo.
They won the prize for an AI they had built years earlier, in the late 2010s.
Other people in this thread say physics simulations are inherently chaotic. If an AI model is trained on inherently chaotic data, won't the results be just as chaotic, or worse?
That is an insightful question.
The answer is that we actually do understand chaos, in a way. Chaotic systems aren't unpredictable in general: it's hard to say how any one trajectory will evolve, but we can say a lot about how the system as a whole behaves.
I'm not good at explaining, but some content creators cover this topic pretty well. If you're interested, here's a video about it from Veritasium: https://www.youtube.com/watch?v=fDek6cYijxI
The universe is chaotic. But chaos doesn't mean something isn't reproducible or doesn't follow a set of rules.
Because, physically speaking, chaotic and unpredictable are two different things, and that's why it works so well in this case: it becomes a stochastic problem, not a deterministic one.
It's an awesome area for machine learning: you don't need to understand the result and how it was produced, it just needs to be "close enough".
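A classic toy demonstration of this distinction is the logistic map (my own illustration, not something from the article): individual trajectories diverge wildly from a tiny perturbation, yet averaged quantities barely move, and those stable statistics are what a model can learn.

```python
# Logistic map: a textbook chaotic system. Illustrative example only.

def logistic_trajectory(x0, r=4.0, steps=10_000):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # tiny perturbation of the start

# Deterministic unpredictability: the two runs end up completely different...
worst_gap = max(abs(x - y) for x, y in zip(a, b))

# ...but statistical properties are nearly identical, which is what makes
# the stochastic, "close enough" framing learnable.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(f"worst pointwise gap: {worst_gap:.3f}")
print(f"trajectory means:    {mean_a:.3f} vs {mean_b:.3f}")
```

The worst pointwise gap is huge while the two means agree closely, which is the "chaotic but still rule-following" point in a nutshell.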
I work for a company that's been using machine learning software for slightly over a decade.