this post was submitted on 10 Apr 2026
142 points (97.3% liked)
Technology
This doesn't sound like they are using LLMs for processing their 3D models. The way it's described in the article sounds a lot more like some machine learning model trained on physics simulations for aerodynamics.
Yes! The way those physics models are created is so cool. The article somewhat explains it, but it's mostly a fluff-piece for things unrelated to genAI. More in-depth:
The physically accurate simulation is great but slow. So we build a neural network (there's a huge variety of architectures), show it examples from the physics simulation, and train it to guess what the state will look like, say, 1 ms later. After billions of training updates it becomes "good enough" in most cases. By applying those 1 ms steps in a loop, we get a full simulation. And because we chose the architecture ourselves, we can pick one that's fast to compute, which gives us a less accurate but much faster simulation.
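A toy sketch of that loop, with everything shrunk down: the "expensive" physics is just exponential decay, and the "network" is a single least-squares weight (all names and numbers here are made up for illustration, not anything from the article):

```python
# Toy version of the surrogate idea: fit a cheap model that predicts the
# state 1 ms ahead, then loop it to get a full trajectory. Real systems use
# deep networks on CFD data; here the learned model is one scalar weight.
import numpy as np

DT = 1e-3   # 1 ms timestep
K = 50.0    # decay rate of the "expensive" reference physics

def true_step(x):
    """The slow, accurate simulator (here just the exact solution)."""
    return x * np.exp(-K * DT)

# Generate training pairs (state now, state 1 ms later) from the slow sim.
rng = np.random.default_rng(0)
x_now = rng.uniform(0.1, 1.0, size=1000)
x_next = true_step(x_now)

# "Train" the surrogate: a linear model fit by least squares.
w = np.sum(x_now * x_next) / np.sum(x_now * x_now)

def surrogate_step(x):
    """The fast learned step: one multiply instead of a full solve."""
    return w * x

# Roll the 1 ms surrogate steps in a loop to get a full trajectory.
x = 1.0
for _ in range(100):          # 100 ms of simulated time
    x = surrogate_step(x)

print(x, np.exp(-K * 0.1))    # surrogate rollout vs exact answer
```

Because the toy physics is linear, the fit is exact and the rollout matches the true answer; with a real nonlinear simulator the surrogate's small per-step errors compound over the loop, which is exactly why the "good enough" bar matters.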
The really cool thing is that sometimes these models beat the more expensive physics simulation, probably because physical dynamics are regular and structured, and structured patterns are easier for a network to learn.
We've done things like this for ages. One way to improve them is to feed in multiple time steps of history. Unfortunately, older architectures kinda sucked at seeing connections across time, so this was expensive. Luckily, transformers were invented: a network architecture that's really good at spotting connections along one dimension, like time, while staying fairly cheap and really easy to run in parallel (which is how you go fast on modern hardware).
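The mechanism that does this is self-attention. A minimal single-head version in plain numpy (random weights, purely a sketch of the mechanism, not a trained model) shows how every time step gets compared against every other step in one parallel matrix product:

```python
# Minimal single-head self-attention over a sequence of time steps, showing
# how a transformer looks at every past step at once, in parallel.
import numpy as np

def self_attention(x, wq, wk, wv):
    """x: (T, d) sequence of T time steps with d features each."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (T, T): every step vs every step
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over time
    return weights @ v                        # each output mixes all steps

rng = np.random.default_rng(0)
T, d = 8, 16                                  # 8 time steps, 16 features
x = rng.normal(size=(T, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)                              # one output per time step
```

The key point is that the (T, T) score matrix is one matrix multiply, so the cost of "looking back" doesn't grow with a sequential loop over history the way it does in a recurrent network.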
With a bunch of extra wiring, transformers also become GPTs, i.e. text-based AIs. That's why language models suddenly got way better: they went from seeing connections with words maybe 3-4 steps back to context windows of, recently, literally a million tokens. And that's basically the only relationship this work has with "AI" in the chatbot sense.
It's a super exciting time for so many fields of science. Transformers are really the key discovery that's made modern AI what it is today and we're only barely scratching the surface of possible applications.
The future is going to be weird in ways we can't even imagine.