This is gonna be how we get more Citicorp Center flaws in our world, isn't it?
And I don't expect modern "profits at all costs" companies to own up to their issues and fix them the way Citicorp did in 1978.
accuracy is not a huge concern at the design stage
Then I doubt they're running the most accurate, two-week-long physics solvers mentioned in the article at this stage either. You only do that when you need accuracy; a quick simulation doesn't take long.
I'm failing to see why the creative writing machine is better than a simulation set to 'rough'.
I’m failing to see why the creative writing machine is better than a simulation set to ‘rough’.
Because it's not a "creative writing machine", it's machine learning that's been trained to run physics simulations. It has nothing to do with LLMs. We've been using systems like this for decades, including ones like Folding@home, which have been instrumental in the development of many drug therapies for different illnesses.
Your internal biases have clouded your critical thinking skills, and your ability to competently and thoughtfully examine the information you're provided has been compromised. In plain English, like the AI techbros you despise, you've given up your ability to think.
I’m failing to see why the creative writing machine is better than a simulation set to ‘rough’.
The problem is that you saw AI and thought LLM.
Machine learning is a big field, neural networks are a subset of that field, and LLMs are only a single application of a specific type of neural network (the Transformer) to a specific task (next-token prediction).
The only reason LLMs and image-generation models are the most visible is that training a neural network requires a large amount of data, and the largest repository of public data, the Internet, is primarily text and images. So text and image models were the first large models to be trained.
The most exciting and potentially impactful uses of AI are not LLMs. Things like protein folding and robotics will have more of an impact on the world than chatbots.
In this case, generating fast approximations for physical modeling can save a ton of compute time for engineering work.
Watson beat Ken Jennings over a decade ago. Protein folding was already done too, the people who did it even won a Nobel prize for it a couple years ago.
LLMs being the most visible part of AI after over 75 years of AI, isn’t because they’re the biggest or latest or greatest or whatever, it’s marketing. Plain and simple marketing.
LLMs being the most visible part of AI after over 75 years of AI, isn’t because they’re the biggest or latest or greatest or whatever, it’s marketing. Plain and simple marketing.
Probably more that it's the only AI normal people will interact with regularly. Your average person isn't going to run a protein-folding application, but they will probably talk to ChatGPT or use Google AI summaries.
Protein folding is far from "done" lol. Models have gotten a lot better but there is still more to do.
I didn’t say it was finished. I said people had won a Nobel prize for having done it. It takes decades to win a Nobel prize. My point was that it had been done years and years ago, not recently.
Other people in this thread say physics simulations are inherently chaotic. If an AI model is trained on inherently chaotic data, how will the results not be just as chaotic, or worse?
The universe is chaotic. But chaos doesn't mean something isn't reproducible or doesn't follow a set of rules.
Because, physically speaking, chaotic and unpredictable are two different things, and that's why it works so well in this case: it becomes a stochastic problem, not a deterministic one.
It's an awesome area for machine learning: you don't need to understand the result or how it was created, it just needs to be "close enough".
I work for a company that's been using machine learning software for slightly over a decade.
I came here to say the same thing. There's no way a coarse CFD simulation of air over a car takes two weeks to run on even budget HPCs.
The long, fine-grained ones we do take about 8 hours to run. More complex ones run up to 36 hours. Iterative coarse simulations take a couple of minutes.
If they're taking two weeks to run their simulations, I question their process.
I'm failing to see why the creative writing machine is better than a simulation set to 'rough'.
This doesn't sound like they are using LLMs for processing their 3D models. The way it's described in the article sounds a lot more like some machine learning model trained on physics simulations for aerodynamics.
Yes! The way those physics models are created is so cool. The article somewhat explains it, but it's mostly a fluff-piece for things unrelated to genAI. More in-depth:
The physically accurate simulation is great but slow. So we can create a neural network (there's a huge variety in shapes), and give it an example of physics, and tell it to make a guess as to what it'll look like in, say, 1ms. We make it improve at this billions of times, and eventually it becomes "good enough" in most cases. By doing those 1ms steps in a loop, we get a full simulation. Because we chose the shape of it, we can pick a shape that's quite fast to compute, and now we have a less-accurate but faster simulation.
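The loop described above can be sketched in a few lines. This is a toy, assuming nothing about GM's actual setup: the "slow solver" here is a two-line damped oscillator and the "neural network" is reduced to a linear least-squares fit, but the shape of the idea (learn one small time step from the expensive simulator, then roll it out) is the same.

```python
import numpy as np

# Toy "expensive" physics step: one 1 ms update of a damped oscillator.
# (Stand-in for a real solver; all names and numbers are illustrative.)
DT = 1e-3
def slow_physics_step(state):
    x, v = state
    a = -100.0 * x - 0.5 * v          # spring force + damping
    return np.array([x + DT * v, v + DT * a])

# Collect training pairs (state, next_state) from the slow solver.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 2))
targets = np.array([slow_physics_step(s) for s in states])

# "Neural network" reduced to its simplest form: a least-squares fit
# from state to next state. A real surrogate would be an MLP or GNN.
W, *_ = np.linalg.lstsq(states, targets, rcond=None)

def fast_surrogate_step(state):
    return state @ W

# Roll both out for 1000 steps (1 simulated second) and compare.
s_true = np.array([1.0, 0.0])
s_pred = np.array([1.0, 0.0])
for _ in range(1000):
    s_true = slow_physics_step(s_true)
    s_pred = fast_surrogate_step(s_pred)

print(np.abs(s_true - s_pred).max())  # tiny: "close enough"
```

Because this toy system is exactly linear the fit is essentially perfect; a real learned surrogate only gets "close enough", which is the whole trade being described.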
The really cool thing is that sometimes, these models are better than the more expensive physics simulation, probably because real physics is logical and logical things are easier to learn.
We've done things like this for ages. One way we can improve them is by giving them multiple time steps. Unfortunately they kinda suck at seeing connections over time, so this is expensive. Luckily, transformers were invented! This is a neural network shape that is really good at seeing connections over one dimension, like time, while still being pretty cheap and really easy to run in parallel (which is how you go fast nowadays).
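A minimal sketch of the mechanism that does the "connections over one dimension" part: scaled dot-product self-attention, with the time steps of a simulation as the sequence. The weights here are random and untrained; this only illustrates the shapes and the softmax mixing, not a real trained model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over the time axis.
    X: (T, d) -- one row per time step of the simulation state."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])           # (T, T) step-to-step affinity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax: each step attends to all steps
    return weights @ V                               # (T, d) time-mixed features

rng = np.random.default_rng(0)
T, d = 8, 4                       # 8 time steps, 4 state variables (toy sizes)
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                  # (8, 4): one mixed feature row per time step
```

The parallelism claim comes from the fact that every row of the output is just matrix multiplications over the whole sequence at once, with no step-by-step recurrence.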
With a bunch of extra wiring, transformers also become GPT, i.e. text-based AIs. That's why they suddenly got way better; they went from being able to see connections with words maybe 3-4 steps back, to recently a literal million. This is basically the only relationship with "AI" this has.
I don't know if this is the full explanation, but the article does touch on how the LPM can be tweaked to match physical tests:
The trick is to incorporate experimental measurements to fine-tune the model. If a physics simulation doesn’t agree exactly with experimental data, it is often difficult to figure out why and tweak the model until they agree. With AI, incorporating a few experimental examples into the training process is a lot more straightforward, and it’s not necessary to understand where exactly the model went wrong.
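As a hedged illustration of what "incorporating a few experimental examples" can mean (toy linear surrogate, invented numbers, not the article's actual method): take a pretrained model that's slightly off and run a few gradient steps on the measured data, without ever diagnosing why it was off.

```python
import numpy as np

# Pretend this surrogate came out of pretraining on simulation data:
# predicted drag = w . features + b, currently biased vs. measured truth.
# (All numbers here are invented for illustration.)
w = np.array([0.30, -0.10])
b = 0.05

# A handful of "experimental" measurements: (features, measured drag).
X_exp = np.array([[1.0, 2.0], [0.5, 1.0], [2.0, 0.5]])
y_exp = np.array([0.20, 0.12, 0.61])

# Fine-tune: plain gradient descent on the experimental mean-squared error.
# Note we never work out *where* the model went wrong; the data corrects it.
lr = 0.1
for _ in range(5000):
    err = X_exp @ w + b - y_exp           # residual on the measurements
    w -= lr * (X_exp.T @ err) / len(err)  # gradient step on the weights
    b -= lr * err.mean()                  # gradient step on the bias

print(np.abs(X_exp @ w + b - y_exp).max())  # residual is now ~0
```

That's the contrast the article is drawing: for a physics solver you'd have to find and fix the modeling error by hand, while a trained model absorbs the correction from the data.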
I once worked tech support for people who ran physics simulations. They said that sometimes they had to rerun the simulations if they didn't come back accurately. I asked how they could tell if they were accurate.
They said it was based on whether it felt right. I still hate that response, but I guess I can't come up with a better idea, other than doing whatever they're testing in real life.
Those kinds of simulations are inherently chaotic: tiny changes to the initial conditions can have wildly different outcomes, sometimes to the point of being nonsensical. Also, since they're simulating a limited volume, the boundary conditions can cause weird artifacts in some cases.
If you run a simulation of air over an aircraft wing and the end result is a mess of turbulence instead of smooth flow, then you can assume the simulation was acting weird, not that your wing design is suddenly breaking the rules of physics. When the simulation breaks, it usually does so in ways that are obvious thanks to previous testing with physical models.
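That sensitivity is easy to demonstrate with a standard toy chaotic system (the logistic map; purely illustrative, nothing CFD-specific): two runs whose starting points differ by one part in a billion end up completely decorrelated.

```python
import numpy as np

# Chaotic toy system: the logistic map at r = 4.
def trajectory(x0, steps=50, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

# Two runs whose initial conditions differ by one part in a billion.
a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)

print(abs(a[1] - b[1]))                 # still tiny after one step
print(np.abs(a[-10:] - b[-10:]).max())  # order 1: the perturbation exploded
```

The early steps are indistinguishable; by step ~30 the difference has grown by a factor of roughly 2 per step until it saturates, after which the two trajectories are effectively unrelated.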
That's... basically what they said.
No, they said "it felt right", which is incomprehensible to anyone who doesn't have experience with what CFD results generally look like.
But, what about accuracy? For GM’s purposes, Strauss says accuracy is not a huge concern at the design stage because finer details are ironed out later in the process. “When it really starts to matter is when we’re getting close to launching a vehicle, and the coefficient of drag is going to be used for our energy calculation, which eventually goes to the certification of our miles per gallon on the sticker.” At that stage, Strauss says, a physical model of the car will be put into a wind tunnel for an exact number.
All in all, by drastically bringing down the time it takes to model the physics, large physics models enable engineers to explore a much greater range of possibilities before a final design is reached.
So this is really just to help iterate on early designs without waiting 2 weeks each time to get feedback on if a design is problematic. This is actually a really great use of machine learning.
And again, no "AI" is used, because it does not exist and is not needed for statistical approximations.
Props to this publication for at least leaving the grifter bullshit out of the title. That's a clue that the technology might actually be useful.
"AI" <<<<<<< large physics models
All machine learning is and has always been part of the artificial intelligence field. They're doing AI, whether that happens to be a trending term or not.
This is definitely AI, but AI is such a vaguely defined term that it's basically meaningless. Too many people these days mistake it for meaning "a computer that can think like a human" even though it encompasses everything from LLMs to chess playing algorithms to something like Minecraft zombie pathfinding.
Yah, I have some vague experience in the space and, without getting into things covered by NDAs, I guess I can say...
First, the popular media talks about the classic style of physics solvers as magical black boxes, but my experience is that they are sufficiently unreliable that I would never trust my life solely to the answers of a solver. They do provide very valuable feedback for refining a design without an endless, hardware-rich cycle of destructive testing. Thus, I think a large physics model is probably going to be the same sort of useful tool.
Second, while the CAE engineers can be very, very protective of the time they spend on the two-week cycle the article talks about, it's fucking drudge work and a waste of a good mind. At the same time, the article does not really get into the nitty-gritty details. Aerodynamics is a great place to start because there's less setup, but the coefficient of drag is only one problem that needs to be considered.
Third, the good engineers can "see" things intuitively because things do operate with a pattern. Vortices from protruding features... stress fractures from square holes in a beam... etc. This does feel like an area where spicy autocorrect can spicy-autocorrect you to a useful answer.
Finally, cycle time for real world engineers is just like the cycle time for software engineers. Nobody wants to go back to the world where programmers submitted a deck of cards and got the printout back a week later.
The only real risk here is that somebody gets high on their own supply and decides that a large physics model is actually predictive and we don't need the same set of actual physical tests that validate the models.
I don't know what specific technology they're using, but remember that not all AI/ML implementations are LLMs.
Yeah, I was referring to spicy autocorrect in the more general sense of something that uses a faster statistical model to replace a slower theoretically derived exhaustive calculation.
If a physics simulation doesn’t agree exactly with experimental data, it is often difficult to figure out why and tweak the model until they agree. With AI, incorporating a few experimental examples into the training process is a lot more straightforward, and it’s not necessary to understand where exactly the model went wrong.
That's not too bad if it's only ever used as a rough guide in the early stages of design, with proper testing done later. But do we trust corporations not to get lazy and pressure their engineers to skip the accurate tests altogether, especially when they can then brag to their investors that AI is replacing expensive engineer time? What would Boeing's management want to do with this tech?
The regular old FEM-based models can be quite misleading, and when I had the chance to dig into them some years ago, they made me vaguely anxious. But since nobody trusts the existing CAE solvers, there's always a process to verify that the structure actually does what you think it does.
Aerodynamics, at least the coefficient of drag, is actually really good for this because you can't cheat the air and it's mostly obvious when you screw it up. Which isn't true for flutter or the more structural details.
So, yeah, there is that risk, that they'll get high on their own supply. But thankfully the management already thinks that the current crop of CAE solvers are magical and so the credentialed professional engineers already know how to fight that battle for a lot of the structural details. (The long-suffering assembly line folk who are trying to assemble the airplane properly are, of course, a different matter and have had a lot less leverage)
Although, I'd also propose that there's a second risk: the current validation process is oriented toward the ways the existing FEM models screw you up, and it's likely that when a large physics model screws you up, it won't be in the ways FEM models do.
"According to our latest model, if you remove the doors from the aircraft entirely instead of waiting for them to come off midair, you can save about $0.007 of fuel per flight"
Well, who do I trust less? GM or AI?
And large physics models are simulations, aren't they?
Right? Like, isn't this just a different algorithm at the end of the day? This one just has an organic (as in, part of the design) way to skip parts of the equation with presumed constants.
If you have to feed back empirical data to tweak and re-train the transformer/CNN-based model, then it really doesn't offer an advantage over traditional physics-based models. The goal should be to get rid of as many fudge factors as possible.
I mean, they're using AI for "polling" now so, why not?