This was the expected outcome. It's why some people come off as so frothing-at-the-mouth about it.
Ironically you might have restricted mode enabled. Check your settings in YouTube and disable it if it is
Did not expect to see a Voosh video in my !technology@lemmy.world. Hopefully folks here won't be put off by his strong (but accurate) words describing the situation.
A more complex form of reasoning in the context of "Reasoning Systems" is video game NPC AI. It takes the current game state and "reasons" about what action it should take now or even slightly in the future. Really good video game AI will use your velocity to pre-aim projectiles at where you'll be in the future instead of where you are currently. The NPC analogy is exactly one of the things that term describes.
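To make the pre-aiming point concrete, here's a rough Python sketch of the usual "lead the target" calculation (function name and all the numbers are mine, not from any particular game): given the target's position, its velocity, and a fixed projectile speed, solve for the interception time and aim there.

```python
import math

def lead_target(shooter, target, target_vel, projectile_speed):
    """Return the point to aim at so a projectile fired now meets the moving
    target, or None if no interception is possible."""
    dx, dy = target[0] - shooter[0], target[1] - shooter[1]
    vx, vy = target_vel

    # Solve |d + v*t| = speed*t, i.e. (|v|^2 - speed^2)*t^2 + 2*(d.v)*t + |d|^2 = 0
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy

    if abs(a) < 1e-9:                       # target exactly as fast as the projectile
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                     # target is outrunning the projectile
        roots = ((-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a))
        hits = [t for t in roots if t > 0]
        if not hits:
            return None
        t = min(hits)
    if t <= 0:
        return None

    # Aim where the target WILL be after t seconds, not where it is now.
    return (target[0] + vx * t, target[1] + vy * t)

# Target 10 units away, strafing at 5 u/s, projectile flying at 20 u/s.
print(lead_target((0, 0), (10, 0), (0, 5), 20))
```

That's the entire trick: no learning, no understanding, just the game state plugged into a formula every tick.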
If you truly believe that, you either fundamentally misunderstand the definition of that word or are being purposely disingenuous, as you AI brown-nose folk tend to be. Pretending for a second that you genuinely just don't understand: LLMs, the most advanced "AI" they're trying to sell everybody, are as capable of reasoning as any compression algorithm, jpg, png, webp, zip, tar, whatever you want. They cannot reason. They take some input and generate an output deterministically. The only reason the output changes slightly between runs is that they deliberately throw some randomness in there, for complicated but important reasons.
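If the "deterministic plus injected randomness" part sounds weird, here's a toy Python illustration (every name and number here is invented): the scores for the next token are fixed for a given input, and the only thing that makes two runs differ is the random draw used to pick among them.

```python
import math
import random

def softmax(scores, temperature=1.0):
    # Turn raw scores into probabilities; the scores themselves never change.
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, scores, rng, temperature=1.0):
    """Pick one token according to the fixed, deterministic scores."""
    probs = softmax(scores, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

tokens = ["cat", "dog", "zip"]
scores = [2.0, 1.5, 0.1]   # stand-in for the model's deterministic output

rng = random.Random(42)
print([sample(tokens, scores, rng) for _ in range(3)])  # run 1
rng = random.Random(42)
print([sample(tokens, scores, rng) for _ in range(3)])  # same seed, same "creativity"
rng = random.Random(7)
print([sample(tokens, scores, rng) for _ in range(3)])  # different seed, output "changes"
```

Same seed, identical output every time; the model itself contributed nothing non-deterministic.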
Again, to recap: LLMs and similar neural-network "AI" are as capable of reasoning as any other computer program you interact with, knowingly or unknowingly, which is to say not at all. Your silly Wikipedia page is about a very specific term, "Reasoning System", which would include stuff like standard video game NPC AI such as the zombies in Minecraft. I hope you aren't stupid enough to say those are capable of reasoning.
They can't reason. LLMs, which is still the tech behind all the latest and greatest models like GPT-5 or whatever, generate output by taking every previous token (simplified) and using them to predict the most likely next token. Thanks to their training this produces pretty good human-looking language, among other things like somewhat effective code output (thanks to sites like Stack Overflow being included in the training data).
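Here's a cartoon of that loop in Python, with a made-up tally of which token follows which standing in for the neural network. The real thing scores the whole context with billions of parameters, but the outer loop has exactly this shape: look at what's there, append the most likely next token, repeat.

```python
from collections import Counter, defaultdict

# Made-up "training corpus".
corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": tally how often each token follows each other token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt, steps=6):
    tokens = prompt.split()
    for _ in range(steps):
        candidates = following.get(tokens[-1])
        if not candidates:
            break
        # Append the most likely next token and go around again.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))
```

No step in that loop is "thinking about" anything; it's statistics about what usually comes next, which is the whole point.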
Generating images works essentially the same way, but is more easily described as reverse jpg compression. You think I'm joking? No, really: they start out with static and then transform the static using a bunch of wave functions they came up with during training. LLMs and the image generation stuff are equally able to reason, which is to say not at all whatsoever.
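And here's a heavily simplified Python sketch of that "start with static and clean it up" loop. In the real models a trained network predicts what to remove at each step; the hard-coded checkerboard target below is only a stand-in for that prediction, so this shows the shape of the process, not the actual math.

```python
import random

WIDTH, HEIGHT, STEPS = 8, 4, 10

# Stand-in for "what training taught the model the image should look like".
target = [[(x + y) % 2 for x in range(WIDTH)] for y in range(HEIGHT)]

# Step 1: pure static.
rng = random.Random(0)
image = [[rng.random() for _ in range(WIDTH)] for _ in range(HEIGHT)]

# Steps 2..N: nudge every pixel a fraction of the way toward the "prediction",
# the same way each denoising step removes a bit of the noise.
for step in range(STEPS):
    blend = 1.0 / (STEPS - step)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            image[y][x] += blend * (target[y][x] - image[y][x])

for row in image:
    print("".join("#" if px > 0.5 else "." for px in row))
```

Static goes in, a picture comes out, and at no point does anything in that loop reason about what it's drawing.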
No, actually! Musk's entire involvement with PayPal was being fired from the company he founded, which later down the road was bought by PayPal after the people who fired him for incompetence turned it around and made it valuable enough that PayPal wanted it.
I presume you don't live in the US, because lol. Living is more expensive than ever before in history, but hey, shit like phones and TVs are cheap, yay?
What's wrong with liking cuties?
Sometimes you don't feel like eating a large orange.
Well, they are an ex-chairman, so hopefully that's a good sign?
It's not an assumption, it's just a matter of practical reality. If we're at best a decade off from that point, why pretend it could suddenly and unexpectedly improve to the point that it's unrecognizable from its current state? LLMs are neat, and scientists should keep working on them. If it weren't for all the nonsense "AI" hype we have currently, I'd expect to see them used rarely but quite successfully, since they'd be adopted on merit, not hype.
I mean, it varies. It's fun to just goof off in games sometimes. That's why sandbox games like Minecraft typically have creative modes; sometimes you just wanna play in the sandbox and have a good time without the more typical game parts tying you down.