this post was submitted on 13 Mar 2026
189 points (96.1% liked)

Technology

" ...
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.

Large language models learn by predicting which word comes next in a sequence. They have been trained on vast quantities of human-generated text, and the results have been remarkable: ChatGPT, Claude, and Gemini have demonstrated an ability to generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.

His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail (the approach that makes generative AI both powerful and prone to hallucination), JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience."
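To make the contrast concrete, here is a toy numpy sketch of the difference between a generative objective (reconstruct the target in full detail) and a JEPA-style objective (predict in a learned latent space). Everything in it is a made-up illustration — the 64-number observations, the hand-picked encoder — not AMI's actual architecture, where the abstraction would be learned:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": each observation is 64 numbers, but only the first 4
# (the underlying state) are predictable; the other 60 are pure noise.
def observe(state):
    return np.concatenate([state, rng.normal(size=60)])

state = rng.normal(size=4)
x_now = observe(state)    # context observation
x_next = observe(state)   # target observation: same state, fresh noise

# Generative objective: reproduce the target pixel-for-pixel.
# The noise dimensions are unpredictable, so this loss stays large.
pixel_loss = np.mean((x_now - x_next) ** 2)

# JEPA-style objective: an encoder maps observations into a latent
# space, and prediction error is measured there. Here the encoder is
# hand-picked to keep the 4 predictable dims and drop the noise
# (in the real architecture this abstraction has to be learned).
enc = np.zeros((64, 4))
enc[:4, :4] = np.eye(4)

latent_loss = np.mean((x_now @ enc - x_next @ enc) ** 2)
# latent_loss is exactly 0: the encoder ignored the unpredictable part.
```

The point of the toy: a loss measured in observation space punishes the model for failing to predict noise, while a loss measured in a well-chosen latent space does not. The hard, unsolved part is learning that latent space rather than hand-picking it.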

[–] merc@sh.itjust.works 3 points 2 hours ago (1 children)

LLMs are an obvious dead end when it comes to actual "intelligence" or understanding how the world works.

But, this sounds like a "draw the rest of the owl" situation.

"JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail."

Oh, it's that simple is it? Just have it "learn abstract representations of how the world works". Amazing how nobody thought to do that before!

I think I understand the distinction they're trying to draw. Current models are trained on billions of pictures of cats and billions of pictures of dogs. You feed one an image of Fido and it finds a point in 2500-dimensional space and knows whether that point is in the "cat space" or the "dog space". It can be very good at that, but it doesn't have any "understanding" of what makes something a cat vs. a dog. Humans, OTOH, aren't trained on billions of images. But, they learn about things like "teeth" and "whiskers" and "snouts" and "eyes". Within their knowledge of eyes, they spot that vertical slit pupils are unusual and distinctive, and part of what makes something "catlike". AFAIK, nobody has ever managed to create a system that learns abstract features like that without intensive human training.
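The commenter's "cat space / dog space" picture can be sketched as nearest-centroid classification in an embedding space. This is a hypothetical toy — random vectors stand in for what would really be an encoder run over billions of labeled images:

```python
import numpy as np

rng = np.random.default_rng(42)

DIM = 2500  # the commenter's 2500-dimensional embedding space

# Hypothetical "training" embeddings: cat points clustered one way,
# dog points another. In reality these come from a learned encoder.
cat_embeddings = rng.normal(loc=+0.1, size=(1000, DIM))
dog_embeddings = rng.normal(loc=-0.1, size=(1000, DIM))

# Reduce each region of the space to its centroid.
cat_center = cat_embeddings.mean(axis=0)
dog_center = dog_embeddings.mean(axis=0)

def classify(embedding):
    """Label a point by which region of the space it falls in.
    No notion of whiskers, snouts, or slit pupils -- just distance."""
    d_cat = np.linalg.norm(embedding - cat_center)
    d_dog = np.linalg.norm(embedding - dog_center)
    return "cat" if d_cat < d_dog else "dog"

fido = rng.normal(loc=-0.1, size=DIM)  # a dog-ish point
label = classify(fido)
```

The geometry can be arbitrarily accurate without containing anything like the abstract features ("teeth", "whiskers", "pupil shape") a human would name — which is exactly the gap the comment is pointing at.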

I like that they're trying something new. But, are they counting on a massive breakthrough on a problem that has existed since people first started theorizing about AI? Or, is it just a matter of refining a known process?

[–] SuspciousCarrot78@lemmy.world 1 points 2 hours ago* (last edited 1 hour ago)

Ppft. Simples. We already solved this Down Under.

https://interestingengineering.com/science/biological-computer-with-human-neurons-play-doom

I have no mouth. Yet, I must scream.