this post was submitted on 27 Mar 2026
609 points (97.2% liked)


The ARC Prize organization designs benchmarks specifically crafted around tasks that humans complete easily but that are difficult for AIs such as LLMs, "reasoning" models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. It comprises hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
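The interaction loop described above can be sketched in miniature. Note that the `ToyEnvironment` class and its interface below are purely illustrative stand-ins, not the actual ARC-AGI-3 API; the point is only to show an agent that discovers hidden goals by trial and error and carries what it learns forward across levels:

```python
import random

class ToyEnvironment:
    """A stand-in for an ARC-AGI-3-style environment (hypothetical interface).

    The agent gets no rules: only an observation, a fixed action set,
    and feedback on whether the current level has just been completed.
    """
    def __init__(self, levels=3, seed=0):
        rng = random.Random(seed)
        self.levels = levels
        self.level = 0
        # Hidden goal per level: an action the agent must discover.
        self.goals = [rng.randrange(4) for _ in range(levels)]

    def actions(self):
        return range(4)

    def step(self, action):
        """Apply an action; return True if it solved the current level."""
        if action == self.goals[self.level]:
            self.level += 1
            return True
        return False

    def done(self):
        return self.level >= self.levels


def explore(env, max_turns=100):
    """Trial-and-error agent: remembers which actions solved earlier
    levels and tries those first later (carrying learning forward)."""
    learned = []   # actions that solved previous levels
    turns = 0
    while not env.done() and turns < max_turns:
        # Prefer previously successful actions, then sweep the rest.
        candidates = learned + [a for a in env.actions() if a not in learned]
        for action in candidates:
            turns += 1
            if env.step(action):
                if action not in learned:
                    learned.append(action)
                break  # level solved; re-enter the outer loop
            if turns >= max_turns:
                break
    return env.done(), turns
```

The real benchmark environments are of course far richer (grids, objects, multi-step dynamics), but the structure is the same: no instructions, only actions and observations.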

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what's next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current leaderboard for ARC-AGI-3, using state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
(Logarithmic cost on the horizontal axis. Note that the vertical scale goes from 0% to 3% in this graph. If human scores were included, they would be at 100%, at the cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

In order for an environment to be included in ARC-AGI-3, it needs to pass the minimum "easy for humans" threshold. Each environment was attempted by 10 people. Only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully-private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker was able to complete all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
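For concreteness, the inclusion rule above reduces to a simple filter. The function name and data shape here are illustrative, not taken from the report:

```python
def passes_human_threshold(solve_results, min_solvers=2):
    """An environment qualifies only if at least `min_solvers` of its human
    testers independently completed every level on first sight.

    `solve_results` is one boolean per tester (10 per environment in the
    report), True meaning that tester completed all levels.
    """
    return sum(solve_results) >= min_solvers

# Attempted by 10 people; 3 solved it fully -> included.
included = passes_human_threshold(
    [True, False, True, False, False, False, True, False, False, False]
)
```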

50 comments
[–] SuspciousCarrot78@lemmy.world 5 points 3 days ago (1 children)

"...specifically crafted to demonstrate tasks that humans complete easily"

Motherfucker, I can't work out Minesweeper. I got zero fucking chance with your mystery box bloop game.

[–] Sam_Bass@lemmy.world 4 points 3 days ago (1 children)

AI code is prewritten, and the AI is unable to edit it. Humans edit their "code" every second.

[–] Lumisal@lemmy.world 4 points 3 days ago

It's funny, because that means something like freaking Neurosama, made by a YouTuber, could probably do better at AGI than these multi-billion-dollar companies, since it was designed to modify its own code depending on the task given (and at one point it did so without being directly prompted).

Of course, this makes Neurosama completely useless at work-focused tasks outside of coding, because it can and does refuse to do things on purpose.

And that's exactly why you won't see AGI coming from any huge business corporation - because they're trying to make something that replaces workers, rather than something that has no direct purpose.

(Disclaimer - this is not to say Neurosama is AGI in any way, just that it could probably do the tasks much better than the mainstream AIs can, because it has been built with flexibility and adaptability in mind.)

[–] WorldsDumbestMan@lemmy.today 3 points 3 days ago

I'm not sure such a general term is factual.

I doubt I can adapt 100%
