The ARC Prize organization designs benchmarks built around tasks that humans complete easily but that remain difficult for AIs such as LLMs, "reasoning" models, and agentic frameworks.

ARC-AGI-3 is the first fully interactive benchmark in the ARC-AGI series. It comprises hundreds of original turn-based environments, each handcrafted by a team of human game designers. There are no instructions, no rules, and no stated goals. To succeed, an AI agent must explore each environment on its own, figure out how it works, discover what winning looks like, and carry what it learns forward across increasingly difficult levels.
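
As a rough illustration of the interaction loop such an agent faces, here is a minimal sketch in Python. The ToyEnv class and its reset/step/actions interface are purely illustrative stand-ins, not the actual ARC-AGI-3 agents API; the point is that the agent only ever sees observations and a level-complete signal, never the hidden rule.

    import random

    class ToyEnv:
        """Illustrative stand-in for an ARC-AGI-3 style environment (not the real
        API). The agent is told nothing; the hidden rule here is that choosing
        action 2 three times clears the only level."""

        def __init__(self):
            self.presses = 0

        def actions(self):
            return [0, 1, 2, 3]

        def reset(self):
            self.presses = 0
            return {"counter": self.presses}   # opaque observation, meaning unknown

        def step(self, action):
            if action == 2:
                self.presses += 1
            level_complete = self.presses >= 3
            return {"counter": self.presses}, level_complete, level_complete

    def explore(env, max_turns=200):
        """Naive exploration loop: no instructions, no rules, no stated goal.
        The agent only sees observations and a level-complete signal."""
        obs = env.reset()
        for turn in range(max_turns):
            action = random.choice(env.actions())     # placeholder policy
            obs, level_complete, done = env.step(action)
            if level_complete:
                print(f"level cleared after {turn + 1} turns")
            if done:
                return
        print("ran out of turns without discovering the goal")

    explore(ToyEnv())

A stronger agent would replace the random policy with one that remembers which actions changed the environment and carries that knowledge forward to later levels, which is exactly what the benchmark is probing.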

Previous ARC-AGI benchmarks predicted and tracked major AI breakthroughs, from reasoning models to coding agents. ARC-AGI-3 points to what’s next: the gap between AI that can follow instructions and AI that can genuinely explore, learn, and adapt in unfamiliar situations.

You can try the tasks yourself here: https://arcprize.org/arc-agi/3

Here is the current ARC-AGI-3 leaderboard for state-of-the-art models:

  • OpenAI GPT-5.4 High - 0.3% success rate at $5.2K
  • Google Gemini 3.1 Pro - 0.2% success rate at $2.2K
  • Anthropic Opus 4.6 Max - 0.2% success rate at $8.9K
  • xAI Grok 4.20 Reasoning - 0.0% success rate at $3.8K

ARC-AGI-3 Leaderboard
(Cost on a logarithmic horizontal axis. Note that the vertical axis runs from 0% to 3% in this graph. If human scores were included, they would sit at 100%, at a cost of approximately $250.)

https://arcprize.org/leaderboard

Technical report: https://arcprize.org/media/ARC_AGI_3_Technical_Report.pdf

For an environment to be included in ARC-AGI-3, it must pass a minimum "easy for humans" threshold. Each environment was attempted by 10 people, and only environments that could be fully solved by at least two human participants (independently) were considered for inclusion in the public, semi-private, and fully private sets. Many environments were solved by six or more people. As a reminder, an environment is considered solved only if the test taker completed all levels upon seeing the environment for the very first time. As such, all ARC-AGI-3 environments are verified to be 100% solvable by humans with no prior task-specific training.
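
Stated as a filter, the inclusion rule is straightforward. A minimal sketch with made-up per-participant results (one boolean per person, True meaning all levels were completed on first sight):

    # Illustrative numbers only: one boolean per participant, True meaning all
    # levels were completed on that person's very first exposure.
    human_results = {
        "env_a": [True, True, False, True, False, False, True, True, False, False],
        "env_b": [False, True, False, False, False, False, False, False, False, False],
        "env_c": [True, True, True, True, True, True, False, True, False, False],
    }

    # Only environments independently solved by at least two participants are
    # eligible for the public, semi-private, and fully private sets.
    eligible = {env for env, attempts in human_results.items() if sum(attempts) >= 2}

    print(sorted(eligible))   # ['env_a', 'env_c']; env_b had only one solver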

  • mechoman444@lemmy.world · 3 days ago

    I fully understand the analogy being presented. It is a poor analogy and fundamentally incorrect because that is not how LLMs function. They do not “read back Wikipedia pages,” which is a complete misunderstanding of the technology, not a minor lack of precision.

    I am not disputing that it is an analogy, nor am I claiming that exact precision is necessary to analyze it. The point remains: the analogy fails.

    What is curious is how people focus on my tone, saying I am aggressive or should be more precise, rather than engaging with the substance of my argument. So far, no one has directly refuted my points. This suggests that many responding are simply following the anti-AI bandwagon without understanding the technology, which is both reductive and disappointing.

    • hitmyspot@aussie.zone · 3 days ago

      No, the analogy is about regurgitating data without understanding it. It's more complex than that, but the gist is that they don't understand or have knowledge of the data being presented.

      They are statistical models of what desirable output looks like. They don't understand what they give as an answer. That is why they hallucinate information that sounds plausible and confident.

      We're not refuting your point about how the technology works; we're disputing your claim that the person you replied to provided a poor analogy. They didn't. It served the purpose it was designed for. If you don't understand that, that's on you, not them. Maybe ask an AI to explain. ;)

      • mechoman444@lemmy.world · 2 days ago

        Someone else in the comments said it perfectly: "AI is just data regurgitation. It's like calling me highly intelligent because I read you a paragraph from Wikipedia. I didn't know anything. I just read a thing and said it out loud."

        Christ on a stick.

        The original analogy literally states "AI is just data regurgitation," and now you're what? Saying it's more complex? Ever heard of a motte and bailey? Because that's what you're doing now.

        Once again, for the people in the back: the analogy is a failure. It does not work. LLMs are not regurgitation machines.

        Motte and bailey, so it's faster for you to look up.

        • hitmyspot@aussie.zone · 2 days ago

          They simplified it, and also used hyperbole. That’s not the same as motte and bailey.

          You’re being too literal, while still being imprecise. It’s likely why you’re struggling with what the analogy is for.

          • mechoman444@lemmy.world · 2 days ago

            He is claiming the analogy works, then retreating to a more defensible position by admitting the system is more complex.

            I am not being overly simplistic or imprecise. I am stating plainly that the analogy fails. LLMs do not regurgitate stored information. They generate novel outputs by statistically modeling and interpreting patterns in their training data. I supported that position with objective facts, and no one has attempted to directly refute them. Instead, the responses rely on vague arguments about “precision” and “simplicity,” which do not address the core claim.
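
            To make "statistically modeling patterns" concrete: at each step the model scores every possible next token and samples from that distribution, then repeats. A toy sketch (the tiny vocabulary and hand-written probabilities are obviously made up; a real LLM computes them with a trained network over tens of thousands of tokens):

                import random

                # Toy next-token table: given the last token, a probability over what comes next.
                # A real LLM computes these probabilities with a trained network; it does not
                # store or look up sentences.
                next_token_probs = {
                    "the":  {"cat": 0.5, "dog": 0.4, "moon": 0.1},
                    "cat":  {"sat": 0.6, "ran": 0.4},
                    "dog":  {"sat": 0.3, "ran": 0.7},
                    "moon": {"rose": 1.0},
                    "sat":  {"quietly": 1.0},
                    "ran":  {"quickly": 1.0},
                }

                def generate(start, steps=3):
                    tokens = [start]
                    for _ in range(steps):
                        probs = next_token_probs.get(tokens[-1])
                        if not probs:
                            break
                        choices, weights = zip(*probs.items())
                        tokens.append(random.choices(choices, weights=weights)[0])  # sample, don't retrieve
                    return " ".join(tokens)

                print(generate("the"))  # e.g. "the dog ran quickly": assembled, not quoted from storage

            Nothing in that loop retrieves a stored sentence. The output is assembled token by token from the modeled distribution, which is why it can be both novel and confidently wrong.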

            • hitmyspot@aussie.zone · 2 days ago

              The analogy takes one sentence to explain. Your explanation took five paragraphs.

              That is the point of an analogy: to make something "analogous" to something familiar, so as to avoid a long explanation.

              You keep asking someone to refute your facts. They are not in dispute. What's in dispute is whether the analogy works. It does, as it represents the fact that the AI does not understand its output. The analogy does not reference the search, nor the human doing the reading.

              The point is that the AI LLM does not understand what it is outputting, in the same way that a person does not need to understand a Wikipedia page to read it.

              Rather than a Wikipedia page, perhaps you'd get the analogy better if they had said reading a page from an advanced physics textbook. The point is that presenting information accurately does not imply understanding in the case of AI. That was represented perfectly well in the analogy, which was its purpose.

              • mechoman444@lemmy.world · 1 day ago

                No. That is not what the analogy means. That is what you are choosing to extract from it because it supports the direction you want this exchange to go.

                The use of the word “regurgitate” carries a very specific implication. It suggests that LLMs retrieve and repeat stored information verbatim. That is not how they function. We both appear to agree on that point.

                LLMs do not rely on stored facts in the way the analogy implies. They generate outputs by modeling patterns in data, producing responses that are often novel rather than retrieved.

                Whether or not the model understands or comprehends the content is irrelevant to this distinction. Comprehension is not a requirement for the system to function. So yes, the analogy is overly simplistic and ignores the actual mechanism at work.

                To be precise: it does not matter that the model lacks awareness or understanding. It is still capable of analyzing patterns and generating new outputs from its training data. That is not regurgitation.

                As concisely as I can: LLMs do not regurgitate data; the analogy fails.