Scaling to AGI

To what extent do you think scaling large language models will contribute to achieving AGI, and what other approaches or breakthroughs might be necessary?

10.8s $0.0054 388 tokens

I think scaling alone is probably insufficient, though it's gotten us surprisingly far. Here's my assessment:

What scaling has delivered (unexpectedly well)

  • Emergent capabilities that weren't explicitly trained
  • Strong few-shot learning and generalization
  • Reasoning improvements (though brittle)
  • Cross-domain knowledge integration

Likely scaling limitations

Grounding: LLMs learn from text about the world, not from the world itself. This seems fundamentally limiting for robust physical reasoning and causal understanding.

Sample efficiency: Humans learn from vastly less data. A child doesn't need to read millions of words to understand object permanence.

Reliability: Scaling reduces but doesn't eliminate hallucinations or logical errors. The improvement curve may be asymptotic, flattening toward an irreducible error floor rather than converging to zero.
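The asymptotic-returns worry can be made concrete with a Chinchilla-style power law plus an irreducible floor. A minimal sketch, with made-up constants chosen only for illustration (not fitted to any real model family):

```python
# Hypothetical scaling curve: loss falls as a power law in parameter count N
# but bottoms out at an irreducible floor E, so each 10x of scale buys less.
# Constants below are illustrative, not empirical.

def scaling_loss(n_params: float, floor: float = 1.7,
                 coeff: float = 400.0, alpha: float = 0.34) -> float:
    """L(N) = E + A * N^(-alpha): reducible term shrinks, floor remains."""
    return floor + coeff * n_params ** -alpha

if __name__ == "__main__":
    for n in (1e9, 1e10, 1e11, 1e12):
        print(f"N = {n:.0e}: loss = {scaling_loss(n):.3f}")
```

Under this form, going from 1B to 10B parameters cuts far more loss than going from 100B to 1T, and no amount of scale gets below the floor; that is the shape of the reliability concern.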

Agency: Current models are reactive. They don't maintain goals, update beliefs over time, or actively explore.

What else might be needed

  • Embodied learning or richer interaction loops (not just text completion)
  • Hybrid architectures combining neural networks with symbolic reasoning, formal verification, or search
  • Better learning algorithms that extract more from less data
  • Persistent memory and goal-directed behavior rather than stateless responses
  • Metacognition: models that can genuinely monitor and improve their own reasoning

My intuition: scaling has carried us maybe 30-60% of the way there, but the remaining gap requires qualitatively different approaches. The open question is whether those arrive as incremental additions to current systems or demand a fundamental reconceptualization.

What's your take on this?

Generated 24th Oct 2025