AGI House Dinner Series | April 27, 2026

At AGI House, we bring together frontier researchers, founders, and investors to answer a deceptively simple question: what actually gets us to AGI?


Oriol Vinyals (VP Research @ Google DeepMind; Gemini co-architect), Andrew Dai (CEO @ Elorean AI; 12-year DeepMind veteran), Jiajun Wu (Stanford professor, vision & robotics), Fan-yun Sun (Cofounder @ Moonlight AI), Zayd Enam (CEO @ EnamCo; Cofounder @ Cresta), Nick Oupurov (CEO @ Fleet AI), Nazneen Rajani (CEO @ Collinear; ex-Hugging Face), Xiang Deng (Cofounder @ NeoCognition), Alex Wang (Stanford PhD), Brian Zhan (Partner @ Striker Venture Partners), Bill Sun (CEO @ GAlpha; early Google Brain attention researcher), Andrew Ma (Director @ Turing), Rocky Yu (CEO @ AGI House, Host)

World Models, Agents, and the Path to AGI

Inside the AGI House Dinner Series

At our recent dinner on World Models, Agents, and the Path to AGI, one thing became clear—there’s no single path forward. But there are distinct fault lines shaping the future.

This post distills the most important ideas from the room.

1. What is a World Model—Really?

The concept of a “world model” predates modern AI hype by decades.

In classical control theory (1960s–70s), a world model was simple and precise:

A function that maps state + action → future state

In other words: if I do X, what happens?
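The classical definition can be sketched in a few lines. This is a hypothetical 1-D gridworld transition function, purely illustrative—the `transition` function and gridworld setup are my own example, not a system discussed at the dinner.

```python
# Minimal sketch of the classical world-model definition:
# a transition function mapping (state, action) -> future state.
# The 1-D gridworld here is an illustrative assumption.

def transition(state: int, action: int, size: int = 5) -> int:
    """Deterministic world model for a 1-D gridworld.

    state:  current cell index, 0..size-1
    action: -1 (move left) or +1 (move right)
    """
    # Clamp to the grid so the prediction stays in valid states.
    return max(0, min(size - 1, state + action))

# "If I do X, what happens?" -- roll the model forward
# without ever touching a real environment.
state = 2
for action in [1, 1, 1, -1]:
    state = transition(state, action)

print(state)  # -> 3
```

The point of the formalism: planning becomes cheap, because you can query the model instead of the world.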

Today, that definition has fragmented.

Some see world models as:

  • Learned simulations of environments (e.g. video-based models)
  • Implicit representations inside large neural networks
  • Domain-specific tools (robotics, games, finance)

The key disagreement:

Do we actually need explicit world models for AGI?

  • One camp argues end-to-end systems are enough—just scale and let the model learn everything implicitly.
  • The other argues explicit structure is necessary, especially in data-scarce domains like robotics.

This isn’t just philosophical—it directly affects how systems are built.

2. Gaming: The First True World Model Playground

Gaming has quietly become the most fertile ground for world models.

There are two distinct paradigms emerging:

World Models as a Product

  • Entire environments generated by neural networks
  • Think: playable worlds, dynamic environments, infinite content

World Models as a Tool

  • Training grounds for agents
  • Simulated environments to learn skills before deployment

The technical challenges are non-trivial:

  • Generalizing across game mechanics and rules
  • Designing training data distributions that avoid overfitting
  • Balancing realism against controllability

But the opportunity is massive.

The gaming industry is already larger than movies—and world models could fundamentally reshape how games are built.

This isn’t just a feature. It’s a new platform layer.

3. Robotics: Where World Models Actually Matter

If gaming is abundant-data paradise, robotics is the opposite.

  • Real-world data is scarce and expensive
  • Failures are costly
  • Iteration cycles are slow

This is where world models become not just useful—but necessary.

Key insights from the discussion:

  • Sim-to-real remains unsolved
  • YouTube-scale video data hasn’t translated into robotics breakthroughs
  • High-fidelity physics simulation is still a bottleneck

There are early signals:

  • A handful of companies have demonstrated promising results with learned world models
  • New scaling laws for robotics are starting to emerge

But compared to digital domains, physical intelligence is still behind.

Digital AGI may arrive soon. Physical AGI is a different timeline entirely.

4. Agents: The System Matters More Than the Model

One of the most practical takeaways from the dinner:

The biggest gains today are not from better models—but from better systems around them.

There’s an ongoing debate:

Should we improve the model weights?

  • Expensive
  • Slow
  • Requires frontier-level compute

Or improve the harness (the system around the model)?

  • Faster iteration
  • Easier to deploy
  • Often delivers more immediate value

The emerging consensus:

The sweet spot is “good enough models + excellent infrastructure.”

This includes:

  • Tool use
  • Memory systems
  • Orchestration layers
  • Human-in-the-loop control
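The pieces above compose into a loop. Here is a deliberately minimal sketch of a harness, under loud assumptions: `call_model` is a stub standing in for any LLM API, the `calc` tool and escalation rule are invented for illustration, and a real harness would be far richer.

```python
# Hedged sketch of an agent "harness": the system around the model,
# not the model itself. `call_model` is a stand-in for a real LLM call;
# the tool registry and handoff rule are illustrative assumptions.

def call_model(prompt: str) -> str:
    # Stub: echoes a tool result back as the answer once one exists.
    if "->" in prompt:
        return "DONE:" + prompt.rsplit("-> ", 1)[1]
    return "TOOL:calc:2+2" if "calc" in prompt else "DONE:unknown"

# Tool use: a registry of callable tools (here, a toy adder).
TOOLS = {"calc": lambda expr: str(sum(int(x) for x in expr.split("+")))}

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = [task]                      # memory: append-only transcript
    for _ in range(max_steps):
        reply = call_model("\n".join(memory))
        if reply.startswith("TOOL:"):    # orchestration: route to a tool
            _, name, arg = reply.split(":", 2)
            memory.append(f"{name} -> {TOOLS[name](arg)}")
        elif reply.startswith("DONE:"):
            return reply[5:]
    return "ESCALATE"                    # human-in-the-loop: hand control back

print(run_agent("please calc 2+2"))  # -> 4
```

Note that every line of leverage here—the tool registry, the memory, the step budget, the escalation path—lives outside the model weights, which is exactly the "good enough models + excellent infrastructure" argument.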

Even top-tier models still struggle with:

  • Choosing the right level of abstraction
  • Knowing when to hand control back to humans
  • Solving last-mile enterprise problems

5. Where AI Actually Works Today

Despite all the hype, real deployment is concentrated in constrained environments.

Examples discussed:

  • Insurance workflows generating significant revenue
  • Coding environments showing strong productivity gains
  • Customer service and credit operations

Why these work:

  • Clear feedback loops
  • Structured data
  • Well-defined success metrics

This leads to a key investment insight:

The best opportunities aren’t necessarily the biggest visions—they’re the domains where models already perform reliably.

6. The AGI Timeline: Closer Than It Feels

One definition from the room stood out:

AGI is an agent that can learn from experience at a reasonable rate.

Under that definition:

  • Digital AGI could arrive within this decade—possibly within 1–2 years
  • Coding is likely the gateway domain
  • Physical AGI will take longer due to real-world constraints

But there’s a catch.

The Real Bottleneck: Compute

  • Energy
  • Cooling
  • Memory bandwidth
  • Chip manufacturing

Even today:

Most compute is spent on research, not production

At the same time, AI is starting to accelerate its own progress:

  • Automated research assistants
  • Experiment generation
  • Paper synthesis

We’re entering a feedback loop.

7. Investment Dynamics: Talent Over Everything

From the investor perspective, one principle dominated:

Back the best researchers in the most important categories.

Not:

  • Business models (too early)
  • Monetization strategies (will evolve)

But:

  • Technical leadership
  • Category dominance

Emerging opportunity areas:

  • Gaming world models (new engines beyond traditional platforms)
  • Robotics infrastructure
  • Agent systems for enterprise

Meanwhile, the competitive landscape remains fluid:

  • Leadership in specific domains shifts quickly
  • Large labs can catch up fast when focused
  • The frontier is still wide open

8. The Unsolved Problems

Despite rapid progress, several hard problems remain:

Cross-Modality Learning

  • Video data doesn’t meaningfully improve reasoning benchmarks
  • Bridging perception and cognition is still unsolved

Architecture vs Scaling

  • Most labs continue scaling existing paradigms
  • Some are exploring new architectures—but cautiously

Self-Improving Systems

  • AI can assist research—but not replace human judgment
  • Taste, problem selection, and intuition remain human advantages

9. Why AGI House Exists

AGI House was built for exactly these conversations.

Across dinners, fellowships, and collaborations, the goal is simple:

Accelerate the transition from AI breakthroughs → real-world impact

Closing Thought

The path to AGI is not a straight line.

It’s a convergence:

  • World models (explicit or implicit)
  • Agent systems
  • Infrastructure and compute
  • Human judgment and taste

The biggest mistake right now is assuming one paradigm will win.

The reality is messier—and more interesting.

And that’s exactly why we host these dinners.