Shane Legg talks about the origin of the term artificial general intelligence, going back to a 1997 paper, and its now-popular definition.
Shane Legg, co-founder and Chief AGI Scientist at Google DeepMind, defines levels of artificial general intelligence (AGI) as a spectrum rather than a single binary threshold.
NOTE: He mentions a 1997 paper on nanotechnology security that defined AGI. I, Brian Wang, was at the Foresight Institute conferences in 1996 and 1997 where super artificial intelligence was debated. Foresight constantly had mind-expanding debates about the limits of technology and going beyond those limits.
This debate in 2011 had earlier versions in 1996 and 1997.
Superintelligent AI was felt to be inevitable once molecular nanotechnology arrived. It turned out that molecular nanotechnology advances lagged AI advances.
1. Minimal AGI around 2027
2. Full AGI 3–6 years after minimal AGI (2030–2033)
3. Superintelligence likely soon after Full AGI (within roughly one to two years)
Minimal AGI is an artificial agent that can perform all the kinds of cognitive tasks that typical humans can do. The bar is set at “typical” human performance to avoid being too low (where AI fails basic tasks humans easily handle) or too high (excluding many humans).
Current AI is uneven: superhuman in areas like multilingual fluency and general knowledge, but weak in continual learning and in visual/spatial reasoning (e.g., perspective within scenes, or graph and diagram reasoning).
Legg expects minimal AGI in a few years, guessing around two years from late 2025, which puts it at 2027–2028.
Full AGI is an AI that can achieve the full spectrum of human cognitive capabilities, including extraordinary feats (inventing new physics theories, composing groundbreaking symphonies, or producing revolutionary literature like Einstein or Mozart).
Artificial Superintelligence (ASI) is AI with the generality of AGI but far beyond human cognitive limits in capability. Legg acknowledges no perfect definition exists (every attempt has flaws), but it vaguely means vastly superior general intelligence.
He views human intelligence as NOT the upper limit of what is possible.
How to Test and Confirm AGI (Especially Minimal AGI)
Legg proposes a rigorous, human-referenced operational definition focused on generality.
There will be an initial battery of tests: a large suite of thousands of cognitive tasks for which typical human performance is known.
AI must match or exceed typical human level on all tasks.
Failure on any task = fails to meet minimal AGI (lacks sufficient generality).
Adversarial Phase
If it passes the standard suite, give teams of experts (with model access) 1–2 months to find any cognitive task typical humans can do where the AI fails.
If no such failure is found, it practically qualifies as minimal AGI.
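The two-phase procedure Legg describes can be sketched in code. This is an illustrative sketch only: the task names, score structure, and pass bars are hypothetical, not part of any real benchmark.

```python
def passes_battery(ai_scores: dict, human_typical: dict) -> bool:
    """Phase 1: the AI must match or exceed the typical-human bar on
    EVERY task in the suite; a single failure means insufficient generality."""
    return all(ai_scores.get(task, 0.0) >= bar
               for task, bar in human_typical.items())

def is_minimal_agi(ai_scores: dict, human_typical: dict,
                   adversarial_failures: list) -> bool:
    """Phase 2: expert red teams search for 1-2 months for any cognitive
    task typical humans can do but the AI cannot; passing the battery and
    surviving that search practically qualifies the system as minimal AGI."""
    return passes_battery(ai_scores, human_typical) and not adversarial_failures

# Hypothetical example: passes the battery and the adversarial phase.
bars = {"reading": 0.80, "spatial_reasoning": 0.70}
print(is_minimal_agi({"reading": 0.92, "spatial_reasoning": 0.75}, bars, []))
# prints True
```

The key design point is the universal quantifier: one failed task, or one adversarial find, disqualifies the system, which is what makes the definition a test of generality rather than of peak performance.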
He rejects narrower definitions, such as fixed checklists (e.g., Humanity’s Last Exam) or specific feats (cooking in an unfamiliar kitchen, turning $100K into $1M). These miss the core emphasis on broad cognitive generality.
Why Superintelligence Is Inevitable
Legg argues superintelligence is inevitable due to fundamental computational advantages over the human brain.
The human brain is constrained to:
~20 watts of power
A few pounds of weight
~100–200 Hz signal frequency
Electrochemical signals at ~30 m/s
AI in data centers can scale to:
Hundreds of megawatts, and eventually gigawatts or even terawatts
Millions of pounds of hardware
GHz–THz frequencies
Near light-speed signal propagation (~300,000 km/s)
This is 6–8 orders of magnitude advantage in the near term in power, size, bandwidth, and speed simultaneously. Just as machines already surpass humans physically (cranes lift more, telescopes see farther, computers store more knowledge), cognition will follow the same pattern. As algorithms and understanding improve, AI will far exceed human reasoning, invention, and other cognitive domains.
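The orders-of-magnitude claim is easy to check with back-of-envelope arithmetic. The data-center figures below are illustrative assumptions (100 MW of power, 5 million pounds of hardware, 1 GHz clocks), not numbers Legg cites:

```python
import math

# Approximate human-brain constraints from the text above.
brain = {"power_w": 20, "mass_lb": 3, "signal_hz": 200, "signal_m_per_s": 30}

# Illustrative data-center scale (assumed values, not Legg's figures).
datacenter = {"power_w": 100e6, "mass_lb": 5e6,
              "signal_hz": 1e9, "signal_m_per_s": 3e8}

for key in brain:
    orders = math.log10(datacenter[key] / brain[key])
    print(f"{key}: ~10^{orders:.1f}x advantage")
```

With these figures the advantage is roughly six to seven orders of magnitude in each dimension; gigawatt-scale power or THz clocks push it toward eight, consistent with the 6–8 range cited in the text.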
FOUR AI SCALING FACTORS AT THE SAME TIME
1. Power
2. Size
3. Bandwidth
4. Speed
Likely also algorithmic efficiency: algorithmic and other efficiency gains extract more capability from the same power, speed, and bandwidth.
Legg Predicts the Timeline
Likely Superintelligence by ~2028 or Shortly After?
Minimal AGI: Legg has consistently predicted 50% chance by 2028 (stated since 2009, reaffirmed in this 2025 interview).
Full AGI: a few years later, 3–6 years after minimal AGI, or roughly the early 2030s.
Superintelligence will follow relatively quickly after full AGI, driven by the massive computational scaling potential. While not giving an exact ASI date, the logic (unbounded scaling beyond human limits) suggests it could emerge within the late 2020s to early 2030s if progress continues.
What Are the “Scaling” Factors?
The path to superintelligence relies on a combination of scaling approaches, not just one.
Compute scaling — larger models, more energy, bigger data centers (the core driver for surpassing human limits).
More/better data — especially targeted data for weak areas (e.g., visual reasoning datasets).
Algorithmic and architectural improvements — e.g., episodic memory/retrieval systems for continual learning, new processes for reasoning.
Ongoing progress in metrics across many cognitive weaknesses, addressing the “long tail” of human-like tasks.
Legg sees no fundamental blockers. Current gaps are being closed through this multi-factor scaling. In summary, Legg views the arrival of minimal AGI around 2028 as highly plausible, full AGI soon after, and superintelligence as an inevitable outcome of computational scaling — leading to profound societal transformation that requires urgent preparation across all fields.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.

