What are the differences between AGI, transformative AI, and superintelligence?
These terms are all related attempts to define AI capability milestones[1] — roughly, "the point at which artificial intelligence becomes truly intelligent." There’s a lot of variance in how different people use them — we’re pretty confused about what these systems will look like, and it’s hard to find definitions that are natural.[2] But the most standard meanings are something like:
- AGI stands for "artificial general intelligence" and refers to AI programs that aren't just skilled at a narrow task (like playing board games or driving cars)[3] but that have a kind of intelligence they can apply to as wide a range of domains as humans can. Some call systems like Gato AGI because they can solve many tasks with the same model. However, the term is more often reserved for systems with at least human-level general competence, so AGI is typically still seen as a potential future development.[4] One concrete definition is that an AGI can do any economically productive task a human can, perhaps with a bit of on-the-job training. So if we saw a single AI system replacing humans across many different jobs, it would count as AGI.
- Transformative AI is any AI powerful enough to transform society.[5] Holden Karnofsky defines it as AI that causes at least as big an impact as the Agricultural or Industrial Revolutions, each of which increased the rate of economic growth many times over (see the sketch after this list for what such a speedup would mean in practice). Ajeya Cotra's "Forecasting Transformative AI with biological anchors" describes a "virtual professional," i.e., a program that can do most remote jobs, as an example of a system that would have such an impact.
- Superintelligence is defined by Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." This is a significantly higher bar than the concepts listed above, but it may be reached a short time after the others, e.g., because of recursive self-improvement.
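To give a feel for what "increasing economic growth many times over" means: a tenfold jump in the growth rate collapses the economy's doubling time from decades to a couple of years. The numbers below are illustrative assumptions for a back-of-the-envelope calculation, not figures from Karnofsky's posts:

```python
import math

def doubling_time_years(annual_growth_rate):
    """Years for economic output to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth_rate)

# Illustrative assumption: ~3% annual growth (roughly today's world economy)
# versus a tenfold speedup to 30%, the scale of jump in growth rates that
# "transformative" is meant to evoke.
print(round(doubling_time_years(0.03), 1))  # -> 23.4 years
print(round(doubling_time_years(0.30), 1))  # -> 2.6 years
```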
Other terms which are sometimes used include:
- Advanced AI is any AI that's much more powerful than current AI. The term is sometimes used as a loose placeholder for the other concepts here.
- Human-level AI is sometimes defined as any AI that can solve all the cognitive problems a human can solve, and sometimes means, more vaguely, AI that is roughly as intelligent as an average human. Current AI has a very different profile of strengths and weaknesses from humans, and this is likely to remain true of future AI: before AI is at least human-level at all tasks, it will probably be vastly superhuman at some important tasks while still being weaker at others. For example, a human-level AI could be superhuman at programming while struggling to write a good novel.
- Strong AI was defined by John Searle as the philosophical thesis that computer programs can have "a mind in exactly the same sense human beings have minds", but the term is sometimes used outside this context as more or less interchangeable with "AGI" or "human-level AI."
- Seed AI is any AI with enough AI programming ability to set off a recursive self-improvement process, maybe one that takes it all the way to superintelligence. An AI might not have to start off as an AGI to have sudden and dangerous impacts in this way.
- Turing Test-passing AI is any AI smart enough to fool human judges into thinking it's human. The level of capability required depends on how intense the scrutiny is: current language models trained to imitate human text can already seem human to a casual observer, despite not having general human-level intelligence. On the other hand, imitating an intelligence can be harder than outperforming it (in the same way that it’s harder to walk exactly like a turtle than to walk faster than a turtle), so it's also possible for smarter-than-human AI to fail the Turing Test.
- APS-AI is a term introduced by Joe Carlsmith in his report on existential risk from power-seeking AI. APS stands for Advanced, Planning, and Strategically aware. "Advanced" means it's more powerful than humans at important tasks; "Planning" means it's an agent that pursues goals by using its world models; "Strategically aware" means it has good models of its strategic situation with respect to humans in the real world. Carlsmith argues that these properties together create the risk of AI takeover.
- PASTA is an acronym for "Process for Automating Scientific and Technological Advancement," introduced by Holden Karnofsky in a series of blog posts. His thesis is that any AI powerful enough to automate human R&D is sufficient for sudden transformative impacts, even if it doesn't qualify as AGI.
- Uncontrollable AI means an AI that can circumvent or counter any measures humans take to correct its decisions or restrict its influence. An uncontrollable AI doesn’t have to be an AGI or superintelligence. It could, for example, just have powerful hacking skills that make it practically impossible to shut it down or remove it from the internet. An AI could also become uncontrollable by becoming very skilled at manipulating humans.
- The t-AGI framework, proposed by Richard Ngo, benchmarks the difficulty of a task by how long it would take a human to do it, and says an AI is a t-AGI if it can (in any amount of time) do most tasks of difficulty t. For instance, an AI that can recognize objects in an image, answer trivia questions, etc., is a "1-second AGI," because it can do most tasks that would take a human one second to do, while an AI that can do things like develop new apps and review scientific papers is a "1-month AGI."
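As a rough sketch of how such a classification might look in code (the task data, success threshold, and function name below are all illustrative assumptions, not part of Ngo's proposal):

```python
def is_t_agi(task_results, t_seconds, threshold=0.5):
    """Classify an AI as a t-AGI if it succeeds at most tasks that would
    take a human t seconds or less. task_results is a list of
    (human_time_seconds, ai_succeeded) pairs from some evaluation suite."""
    relevant = [succeeded for human_time, succeeded in task_results
                if human_time <= t_seconds]
    return bool(relevant) and sum(relevant) / len(relevant) > threshold

# Hypothetical evaluation data: (seconds a human would need, did the AI succeed)
tasks = [(1, True), (1, True), (60, True),
         (3600, False), (3600, False), (3600, False), (2_592_000, False)]
print(is_t_agi(tasks, t_seconds=1))     # True: handles most one-second tasks
print(is_t_agi(tasks, t_seconds=3600))  # False: fails most hour-or-less tasks
```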
These definitions have also changed over time. ↩︎
For a dialogue illustrating the difficulty of coming up with good definitions in mathematics, see “Proofs and Refutations” by Imre Lakatos. ↩︎
AI that excels at specific tasks is sometimes called “Narrow AI.” ↩︎
The term AGI suffers from ambiguity, to the point where some people avoid using it. Still, it remains the most common term for the cluster of concepts discussed on this page. ↩︎
The term is unrelated to the transformer architecture. ↩︎