What is artificial general intelligence (AGI)?

Loosely speaking, an artificial general intelligence (AGI) is a hypothetical future AI that is about as smart as a human. There is no agreement on an exact definition:

  • Wikipedia defines it as "AI that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks."
  • IBM defines it as "The science-fiction version of artificial intelligence, where artificial machine intelligence achieves human-level learning, perception and cognitive flexibility."
  • Some define it as AI that can do most economically valuable tasks (i.e., do most human jobs).
  • Some define it as AI that reasons in a way that generalizes to a wide range of problems, including problems in domains the AI hasn't encountered before.

AGI is often contrasted with narrow AI, which can only perform one specific task or a few closely related tasks, such as playing board games or recommending products.

Nobody has built AGI yet[1], but some AI labs are explicitly trying to. Many experts expect that AGI will be built in the not-too-distant future. We don’t know what the first AGI will look like or whether it can be produced by scaling current architectures (such as GPT).


  1. A few people disagree and argue that current LLMs are AGI. Blaise Agüera y Arcas and Peter Norvig made that claim in 2023, and Tyler Cowen made it in 2025 with the release of OpenAI o3. ↩︎


