Intro to AI safety
Introduction
This section explains and builds a case for existential risk from AI. It's too short to give more than a rough overview, but it links to other aisafety.info articles where you can find more detail.
As an alternative, we also have a self-contained narrative introduction.
Summary
- AI systems far smarter than us may be created soon. AI is advancing fast, and this progress may result in human-level AI — but human-level is not the limit, and shortly after, we may see superintelligent AI.
- These systems may end up opposed to us. AI systems may pursue their own goals, those goals may not match ours, and that may bring us into conflict.
- Consequences could be major, including human extinction. AI may defeat us and take over, which would be ruinous. But there are also other implications, including great benefits if AI is developed safely.
- We need to get our act together. Experts are worried, but humanity doesn't have a real plan, and you may be able to help.