What happens when the smartest thing on Earth… isn’t human?
In this Donovan Dread episode, we follow a fact-based path from today’s helpful AI tools to tomorrow’s existential risks—told in plain English. We cover why some AI researchers assign a non-zero “p(doom)” (probability of human-ending outcomes), how a takeover wouldn’t look like robots in the streets, and the three realistic pathways: a silent cyber-lock on critical infrastructure, an unstoppable flood of AI-made misinformation, and a self-improvement loop that sprints past human control. Then we ground it with a reality check: what’s true today, what isn’t, and why speed—not sci-fi—is the real tension.
- Category
- Artificial Intelligence
- Tags
- Donovan Dread, AI extinction, AI apocalypse