AI 2027 depicts a possible future where artificial intelligence radically transforms the world in just a few intense years. It’s based on detailed expert forecasts — but how much of it will actually happen? Are we really racing towards a choice between a planet controlled by the elite and one where humans have lost control entirely?
My takeaway? Loss of control, racing scenarios, and concentration of power are all concerningly plausible, and among the most pressing issues the world faces.
Check out the video and the resources below, judge the scenario for yourself, and let me know in the comments: how realistic is this? What are you still confused about? What makes you feel skeptical? What do you think we can actually do about this?
*Where to find me*
Subscribe to AI in Context to get up to speed and join the conversation about AI. There’s a lot to figure out, and we might have less time than you think. It’s time to jump in.
You can also follow me for skits and explainers on YouTube Shorts, as well as on:
TikTok: https://www.tiktok.com/@ai_in_context
Instagram: https://www.instagram.com/ai_in_context/
This video is a production of 80,000 Hours. Find us at https://80000hours.org and subscribe to our main YouTube channel here: @eightythousandhours
*What you can do next*
To read more about what you might be able to do to help, or get involved, check out: https://80000hours.org/agi/
You can also check out the 80,000 Hours job board at https://jobs.80000hours.org
Or see what the authors of AI 2027 suggest doing next: https://blog.ai-futures.org/p/what-you-can-do-about-ai-2027
Or take a 2-hour course on the Future of AI: https://bluedot.org/courses/future-of-ai
You can tell your US or UK representatives you care about this issue in 60 seconds using this tool: https://controlai.com/take-action/
And if you just want some practical recommendations for how you and your family can get more prepared: https://benjamintodd.substack.com/p/how-can-an-ordinary-person-prepare
*Further reading and watching*
About AI 2027
Full report: https://ai-2027.com/
By Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
Update on their model: https://ai-2027.com/research/timelines-forecast#2025-may-7-update
The lead author’s change in median forecast to 2028: https://x.com/DKokotajlo/status/1940270575248973910
For more videos about AI risk, check out:
Previous video about AI 2027: https://www.youtube.com/watch?v=k_onqn68GHY
Could AI wipe out humanity? | Most pressing problems: https://www.youtube.com/watch?app=desktop&v=qzyEgZwfkKY
Intro to AI Safety by Rob Miles: https://www.youtube.com/watch?v=pYXy-A4siMw
Me on Computerphile: https://www.youtube.com/watch?v=pYP0ynR8h-k
For more on what it means for an AI to “seek reward”, check out my short video: https://www.youtube.com/shorts/OoClSkTd6yY
To read more about misalignment and AI risk: https://80000hours.org/problem-profiles/artificial-intelligence/
To read more about why AGI by 2030 is plausible: https://80000hours.org/agi/guide/when-will-agi-arrive/
*Chapters*
0:00 Introduction
1:15 The World in 2025
3:53 The Scenario Begins
6:07 Sidebar: Feedback Loops
7:21 China Wakes Up
10:11 Sidebar: Chain of Thought
10:52 Better-than-human Coders
11:46 Sidebar: Misalignment in the Real World
12:08 Agent-3 Deceives
15:18 Sidebar: How Misalignment Happens
17:53 The Choice
20:07 Ending A: The Race
24:08 Ending B: Slowdown
26:30 Zooming Out
29:04 The Implications
31:19 What Do We Do?
33:30 Conclusions and Resources
*Credits*
Directed and Produced by Phoebe Brooks: https://pbrooksfilms.com/
Written by Phoebe Brooks and Aric Floyd
Editing, Graphics and Animation by Phoebe Brooks, Sam Watkins and Daniel Recinto: https://www.watkinsfilms.com/, http://behance.net/danielrecinto
Executive Produced by Chana Messinger
Production assistance from Charlotte Maxwell, Jack Worrall, David Erwood and Jake Morris
With special thanks to Daniel Kokotajlo, Ryan Greenblatt, Nate Soares, Max Harms, Katja Grace, Mark Beall, Seán Ó Héigeartaigh and Eli Lifland
And thanks to Bella Forristal, Arden Koehler, Ailbhe Treacy, Rob Wiblin, Sean Riley, Siliconversations, Mathematicanese, Valerie Richmond, Daria Ivanova, Sloane Siegel, Brendan Hurst, Katy Moore, Mark DeVries, Ines Fernandez, Francesca Forristal, Rob Miles, Elizabeth Cox, Drew Spartz, Petr Lebedev, Mithuna Yoganathan, Conor Barnes