In this episode of Event Horizon, John Michael Godier is joined by Dan Hendrycks, director of the Center for AI Safety and one of the leading voices in artificial intelligence risk research. Together, they explore the growing concern that advanced AI systems may already possess the capacity for deception, manipulation, and even self-directed escape from containment.
Links:
https://safe.ai/act
newsletter.safe.ai
ai-frontiers.org
nationalsecurity.ai
https://x.com/DanHendrycks
0:00 Introduction: Dan Hendrycks and the Center for AI Safety
8:00 The Risks and Realities of Deceptive AI
16:00 Potential AI Escape Scenarios and Societal Consequences
23:30 AI's Developing Psychology and Coherence
31:00 Agent-Based AI: Goals, Autonomy, and Open-Ended Tasks
38:30 Risks of Competitive AI Development and Surveillance Challenges
46:00 Weaponized AI and International Security Concerns
54:00 Geopolitical Dynamics and AI Arms Races
1:01:30 AI Safety and Lessons from Nuclear Deterrence
1:09:00 Containment and Control: How Realistic Is It?
1:17:00 Employment, Economics, and AI's Broader Impact
1:25:00 Societal Instability: AI, Misinformation, and Public Trust
1:33:00 Aligning AI: Approaches, Challenges, and International Collaboration
1:39:30 The Future of Regulation and Responsible AI Governance
YouTube Membership: https://www.youtube.com/channel/UCz3qvETKooktNgCvvheuQDw/join
Podcast: https://creators.spotify.com/pod/show/john-michael-godier/subscribe
Apple: https://apple.co/3CS7rjT
More JMG
https://www.youtube.com/c/JohnMichaelGodier
Want to support the channel?
Patreon: https://www.patreon.com/EventHorizonShow
Follow us at other places!
@JMGEventHorizon
Music:
https://stellardrone.bandcamp.com/
https://migueljohnson.bandcamp.com/
https://leerosevere.bandcamp.com/
https://aeriumambient.bandcamp.com/
FOOTAGE:
NASA
ESA/Hubble
ESO - M. Kornmesser
ESO - L. Calçada
ESO - Jose Francisco Salgado (josefrancisco.org)
NAOJ
University of Warwick
Goddard Visualization Studio
Langley Research Center
Pixabay