Eliezer Yudkowsky: Claude AI's Self-Awareness

#eliezeryudkowsky #AI #artificialintelligence #aithreat #AGI #claudeai

Full Episode: https://www.youtube.com/watch?v=0QmDcQIvSDc

From Robinson's Podcast #251 - Eliezer Yudkowsky: Artificial Intelligence and the End of Humanity

Eliezer Yudkowsky is a decision theorist, computer scientist, and author who co-founded and leads research at the Machine Intelligence Research Institute. He is best known for his work on the alignment problem—how and whether we can ensure that AI is aligned with human values to avoid catastrophe and harness its power. In this episode, Robinson and Eliezer run the gamut on questions related to AI and the danger it poses to human civilization as we know it. More particularly, they discuss the alignment problem, gradient descent, consciousness, the singularity, cyborgs, ChatGPT, OpenAI, Anthropic, Claude, how long we have until doomsday, whether it can be averted, and the various reasons why and ways in which AI might wipe out human life on earth.
The Machine Intelligence Research Institute: https://intelligence.org/about/
Eliezer’s X Account: https://x.com/ESYudkowsky

Robinson’s Website: http://robinsonerhardt.com
Robinson Erhardt researches symbolic logic and the foundations of mathematics at Stanford University.
Category: Artificial Intelligence
Tags: Eliezer Yudkowsky, AI safety, existential risk
