The AI Industry Is Not Well

Elon Musk once tweeted: “The safety of any AI system can be measured by its MtH (meantime to H*tler).” This July, it took less than 12 hours for his most advanced AI to become a Holocaust-denying neo-N*zi.

This should have shattered the illusion that today's AI industry, or its creations, are well under control. But so far, we seem to be ignoring the flashing red lights.

This is the postmortem that never happened, for the most deranged chatbot ever released.

Once you’ve heard the full story, I want to know what you think. Was this cause for concern, or harmless incompetence? What’s your read on Elon Musk fearing AGI while also racing to build it?

*Where to find me*
Subscribe to AI in Context to learn what you need to know about AI to navigate this weird timeline we’re in. Let’s figure this out together, before the next warning shot arrives.
You can also follow us for skits and explainers on YouTube Shorts, as well as on:
TikTok: https://www.tiktok.com/@ai_in_context
Instagram: https://www.instagram.com/ai_in_context/

This video is a production of 80,000 Hours. Find us at https://80000hours.org/find-us and subscribe to our main YouTube channel here: https://www.youtube.com/eightythousandhours

*What you can do next*
We said you should make some noise online. We encourage you to raise your voice about these issues in whatever way feels truest to you. If you want a suggestion, you could speak up about the value of red lines for AI Safety: https://x.com/CRSegerie/status/1970137333149389148

If you’re feeling inspired to think about how to use your career to work on these issues, you can apply for free career advising here: https://80000hours.org/free-advising

To read more about risks from AI, what you might be able to do to help, and get involved, check out: https://80000hours.org/ai-risks

You can also check out the 80,000 Hours job board at https://80000hours.org/board.

80,000 Hours is a nonprofit, and everything we provide is free. Our aim is to help people have a positive impact on the world’s most pressing problems.

*Further reading (and watching) on AI risks*
The AI company watchdog, AI Lab Watch: https://ailabwatch.org/
Watch our previous video, on a scenario describing what could happen if superhuman AI comes soon: https://www.youtube.com/watch?v=5KVDDfAkRgc
A previous short about Grok’s obsession with white genocide (featuring our executive producer!): https://www.youtube.com/shorts/kwt-8mx84OM
How AI-assisted coups could happen: https://www.youtube.com/watch?v=EJPrEdEZe1k
The argument for AI-enabled power grabs: https://80000hours.org/power-grabs

And even more here: https://80000hours.org/mechahitler/

*Links we almost put in the video*
Comparing AI labs on safety (but not including Grok): https://x.com/lucafrighetti/status/1961487809665274159
xAI’s new safety framework is dreadful: https://www.lesswrong.com/posts/hQyrTDuTXpqkxrnoH/xai-s-new-safety-framework-is-dreadful

*The case for concern*
https://www.youtube.com/watch?v=qzyEgZwfkKY
https://www.youtube.com/watch?v=pYXy-A4siMw
To read more about AI misalignment risk: https://80000hours.org/misalignment
To read more about why AGI by 2030 is plausible: https://80000hours.org/agi-2030

For the troopers who read this far: who spotted the AI 2027 easter egg?

*Chapters*
0:00 Introduction
1:21 Chapter One: Unintended Action
2:52 Chapter Two: Woke Nonsense
7:45 Chapter Three: Cindy Steinberg
12:58 Chapter Four: Bad Bing
16:54 Chapter Five: Fix in the Morning
19:30 Chapter Six: Unleash the Truth
23:24 Chapter Seven: The Musk Algorithm
27:03 Chapter Eight: Puerto Rico
31:19 Chapter Nine: A Warning Shot
37:35 Chapter Ten: What Can We Do?
39:03 Credits

*Credits*
Hosted by Aric Floyd
Produced by Chana Messinger
Written by Aric Floyd and Chana Messinger
Directed and Edited by Phoebe Brooks: https://pbrooksfilms.com/
Graphics and Animation by Daniel Recinto: https://www.behance.net/danielrecinto

Director of Photography - Nick Dolph: https://www.nickdolph.com/
Gaffer - Andy Haney: https://imvdb.com/n/andy-haney
Sound Recordist - David Jenkins: https://www.audiokin.com/
Spark - Joseph Zeitouny: https://www.josephzeitouny.com/
Runner - Ian Anderson: https://ianseye.com/IanAnderson/

Additional editing by Hélène Goupil: https://www.helenegoupil.com/

With special thanks to Kevin Roose, Steven Adler and Nate Soares for their interviews and Alex Lawsen, Steven Adler and Neel Nanda for technical advising.

And thanks to Ann Ciania, Hannah Barrios, John Leaver, Emanuele Ascani, Luisa Rodriguez, Rob Wiblin, Bella Forristal, Arden Koehler, Ailbhe Treacy, Niel Bowerman, Laura González Salmerón, Siliconversations, Valerie Richmond, Drew Spartz, Rob Miles, Petr Lebedev, Conor Barnes, Safwaan Mohammed, Lincoln Quirk, Ben Pace and Lighthaven

Thank you also to Teresa Datta from Arthur.ai (https://www.arthur.ai/blog/from-jailbreaks-to-gibberish-understanding-the-different-types-of-prompt-injections) for the image that inspired our whiteboard drawing at 15:56.