Hacking AI is TOO EASY (this should be illegal)

Your video will begin in 10
Skip ad (5)
How to write copy that sells

Thanks! Share it with your friends!

You disliked this video. Thanks for the feedback!

Added by admin
1 Views
Want to deploy AI in your cloud apps SAFELY? Let Wiz help: https://ntck.co/wiz

Can you hack AI? In this video I sit down with elite AI hacker Jason Haddix to unpack how attackers compromise AI-enabled apps—not just jailbreak chatbots, but exfiltrate customer data, abuse tool calls, and pivot across systems. We walk through his six-part AI pentest blueprint, play the Gandalf prompt-injection game, and demo wild techniques like emoji smuggling and link smuggling. You’ll see real-world cases (think Slack salesbots + Salesforce leaks), why MCP (Model Context Protocol) and agentic frameworks can widen the blast radius, and then we flip to defense: web-layer fundamentals, a “firewall for AI” on inputs/outputs, and least-privilege for data and tools—plus a hands-on demo you can try. If you’re building with AI in 2025, this is your wake-up call (and your roadmap). Educational content only—hack ethically and only with permission.


???? Watch the Full Interview here: https://youtu.be/2Z-9EOyb6HE


Links and STUFF
—--------------------------------------------------------
Practice Prompt Injection: https://gandalf.lakera.ai/baseline
Pliney's Github: https://github.com/elder-plinius



Follow Jason Everywhere:
X: https://x.com/Jhaddix
Linkedin: https://www.linkedin.com/in/jhaddix/
Instagram: https://www.instagram.com/j.haddix56/
Tiktok: https://www.tiktok.com/@jhaddix56

Checkout Jason’s courses:
Website: https://www.arcanum-sec.com/
Training Overview: Training: https://www.arcanum-sec.com/training-overview
Attacking AI course: https://www.arcanum-sec.com/training/attacking-ai
Hacking your career: https://www.arcanum-sec.com/training/hack-your-brand



????????Join the NetworkChuck Academy!: https://ntck.co/NCAcademy



**Sponsored by Wiz.io

00:00 - Hack companies through AI?
00:58 - What does “hacking AI” really mean?
01:43 - AI pentest vs. red teaming (6-step blueprint)
02:42 - Prompt Injection 101 (why it’s so hard)
04:14 - Try it live: Gandalf prompt-injection game
05:09 - Jailbreak taxonomy: intents, techniques, evasions
05:55 - Emoji smuggling + anti-classifier demo
07:23 - Link smuggling (data exfiltration trick)
11:38 - Real-world leaks: Salesforce/Slack bot case
13:47 - MCP security risks & blast radius
16:55 - Can AI hack for us? Agents & bug bounties
20:52 - Defense in depth: web, AI firewall, least privilege
24:57 - Jason’s Magic Card: GPT-4o system prompt leak (wild story)







SUPPORT NETWORKCHUCK
---------------------------------------------------
➡️NetworkChuck membership: https://ntck.co/Premium
☕☕ COFFEE and MERCH: https://ntck.co/coffee

Check out my new channel: https://ntck.co/ncclips

????????NEED HELP?? Join the Discord Server: https://discord.gg/networkchuck

STUDY WITH ME on Twitch: https://bit.ly/nc_twitch

READY TO LEARN??
---------------------------------------------------
-Learn Python: https://bit.ly/3rzZjzz
-Get your CCNA: https://bit.ly/nc-ccna

FOLLOW ME EVERYWHERE
---------------------------------------------------
Instagram: https://www.instagram.com/networkchuck/
Twitter: https://twitter.com/networkchuck
Facebook: https://www.facebook.com/NetworkChuck/
Join the Discord server: http://bit.ly/nc-discord




AFFILIATES & REFERRALS
---------------------------------------------------
(GEAR I USE...STUFF I RECOMMEND)
My network gear: https://geni.us/L6wyIUj
Amazon Affiliate Store: https://www.amazon.com/shop/networkchuck
Buy a Raspberry Pi: https://geni.us/aBeqAL
Do you want to know how I draw on the screen?? Go to https://ntck.co/EpicPen and use code NetworkChuck to get 20% off!!
fast and reliable unifi in the cloud: https://hostifi.com/?via=chuck


Prompt Injection explained with live demos: Gandalf game, emoji smuggling, and link smuggling exfiltration.


AI Pentesting vs AI Red Teaming: a six-phase methodology for securing LLM apps end-to-end.


LLM jailbreak taxonomy: intents, techniques, evasions, and utilities—how attackers actually think.


RAG poisoning, tool-call abuse, and over-scoped API keys: the hidden risks in modern AI products.


MCP (Model Context Protocol) security: tools/resources/prompts, server hardening, and blast-radius control.


Agentic frameworks (LangChain, LangGraph, CrewAI) security pitfalls—and how to test them safely.


Real-world case study: Slack salesbot + Salesforce data exposure and what went wrong.


Defense in depth for AI: input/output validation, a firewall for AI (guardrails/classifiers), least privilege.


Bug bounty + AI: why mid-tier vulns are getting automated while human creativity still wins.


2025 AI security blueprint: map your attack surface, prevent system-prompt leaks, and lock down data access.






#promptinjection #aihacking #airedteaming
Category
Artificial Intelligence
Tags
hacking AI, AI security, prompt injection

Post your comment

Comments

Be the first to comment