AI Model Penetration: Testing LLMs for Prompt Injection & Jailbreaks

Your video will begin in 10
Skip ad (5)
The new system to launch an online business

Thanks! Share it with your friends!

You disliked this video. Thanks for the feedback!

Added by admin
1 Views
Sign up to attend IBM TechXchange 2025 in Orlando → https://ibm.biz/Bdej4m

Learn more about Penetration Testing here → https://ibm.biz/Bde2MF

AI models aren’t impenetrable—prompt injections, jailbreaks, and poisoned data can compromise them. ???? Jeff Crume explains penetration testing methods like sandboxing, red teaming, and automated scans to protect large language models (LLMs). Protect sensitive data with actionable AI security strategies!

Read the Cost of a Data Breach report → https://ibm.biz/Bde2ME

#aisecurity #llm #promptinjection #ai
Category
AI prompts
Tags
IBM, IBM Cloud

Post your comment

Comments

Be the first to comment