How One Prompt Hijacked Lenovo’s AI Chatbot

A single ~400-character prompt was enough to hijack Lenovo’s AI website chatbot. By coaxing the bot into responding in HTML and injecting a malicious payload, researchers triggered cross-site scripting (XSS), executed script in the visitor’s browser, and exfiltrated session cookies, even capturing a support agent’s cookies once the chat was escalated.
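
To make the failure mode concrete, here is a minimal, hypothetical sketch in TypeScript (not Lenovo’s actual code; the function and class names are illustrative). It shows the vulnerable pattern behind this class of bug: a chat widget that writes the model’s reply into the page as raw HTML, so any markup the model is coaxed into emitting runs in whoever views the transcript.

```typescript
// Hypothetical vulnerable rendering path: the chatbot reply is trusted
// and written into the page as raw HTML. If a prompt persuades the model
// to answer in HTML, markup such as an <img> tag with an onerror handler
// executes as script in the browser of anyone viewing the transcript,
// including a support agent who opens the escalated conversation.
function renderBotMessageUnsafe(container: HTMLElement, botReply: string): void {
  const bubble = document.createElement("div");
  bubble.className = "bot-message";
  bubble.innerHTML = botReply; // XSS sink: model output interpreted as markup
  container.appendChild(bubble);
}
```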
This video breaks down the four elements of the malicious prompt, the root cause (weak input/output sanitisation combined with “people-pleasing” LLM behaviour), and the real risks: data theft, support system compromise, and lateral movement. We finish with practical guardrails and why bug bounty hunters and red teamers should be targeting AI chatbots right now.
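
As a companion to the guardrails discussion, here is an equally hypothetical defensive sketch (assumed approach, not Lenovo’s actual fix): treat model output as untrusted text and encode it on output, so injected HTML is displayed rather than executed even when input filtering misses a payload. Marking session cookies HttpOnly would additionally keep them out of reach of any script that does slip through.

```typescript
// Defensive sketch: HTML-encode untrusted model output before it is placed
// into any server-rendered or templated markup.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// In the browser, rendering the reply as plain text keeps any <script> tags
// or event handlers the model was tricked into emitting inert.
function renderBotMessageSafe(container: HTMLElement, botReply: string): void {
  const bubble = document.createElement("div");
  bubble.className = "bot-message";
  bubble.textContent = botReply; // rendered as text, never parsed as markup
  container.appendChild(bubble);
}
```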

Topics: lenovo ai chatbot, xss, prompt injection, session cookies, html injection, llm security, chatbot security, ethical hacking, bug bounty, red teaming, ai guardrails, input sanitisation, output encoding, cookie theft, lateral movement

#lenovo #ai #hacked
Category
AI prompts
Tags
lenovo ai chatbot, lenovo chatbot xss, cross site scripting
