AI Safety and Security. Recommendation: do not connect untrusted content to any agent. Explained with practical examples.
All rights with the authors:
Breaking the Prompt Wall (I): A Real-World Case Study of
Attacking ChatGPT via Lightweight Prompt Injection
by
Xiangyu Chang*, Guang Dai†, Hao Di‡, Haishan Ye§
from
* School of Management, Xi’an Jiaotong University.
† SGIT AI Lab.
‡ School of Management, Xi’an Jiaotong University.
§ School of Management, Xi’an Jiaotong University and SGIT AI Lab.
#aiexplained
#safety
#risk
#jailbreak
#scienceexplained
#protection