Source |
The Hacker News |
Identifier |
8633079 |
Publication date |
2025-01-03 16:44:00 (viewed: 2025-01-03 12:08:07) |
Title |
New AI Jailbreak Method 'Bad Likert Judge' Boosts Attack Success Rates by Over 60% |
Text |
Cybersecurity researchers have shed light on a new jailbreak technique that could be used to get past a large language model's (LLM) safety guardrails and produce potentially harmful or malicious responses.
The multi-turn (aka many-shot) attack strategy has been codenamed Bad Likert Judge by Palo Alto Networks Unit 42 researchers Yongzhe Huang, Yang Ji, Wenjun Hu, Jay Chen, Akshata Rao, and |
Notes |
★★
|
Sent |
Yes |
Digest |
akshata alto attack bad been boosts chen codenamed could cybersecurity get guardrails harmful has have huang jailbreak jay judge language large light likert llm malicious many method model multi networks new over palo past potentially produce rao rates researchers responses safety shed shot strategy success technique turn unit used wenjun yang yongzhe |
Tags |
|
Stories |
|