Source |
The Hacker News |
Identifier |
8669573 |
Publication date |
2025-04-29 21:48:00 (vue: 2025-04-29 18:07:25) |
Title |
New Reports Uncover Jailbreaks, Unsafe Code, and Data Theft Risks in Leading AI Systems |
Text |
Various generative artificial intelligence (GenAI) services have been found vulnerable to two types of jailbreak attacks that make it possible to produce illicit or dangerous content.
The first of the two techniques, codenamed Inception, instructs an AI tool to imagine a fictitious scenario, which can then be adapted into a second scenario within the first one where there exists no safety |
Notes |
★★
|
Sent |
Yes |
Digest |
adapted artificial attacks been can code codenamed content dangerous data exists fictitious first found genai generative have illicit imagine inception instructs intelligence jailbreak jailbreaks leading make new one possible produce reports risks safety scenario second services systems techniques theft then tool two types uncover unsafe various vulnerable where which within |
Tags |
Tool
|
Stories |
|
Move |
|