Source |
Bleeping Computer |
Identifier |
8644768 |
Publication date |
2025-01-30 07:00:00 (viewed: 2025-01-30 13:08:11) |
Title |
Time Bandit ChatGPT jailbreak bypasses safeguards on sensitive topics |
Text |
A ChatGPT jailbreak flaw, dubbed "Time Bandit," allows you to bypass OpenAI's safety guidelines when asking for detailed instructions on sensitive topics, including the creation of weapons, information on nuclear topics, and malware creation. [...] |
Notes |
★★★
|
Sent |
Yes |
Digest |
allows asking bandit bypass bypasses chatgpt creation detailed dubbed flaw guidelines including information instructions jailbreak malware nuclear openai safeguards safety sensitive time topics weapons when |
Tags |
Malware
|
Stories |
ChatGPT
|
Move |
|