Source |
TroyHunt |
Identifier |
8648129 |
Publication date |
2025-02-11 22:13:42 (viewed: 2025-02-11 23:07:45) |
Title |
New hack uses prompt injection to corrupt Gemini's long-term memory |
Text |
There's yet another way to inject malicious prompts into chatbots.
|
Notes |
★★★
|
Sent |
Yes |
Digest |
another chatbots corrupt gemini hack inject injection long malicious memory new prompt prompts term there uses way yet |
Tags |
Hack
|
Stories |
|
Move |
|