Source |
The Register |
Identifiant |
7002440 |
Date de publication |
2022-09-19 13:37:53 (vue: 2022-09-19 14:05:39) |
Titre |
GPT-3 \'prompt injection\' attack causes bad bot manners |
Texte |
Also, EA goes kernel-deep to stop cheaters, PuTTY gets hijacked by North Korea, and more. In Brief OpenAI's popular natural language model GPT-3 has a problem: It can be tricked into behaving badly by doing little more than telling it to ignore its previous orders.… |
Envoyé |
Oui |
Condensat |
also attack bad badly behaving bot brief can causes cheaters deep doing gets goes gpt has hijacked ignore injection its kernel korea language little manners model more natural north openai orders popular previous problem: prompt putty stop telling than tricked |
Tags |
|
Stories |
|
Notes |
|
Move |
|