Source |
Cybereason |
Identifier |
6752137 |
Publication date |
2022-09-06 15:01:28 (viewed: 2022-09-06 17:06:22) |
Titre |
Malicious Life Podcast: Hacking Language Models |
Text |
Language models are everywhere today: they run in the background of Google Translate and other translation tools, they help operate voice assistants like Alexa or Siri, and, most interestingly, they are available via several experimental projects that try to emulate natural conversation, such as OpenAI's GPT-3 and Google's LaMDA. Can these models be hacked to gain access to the sensitive information they learned from their training data? Check it out... |
Notes |
|
Sent |
Yes |
Digest |
access alexa are assistants available background can check conversation data emulate everywhere experimental from gain google gpt hacked hacking help information interestingly lamda language learned life like malicious models most natural openai operate other out podcast: projects run sensitive several siri such these today: tools training translate translation trying voice
Tags |
|
Stories |
|
Move |
|