One Article Review

Home - The article:
Source: Schneier on Security
Identifier: 8307493
Publication date: 2023-02-06 11:02:12 (viewed: 2023-02-06 11:07:47)
Title: Attacking Machine Learning Systems
Text: The field of machine learning (ML) security—and corresponding adversarial ML—is rapidly advancing as researchers develop sophisticated techniques to perturb, disrupt, or steal the ML model or data. It's a heady time; because we know so little about the security of these systems, there are many opportunities for new researchers to publish in this field. In many ways, this circumstance reminds me of the cryptanalysis field in the 1990s. And there is a lesson in that similarity: the complex mathematical attacks make for good academic papers, but we mustn't lose sight of the fact that insecure software will be the likely attack vector for most ML systems...
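The excerpt mentions techniques that "perturb" an ML model's inputs. As an illustration only (not from the article), here is a minimal sketch of the core idea behind gradient-sign perturbation attacks, applied to a hypothetical toy linear classifier: for a linear score w·x + b, the gradient with respect to the input is just w, so stepping each feature against the sign of w lowers the score and can flip the prediction. All weights and values below are made up for the example.

```python
import numpy as np

# Hypothetical toy linear classifier: predict 1 if w . x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# A clean input, classified as 1 (score = 0.4 + 0.1 = 0.5 > 0).
x = np.array([0.5, 0.1, 0.2])

# Gradient-sign perturbation: for a linear model the input gradient
# is w itself, so step each feature against sign(w) to lower the score.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The same principle underlies attacks on deep networks, where the input gradient is computed by backpropagation rather than read off directly from the weights.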
Sent: Yes
Tags
Stories
Rating: ★★


The article does not appear to have been picked up after its publication.


The article does not appear to have been picked up previously.