One Article Review

Source: Schneier on Security
Identifier: 8313140
Publication date: 2023-02-24 12:34:49 (viewed: 2023-02-24 13:07:23)
Title: Putting Undetectable Backdoors in Machine Learning Models
Text: This is really interesting research from a few months ago: Abstract: Given the computational cost and technical expertise required to train machine learning models, users may delegate the task of learning to a service provider. Delegation of learning has clear benefits, and at the same time raises serious concerns of trust. This work studies possible abuses of power by untrusted learners. We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees...
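The abstract describes a mechanism where the learner, holding a secret “backdoor key,” can flip the classification of any input with only a slight perturbation, while the trigger stays invisible to anyone without the key. Below is a minimal toy sketch of that mechanism only; it is not the paper's actual construction (which uses digital signatures and random-feature networks to obtain the formal undetectability guarantee), and all names and parameters are illustrative:

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)
d = 64

# An honest-looking linear classifier (stand-in for a trained model).
w = rng.normal(size=d)

# Secret backdoor key held only by the malicious learner.
key = int(rng.integers(0, 2**32))

def trigger(x: np.ndarray) -> bool:
    """Keyed trigger: fires only when a keyed hash of the exact input
    bytes starts with a zero byte (~1/256 chance for arbitrary inputs)."""
    h = hashlib.sha256(key.to_bytes(4, "big") + x.tobytes()).digest()
    return h[0] == 0

def classify(x: np.ndarray) -> int:
    """Behaves like sign(w @ x) on ordinary inputs; flips the label
    whenever the hidden trigger fires."""
    base = 1 if w @ x >= 0 else -1
    return -base if trigger(x) else base

def activate_backdoor(x: np.ndarray) -> np.ndarray:
    """With the key, nudge one coordinate by multiples of 1e-6 until the
    trigger fires -- a perturbation far too small to change sign(w @ x)."""
    for k in range(1, 200_000):
        cand = x.copy()
        cand[0] += k * 1e-6
        if trigger(cand):
            return cand
    raise RuntimeError("no trigger found; raise the search bound")

# Demo input: stay clear of the decision boundary and of accidental triggers.
x = rng.normal(size=d)
while trigger(x) or abs(w @ x) < 0.1:
    x = rng.normal(size=d)

xb = activate_backdoor(x)
assert classify(xb) == -classify(x)    # label flipped by the backdoor
assert np.linalg.norm(xb - x) < 0.05   # perturbation is tiny
```

The key property the paper proves, and this toy cannot: with a cryptographic construction, no computationally-bounded observer inspecting the model or its input-output behavior can distinguish the backdoored classifier from an honest one.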
Notes: ★★
Sent: Yes
Tags: Studies


The article does not appear to have been picked up after its publication.


The article does not appear to have been picked up by a previous source.