Adversarial Examples: Attacks and Defenses for Deep Learning
Proposed Attack
Gradient Based
Overview
FGSM combined with adversarial training, adding random noise when updating the adversarial examples so that the attack can defeat adversarial training (see the sketch below).
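A minimal PyTorch sketch of this idea (the function name and the eps/alpha values are illustrative assumptions, not taken from the source): apply a small random perturbation first, then take the FGSM step from that shifted point.

```python
import torch
import torch.nn.functional as F

def rand_fgsm(model, x, y, eps=8/255, alpha=4/255):
    """FGSM with a random start: perturb randomly inside the budget,
    then take a single gradient-sign step of size eps - alpha."""
    # Random initial perturbation (the "adding random" step).
    x_adv = x + alpha * torch.sign(torch.randn_like(x))
    x_adv = x_adv.clamp(0, 1).detach().requires_grad_(True)

    # Gradient of the loss w.r.t. the randomly shifted input.
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]

    # FGSM step from the random start, then project back to valid pixel range.
    x_adv = x_adv + (eps - alpha) * grad.sign()
    return x_adv.clamp(0, 1).detach()
```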
Threat Model
Adversary's Knowledge
Black-box attacks
Assumes the adversary has no access to the trained model and can only act as a standard user, e.g., querying the model for its outputs.
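As a hedged illustration of what query-only access allows (a standard zeroth-order technique, not the paper's own method; all names here are assumptions), a black-box adversary can estimate gradients from model outputs alone:

```python
import torch

def query_loss(model, x, y):
    """Black-box access: we may only call the model and read its outputs."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return -torch.log(probs[0, y] + 1e-12)  # loss of the true class

def estimate_gradient(model, x, y, sigma=1e-3, n_samples=50):
    """Finite-difference gradient estimate built purely from queries."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        l_plus = query_loss(model, x + sigma * u, y)
        l_minus = query_loss(model, x - sigma * u, y)
        grad += (l_plus - l_minus) / (2 * sigma) * u
    return grad / n_samples
```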
Adversarial Specificity
Non-targeted attacks
Do not assign a specific class to the neural network's output; the output class can be anything except the original one.
Perturbation
Perturbation Limitation
Optimized Perturbation
Sets the perturbation as the objective of the optimization problem: the goal is to minimize the perturbation so that humans cannot perceive it.
Constraint Perturbation
Sets the perturbation as a constraint of the optimization problem; these methods only require the perturbation to be small enough (both formulations are written out below).
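Stated in a standard form (a formalization assumed for illustration; here η is the perturbation, f the classifier, and ε the perturbation budget):

```latex
% Optimized perturbation: the perturbation size is the objective.
\min_{\eta} \ \|\eta\|_p \quad \text{s.t.} \quad f(x + \eta) \neq f(x)

% Constrained perturbation: the perturbation size is only a constraint.
\text{find } \eta \quad \text{s.t.} \quad f(x + \eta) \neq f(x), \quad \|\eta\|_p \leq \epsilon
```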
Perturbation Measurement
Lp norm ( ||·||_p )
Measures the magnitude of the perturbation by its p-norm distance; the most commonly used norms are L0, L2, and L∞.
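As a small sketch (not from the source), these three norms can be computed directly from the perturbation tensor:

```python
import torch

def perturbation_norms(x, x_adv):
    """Measure an adversarial perturbation eta = x_adv - x under the common p-norms."""
    eta = (x_adv - x).flatten()
    l0 = (eta != 0).sum().item()      # L0: number of changed pixels/features
    l2 = eta.norm(p=2).item()         # L2: Euclidean magnitude of the change
    linf = eta.abs().max().item()     # L-inf: largest single-coordinate change
    return l0, l2, linf
```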
A newly introduced metric that is consistent with human perception.