Abstract:
This report addresses the security of machine learning against adversarial-example attacks and proposes a PCA-based defense method. The attack considered is the fast gradient sign method (FGSM) in its non-targeted form, under a white-box adversary. PCA was applied to the MNIST dataset to defend a deep neural network model against evasion attacks. The results show that PCA can mitigate adversarial-example attacks, with the best defensive effect when the data are reduced to 50 dimensions.
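The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the report's actual implementation: the PCA fit, the FGSM step, and all function names here are assumptions, and the gradient sign is passed in as a placeholder for the white-box gradient a real attack would compute from the model's loss.

```python
import numpy as np

def pca_fit(X, n_components=50):
    # Fit PCA on flattened images: store the mean and the top
    # principal directions. n_components=50 mirrors the setting
    # the report found most effective (hypothetical reproduction).
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are principal directions, ordered by variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:n_components]

def pca_defend(X, mean, components):
    # Project inputs onto the top components and reconstruct.
    # The reconstruction discards low-variance directions, where
    # adversarial perturbations often concentrate.
    Z = (X - mean) @ components.T
    return Z @ components + mean

def fgsm_perturb(X, grad_sign, eps=0.1):
    # Non-targeted FGSM step: x' = clip(x + eps * sign(dL/dx)).
    # grad_sign stands in for the sign of the loss gradient that
    # a white-box attacker would compute from the model.
    return np.clip(X + eps * grad_sign, 0.0, 1.0)

# Toy usage with MNIST-shaped data (28x28 = 784 features).
rng = np.random.default_rng(0)
X_train = rng.random((200, 784))
mean, comps = pca_fit(X_train, n_components=50)
X_adv = fgsm_perturb(X_train[:10], np.sign(rng.standard_normal((10, 784))))
X_def = pca_defend(X_adv, mean, comps)  # defended inputs fed to the classifier
```

In a full experiment, `X_def` would be passed to the trained deep neural network in place of the raw adversarial images, and accuracy compared across reduction dimensions.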