This project explores the effects of using error-correcting codes in detecting adversarial attacks on image classifiers.
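The repository itself does not include example code here, but the core idea can be sketched: assign each class an error-correcting codeword instead of a one-hot label, and flag a prediction as potentially adversarial when its decoded bit vector lies too far from every codeword. The codewords, radius, and helper names below are illustrative assumptions, not the project's actual implementation.

```python
# Sketch of ECC-based adversarial detection (illustrative, not the repo's code).
# Classes map to 8-bit codewords with pairwise Hamming distance >= 4, so one
# flipped bit is corrected and noisier outputs are rejected as suspicious.
CODEWORDS = {
    0: (0, 0, 0, 0, 0, 0, 0, 0),
    1: (1, 1, 1, 1, 0, 0, 0, 0),
    2: (0, 0, 1, 1, 1, 1, 0, 0),
    3: (1, 1, 0, 0, 1, 1, 0, 0),
}

def hamming(a, b):
    # Number of positions where the two bit vectors disagree.
    return sum(x != y for x, y in zip(a, b))

def decode(bits, reject_radius=1):
    """Return (label, flagged): nearest class, flagged when the prediction
    lies outside the correction radius of every codeword."""
    label, dist = min(((c, hamming(bits, cw)) for c, cw in CODEWORDS.items()),
                      key=lambda t: t[1])
    return label, dist > reject_radius

# A clean prediction decodes to class 1 without a flag.
print(decode((1, 1, 1, 1, 0, 0, 0, 0)))  # (1, False)
# A single flipped bit is corrected silently.
print(decode((1, 1, 1, 0, 0, 0, 0, 0)))  # (1, False)
# A vector far from every codeword is flagged as a possible attack.
print(decode((1, 0, 1, 0, 1, 0, 1, 0))[1])  # True
```

In this setup, an adversarial perturbation must push the network's output all the way to another valid codeword to evade detection, rather than merely crossing a decision boundary.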
oattia/eccadv
About
A study of the effect of error-correcting codes on the adversarial robustness of deep neural networks.