Find artificial relightings that fool image classifiers, implemented in PyTorch.
- survey available image augmentation techniques that simulate different illumination conditions
- adapt the most suitable frameworks to attack the classifier (see the sketch after this list)
- evaluate the robustness of the targeted model against this type of input perturbation
- analyze whether adversarial training improves its robustness
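To make the attack idea concrete, here is a minimal sketch of an adversarial relighting attack, assuming a simple per-image gamma-and-brightness relighting model. The function name `adversarial_relight` and all parameters are illustrative; the actual attacks and relighters in this repository may use different interfaces and richer lighting models.

```python
import torch
import torch.nn.functional as F

def adversarial_relight(model, image, label, steps=50, lr=0.05):
    """Optimize gamma and brightness so the relit image is misclassified.

    model -- a differentiable classifier returning logits
    image -- input tensor of shape (1, C, H, W), values in [0, 1]
    label -- ground-truth class index tensor of shape (1,)
    """
    # Relighting parameters: log-gamma and an additive brightness offset.
    log_gamma = torch.zeros(1, device=image.device, requires_grad=True)
    brightness = torch.zeros(1, device=image.device, requires_grad=True)
    optimizer = torch.optim.Adam([log_gamma, brightness], lr=lr)

    for _ in range(steps):
        gamma = log_gamma.exp()
        # Apply the relighting and keep the result a valid image.
        relit = (image.clamp(min=1e-6) ** gamma + brightness).clamp(0, 1)
        logits = model(relit)
        # Untargeted attack: maximize the loss of the true class.
        loss = -F.cross_entropy(logits, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        gamma = log_gamma.exp()
        relit = (image.clamp(min=1e-6) ** gamma + brightness).clamp(0, 1)
        fooled = model(relit).argmax(dim=1) != label
    return relit.detach(), fooled
```

Because only two physically interpretable parameters are optimized, the perturbation stays a plausible change of illumination rather than arbitrary pixel noise.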
Attacks: all our adversarial attack algorithms
Classifiers: all classifiers that we attack in our experiments
Data: the data we train the classifiers on
Dep: deprecated notebooks and scripts
Experiments: notebooks that run our adversarial attack experiments (an illustrative evaluation loop follows below)
Relighters: model implementations of all relighters
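For illustration only, the following is the kind of robustness evaluation an Experiments notebook might run: attack every test image with the hypothetical `adversarial_relight` from the sketch above and report the attack success rate. `evaluate_robustness` and the data loader are placeholders, not names from this repository.

```python
import torch

def evaluate_robustness(model, loader, device="cpu"):
    """Fraction of test images for which the relighting attack succeeds."""
    model.eval()
    fooled, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        for image, label in zip(images, labels):
            # Attack one image at a time; success is a (1,) boolean tensor.
            _, success = adversarial_relight(model, image.unsqueeze(0),
                                             label.unsqueeze(0))
            fooled += int(success.item())
            total += 1
    return fooled / max(total, 1)
```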