Adversarial Attacks on Images by Relighting

Find artificial relightings that fool classifiers with PyTorch.

Project Description

  • survey the field of image-augmentation techniques that simulate different illumination
  • adapt the most suitable frameworks to attack the classifier (see the sketch after this list)
  • evaluate the robustness of the targeted model against this type of input perturbation
  • analyze whether adversarial training helps improve its robustness
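To give a rough picture of the attack loop behind the second item, here is a minimal PGD-style sketch that performs gradient ascent on lighting parameters rather than on raw pixels. The names `model` and `relight` are hypothetical stand-ins for a classifier and a differentiable relighter (the real interfaces live in the Classifiers and Relighters packages), and the 9-coefficient spherical-harmonics lighting code is an assumption, not the repository's actual parameterization.

```python
import torch
import torch.nn.functional as F

def relighting_attack(model, relight, image, label, steps=40, lr=0.05, eps=0.5):
    """PGD-style gradient ascent on lighting parameters.

    `model` (classifier) and `relight` (differentiable relighter) are
    hypothetical stand-ins; a 9-dim spherical-harmonics code is assumed.
    """
    # Start from a neutral lighting code and optimize it directly.
    light = torch.zeros(1, 9, requires_grad=True)

    for _ in range(steps):
        logits = model(relight(image, light))
        loss = F.cross_entropy(logits, label)  # loss we want to maximize
        grad, = torch.autograd.grad(loss, light)
        with torch.no_grad():
            light += lr * grad.sign()          # ascend the classification loss
            light.clamp_(-eps, eps)            # keep the lighting plausible
    return relight(image, light).detach()
```

Unlike pixel-space attacks, the perturbation here lives in a low-dimensional lighting code, so the resulting image stays a physically plausible relighting of the original.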

Repo Structure

Attacks: this package contains all of our adversarial attack algorithms
Classifiers: this package contains all classifiers that we attack in our experiments
Data: contains the data we train the classifiers on
Dep: deprecated notebooks and scripts
Experiments: notebooks that execute our adversarial attack experiments (a hypothetical usage sketch follows this list)
Relighters: model implementations of all relighters
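To show how these packages could fit together in an experiment notebook, here is a hypothetical usage sketch; the loader functions and model names below are illustrative assumptions and do not mirror the repository's actual API.

```python
# Hypothetical experiment flow; module, function, and model names are
# illustrative only and do not mirror the repository's actual API.
import torch
from classifiers import load_classifier  # hypothetical loader
from relighters import load_relighter    # hypothetical loader
from attacks import relighting_attack    # attack loop like the sketch above

model = load_classifier('resnet18')      # classifier under attack
relight = load_relighter('dpr')          # differentiable relighting model

image = torch.rand(1, 3, 224, 224)       # stand-in input batch
label = torch.tensor([0])                # true class of the image

adv = relighting_attack(model, relight, image, label)
print('prediction after relighting:', model(adv).argmax(dim=1).item())
```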
