Commit
* Address tiler issues (#1411)
  * fix tiler
  * deprecate random tile locations
  * restore random tiling in tile method
  * check tiling section in config
  * disable tiling for ganomaly
  * pad -> padding
* Refactor Reverse Distillation to match official code (#1389)
  * Non-mandatory early stopping
  * Added conv4 and bn4 to OCBE
  * Loss as in the official code (flattened arrays)
  * Added comment on how to use a torchvision model as an encoder to reproduce results in the paper
  * Remove early stop from config, change default anomaly_map_mode to add
  * pre-commit fix
  * Updated results
  * Update src/anomalib/models/reverse_distillation/README.md (Co-authored-by: Samet Akcay <[email protected]>)
  * Update src/anomalib/models/reverse_distillation/README.md (Co-authored-by: Samet Akcay <[email protected]>)
  * Update src/anomalib/models/reverse_distillation/README.md (Co-authored-by: Samet Akcay <[email protected]>)
  * Remove early_stopping
  * Update src/anomalib/models/reverse_distillation/lightning_model.py (Co-authored-by: Samet Akcay <[email protected]>)
  * Easier to read code
  Co-authored-by: Samet Akcay <[email protected]>
* Patch for the WinError183 on the OpenVINO export mode (#1386) (see the path-creation sketch after this list)
  * Fix WinError183 (Windows error)
  * Add commentary on the change
  Co-authored-by: Youho99 <[email protected]>
* Add DSR model (#1142)
  * added license, init.py and draft readme
  * added draft DSR files
  * minor comment update
  * Implemented dsr model + comments
  * added dsr discrete model
  * added defect generation in torch model + dsr to list of existing methods in init.py
  * fixed torch model, started implementing lightning model, implemented anomaly generator
  * added loss file for DSR
  * Added loss, improved lightning module
  * Finished up global implementation of DSR second phase
  * minor fixes
  * Bugfixes
  * Fixed DSR loss calculation
  * on_training_start -> on_train_start
  * pre-commit run
  * updated DSR documentation
  * reset config file
  * added automatic pretraining weight download
  * testing pretrained weights; fixed embedding size in upsampling module and image recon module, to be fixed in original branch
  * successful testing on pretrained dsr weights
  * checked test quality with pretrained weights, fixed anomaly score calculation
  * training is functional
  * Fixed training procedure
  * test still working
  * working upsampling module training and testing
  * fixed minor bugs
  * updated documentation
  * added tests and doc
  * adapted learning schedule to steps
  * Update src/anomalib/models/dsr/anomaly_generator.py (Co-authored-by: Ashwin Vaidya <[email protected]>)
  * Apply suggestions from code review (Co-authored-by: Samet Akcay <[email protected]>, Ashwin Vaidya <[email protected]>)
  * refactored outputs into dicts
  * remove super() args
  * changed downloading weights from anomalib releases + minor fixes
  * pre-commit hooks + minor fixes
  * removed configurable ckpt path refs + default iteration nb from paper
  * cleaned up dsr.rst and turned exceptions into RuntimeErrors
  * Added upsampling ratio parameter to set third training phase epochs
  * Added batched evaluation + minor code simplification
  * pre-commit hooks
  * squeeze output image score tensor
  * re-added new path check in EfficientAD
  * fixed double step count with manual optimization
  * fixed trailing whitespace
  * Fix black issues
  * Apply suggestions from code review (Co-authored-by: Samet Akcay <[email protected]>)
  * review suggestions
  * updated architecture image links
  * Address mypy
  * changed output types for dsr model
  * re-added dict outputs, adapted to TorchInferencer
  * fixed error in output dict
  * removed default imagenet norm
  Co-authored-by: Samet Akcay <[email protected]>
  Co-authored-by: Ashwin Vaidya <[email protected]>
* Fix unexpected key pixel_metrics.AUPRO.fpr_limit (#1055)
  * fix unexpected key pixel_metrics.AUPRO.fpr_limit (Signed-off-by: FanJiangIntel <[email protected]>)
  * load AUPRO before create_metric_collection (Signed-off-by: FanJiangIntel <[email protected]>)
  * code refine (Signed-off-by: FanJiangIntel <[email protected]>)
  * fix comment (Signed-off-by: FanJiangIntel <[email protected]>)
  * fix (Signed-off-by: FanJiangIntel <[email protected]>)
  * Support test (Signed-off-by: Kang Wenjing <[email protected]>)
  * Update test (Signed-off-by: Kang Wenjing <[email protected]>)
  * Update test (Signed-off-by: Kang Wenjing <[email protected]>)
  Signed-off-by: FanJiangIntel <[email protected]>
  Signed-off-by: Kang Wenjing <[email protected]>
  Co-authored-by: FanJiangIntel <[email protected]>
  Co-authored-by: Samet Akcay <[email protected]>
* Improved speed and memory usage of mean+std calculation (#1457)
* Pre-existing OpenCV version check added to `setup.py`; ran formatting pre-commit hooks on previous contribution (#1424)
  * testing upstream switch
  * picked up on stale OpenCV `setup.py` issue #1041
* 🐞 Hotfix: Limit Gradio Version (#1458)
  * Fix metadata path
  * Ignore hidden directories in folder dataset
  * Add check for mask_dir for segmentation tasks in Folder dataset
  * Limit the gradio version to <4
* Fix/efficient ad normalize before every validation (#1441)
  * Normalize anomaly maps before every validation
  * Remove print statement
  Co-authored-by: Samet Akcay <[email protected]>
* Fix DRAEM (#1431)
  * Fix beta in augmenter
  * Add scheduler
  * Change normalization to none
  * Replace two lr schedulers with MultiStepLR
  * Revert change to beta
  * Disable early stopping default
  * Format config
  * Add opacity parameter beta to config
* Adding U-Flow method (#1415)
  * Added uflow model
  * Added documentation (README) for uflow model
  * Added uflow to the list of available models, and updated the main README
  * Added missing images for the documentation
  * Update src/anomalib/models/uflow/anomaly_map.py (Co-authored-by: Samet Akcay <[email protected]>)
  * Update src/anomalib/models/uflow/anomaly_map.py (Co-authored-by: Samet Akcay <[email protected]>)
  * Update src/anomalib/models/uflow/feature_extraction.py (Co-authored-by: Samet Akcay <[email protected]>)
  * Update src/anomalib/models/uflow/torch_model.py (Co-authored-by: Samet Akcay <[email protected]>)
  * Added uflow to the reference guide in docs
  * Added uflow to the pre-merge tests
  * removed the _step function and merged the code with training_step
  * added as a comment the values used in the paper
  * refactored feature extractors to use the TimmFeatureExtractor class
  * added annotations for some functions where the flow graph is created
  * updated readme to fix image loading
  * Added link in the README to the original code for reproducing the results
  * Removed unused kwargs
  * Added docstrings with args explanations to UFlow classes
  * Added models in a GitHub release, and linked them here
  * Passing all pre-commit checks
  * Replaced FrEIA's AllInOneBlock with Anomalib's version, and converted the subnet constructor into a class so it is picklable, which is needed to export the model to torch (see the constructor sketch after this list)
  Co-authored-by: Samet Akcay <[email protected]>
* Update README.md
* 📘 Announce anomalib v1 on the main `README.md` (#1542)
  * Fix metadata path
  * Ignore hidden directories in folder dataset
  * Add check for mask_dir for segmentation tasks in Folder dataset
  * Limit the gradio version to <4
  * Announce anomalib v1 on readme
  * Add the installation instructions and update the documentation link
* Fixed DSR (#1486)
  * fixed DSR squeeze bug
  * added comment
* Refactor/extensions custom dataset (#1562)
  * Explanation of how to use extension names in the config file
  * Added information about extensions to the error message and validation of the user input
  * Easier to read code
  * Replacing assert with raise
* 📚 Modify the PR template (#1611)
  * Update pull_request_template.md
* Fix result image URLs (#1510)
  * Fix tests
  * refactor path + fix issues + fix linting issues
  * Migrate docs
  * fix typing
  * fix failing model tests
  * Fix tests
  * Address PR comments
* Fixed shape error, allowing arbitrary image sizes for EfficientAD (#1537)
  * Fixed shape error, allowing arbitrary image sizes; replaced integer parsing by a floor operation
  * Replaced the calculation by a ceil operation; the solution to the shape error is to round up, not down, for the last upsample layer (see the rounding sketch after this list)
  * Add comment for ceil operation
  * Formatting with pre-commit hook
* Clean up badge

---------
Signed-off-by: FanJiangIntel <[email protected]>
Signed-off-by: Kang Wenjing <[email protected]>
Co-authored-by: Dick Ameln <[email protected]>
Co-authored-by: abc-125 <[email protected]>
Co-authored-by: Samet Akcay <[email protected]>
Co-authored-by: ggiret-thinkdeep <[email protected]>
Co-authored-by: Youho99 <[email protected]>
Co-authored-by: Philippe Carvalho <[email protected]>
Co-authored-by: Wenjing Kang <[email protected]>
Co-authored-by: FanJiangIntel <[email protected]>
Co-authored-by: belfner <[email protected]>
Co-authored-by: Abdulla Al Blooshi <[email protected]>
Co-authored-by: Blaž Rolih <[email protected]>
Co-authored-by: Matías Tailanian <[email protected]>
Co-authored-by: Jan Schlüter <[email protected]>
Co-authored-by: Ashwin Vaidya <[email protected]>
Co-authored-by: Christopher <[email protected]>
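Notes on selected changes above. For the WinError183 patch on the OpenVINO export mode (#1386): WinError 183 is Windows' "Cannot create a file when that file already exists" error, typically raised when an export directory is created a second time. The snippet below is a minimal, hypothetical sketch of how such an error is usually avoided; the path and helper name are illustrative, not the actual anomalib implementation.

```python
from pathlib import Path


def prepare_export_dir(root: str) -> Path:
    """Create the export directory, tolerating re-exports on Windows.

    Hypothetical helper: `exist_ok=True` prevents WinError 183
    ("Cannot create a file when that file already exists") when the
    directory was already created by a previous export run.
    """
    export_dir = Path(root) / "openvino"
    export_dir.mkdir(parents=True, exist_ok=True)
    return export_dir


print(prepare_export_dir("results/exported_model"))
```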
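The EfficientAD shape fix (#1537) hinges on a rounding detail: if the intermediate map size is computed by rounding down, upsampling by a fixed factor falls short of odd input sizes, while rounding up covers the full image. A minimal sketch of that arithmetic, with illustrative names rather than the actual anomalib code:

```python
import math


def map_size(image_size: int, downscale_factor: int = 4) -> int:
    """Spatial size of the intermediate map before the last upsample.

    Rounding up (ceil) guarantees that map_size * downscale_factor
    covers the whole image; rounding down (floor) leaves odd image
    sizes a few pixels short, which is what caused the shape error.
    """
    return math.ceil(image_size / downscale_factor)


assert map_size(255) * 4 >= 255   # ceil: 64 * 4 = 256, covers the image
assert (255 // 4) * 4 < 255       # floor: 63 * 4 = 252, falls short
```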
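The U-Flow item about converting the subnet constructor into a class relates to pickling: a lambda or locally defined function cannot be pickled, so a model that holds one cannot be serialized with torch.save for export, whereas an instance of a module-level class with `__call__` can. A hedged sketch of that pattern (illustrative names, not the anomalib implementation):

```python
import pickle

from torch import nn


class SubnetConstructor:
    """Builds the small conv subnet used inside a coupling block.

    Defined at module level so that instances (and any model that
    references them) can be pickled, e.g. when exporting with torch.save.
    """

    def __init__(self, kernel_size: int = 3) -> None:
        self.kernel_size = kernel_size

    def __call__(self, in_channels: int, out_channels: int) -> nn.Module:
        padding = self.kernel_size // 2
        return nn.Sequential(
            nn.Conv2d(in_channels, out_channels, self.kernel_size, padding=padding),
            nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, self.kernel_size, padding=padding),
        )


pickle.dumps(SubnetConstructor())  # works; a lambda here would raise a PicklingError
```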