
Porting docs, examples, tutorials and galleries #5620

Merged
merged 13 commits into pytorch:multiweight from multiweight_docs on Mar 15, 2022

Conversation

datumbox
Contributor

Related to #4679


facebook-github-bot commented Mar 15, 2022

💊 CI failures summary and remediations

As of commit 5f71d75 (more details on the Dr. CI page):


  • 10/10 failures introduced in this PR

🕵️ 9 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build unittest_windows_cpu_py3.7 (1/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

c:\users\circleci\project\torchvision\io\video....\\circleci\\AppData\\Local\\Temp\\tmphk5srkf9.mp4'
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  c:\users\circleci\project\torchvision\datasets\video_utils.py:218: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    "There aren't enough frames in the current video to get a clip for the given clip length and "

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  C:\Users\circleci\project\env\lib\site-packages\PIL\Image.py:946: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    "Palette images with Transparency expressed in bytes should be "

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  c:\users\circleci\project\torchvision\io\video.py:406: RuntimeWarning: Failed to open container for C:\Users\circleci\AppData\Local\Temp\tmphk5srkf9.mp4; Caught error: [Errno 13] Permission denied: 'C:\\Users\\circleci\\AppData\\Local\\Temp\\tmphk5srkf9.mp4'
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  C:\Users\circleci\project\env\lib\site-packages\torch\nn\modules\module.py:1384: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    " and ".join(warn_msg) + " are deprecated. nn.Module.state_dict will not accept them in the future. "

test/test_models.py::test_memory_efficient_densenet[densenet121]
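
The deprecation warnings above concern passing `destination`, or any argument positionally, to `nn.Module.state_dict`. A minimal sketch of the call styles involved (the toy module below is hypothetical and only for illustration):

```python
import torch.nn as nn

model = nn.Linear(4, 2)  # any nn.Module works here

# Deprecated style that triggers the warning above (positional arguments):
# sd = model.state_dict(None, "", False)

# Preferred style: no positional arguments; extras passed by keyword when needed.
sd = model.state_dict()
sd = model.state_dict(prefix="module.", keep_vars=False)
print(sorted(sd))  # ['module.bias', 'module.weight']
```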

See CircleCI build unittest_linux_cpu_py3.10 (2/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

/root/project/torchvision/io/video.py:406: Runt...log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  /root/project/env/lib/python3.10/site-packages/PIL/Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_probe_video_from_memory
  /root/project/torchvision/io/_video_opt.py:423: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /opt/conda/conda-bld/pytorch_1647328154790/work/torch/csrc/utils/tensor_new.cpp:954.)
    video_data = torch.frombuffer(video_data, dtype=torch.uint8)

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  /root/project/torchvision/io/video.py:406: RuntimeWarning: Failed to open container for /tmp/tmpytbzf3qw.mp4; Caught error: [Errno 1094995529] Invalid data found when processing input: '/tmp/tmpytbzf3qw.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  /root/project/env/lib/python3.10/site-packages/torch/nn/modules/module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]
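
The non-writable-buffer warning above is PyTorch's standard caveat for `torch.frombuffer` on read-only bytes. A small sketch of the usual workarounds (the byte string below is a stand-in for data read from a video file):

```python
import torch

raw = b"\x00\x01\x02\x03"  # stand-in for video bytes; Python bytes objects are read-only

# Wrapping the data in a writable bytearray avoids the warning entirely.
video_data = torch.frombuffer(bytearray(raw), dtype=torch.uint8)

# Alternative: keep frombuffer on the original bytes and detach with a copy.
# video_data = torch.frombuffer(raw, dtype=torch.uint8).clone()
print(video_data)
```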

See CircleCI build unittest_windows_cpu_py3.8 (3/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

c:\users\circleci\project\torchvision\io\video....\\circleci\\AppData\\Local\\Temp\\tmpyw8nk21v.mp4'
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  c:\users\circleci\project\torchvision\datasets\video_utils.py:217: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    warnings.warn(

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  C:\Users\circleci\project\env\lib\site-packages\PIL\Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  c:\users\circleci\project\torchvision\io\video.py:406: RuntimeWarning: Failed to open container for C:\Users\circleci\AppData\Local\Temp\tmpyw8nk21v.mp4; Caught error: [Errno 13] Permission denied: 'C:\\Users\\circleci\\AppData\\Local\\Temp\\tmpyw8nk21v.mp4'
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  C:\Users\circleci\project\env\lib\site-packages\torch\nn\modules\module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]
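
The Pillow warning above asks callers to convert palette images that carry byte transparency themselves. A minimal sketch of that conversion (the filename is the test fixture named above and is only illustrative):

```python
from PIL import Image

img = Image.open("palette_pytorch.png")  # palette ("P") image with byte transparency
if img.mode == "P" and "transparency" in img.info:
    img = img.convert("RGBA")  # explicit conversion, so Pillow no longer needs to warn
```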

See CircleCI build unittest_macos_cpu_py3.10 (4/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

/Users/distiller/project/torchvision/io/video.p...log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  /Users/distiller/project/torchvision/datasets/video_utils.py:217: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    warnings.warn(

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  /Users/distiller/project/env/lib/python3.10/site-packages/PIL/Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  /Users/distiller/project/torchvision/io/video.py:406: RuntimeWarning: Failed to open container for /var/folders/6y/gy9gggt14379c_k39vwb50lc0000gn/T/tmph_9lsmai.mp4; Caught error: [Errno 1094995529] Invalid data found when processing input: '/var/folders/6y/gy9gggt14379c_k39vwb50lc0000gn/T/tmph_9lsmai.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  /Users/distiller/project/env/lib/python3.10/site-packages/torch/nn/modules/module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]
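
The `video_utils` warning above fires when a video cannot supply even one clip of the requested length at the requested frame rate. A rough sketch of the knobs involved (path and numbers are hypothetical):

```python
from torchvision.datasets.video_utils import VideoClips

clips = VideoClips(
    ["short_clip.mp4"],        # hypothetical path to a short video
    clip_length_in_frames=8,   # keep this at or below the frames a video can supply
    frames_between_clips=4,
    frame_rate=6,              # resampling to a low fps further reduces the usable frames
)
print(clips.num_clips())       # 0 for a too-short video, which is exactly when the warning fires
```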

See CircleCI build unittest_windows_cpu_py3.9 (5/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

c:\users\circleci\project\torchvision\io\video....\\circleci\\AppData\\Local\\Temp\\tmpi9eo2m7_.mp4'
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  c:\users\circleci\project\torchvision\datasets\video_utils.py:217: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    warnings.warn(

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  C:\Users\circleci\project\env\lib\site-packages\PIL\Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  c:\users\circleci\project\torchvision\io\video.py:406: RuntimeWarning: Failed to open container for C:\Users\circleci\AppData\Local\Temp\tmpi9eo2m7_.mp4; Caught error: [Errno 13] Permission denied: 'C:\\Users\\circleci\\AppData\\Local\\Temp\\tmpi9eo2m7_.mp4'
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  C:\Users\circleci\project\env\lib\site-packages\torch\nn\modules\module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]

See CircleCI build unittest_linux_cpu_py3.9 (6/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

/root/project/torchvision/io/video.py:406: Runt...log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  /root/project/torchvision/datasets/video_utils.py:217: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    warnings.warn(

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  /root/project/env/lib/python3.9/site-packages/PIL/Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  /root/project/torchvision/io/video.py:406: RuntimeWarning: Failed to open container for /tmp/tmpanw_0izp.mp4; Caught error: [Errno 1094995529] Invalid data found when processing input: '/tmp/tmpanw_0izp.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  /root/project/env/lib/python3.9/site-packages/torch/nn/modules/module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]

See CircleCI build unittest_linux_cpu_py3.7 (7/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

/root/project/torchvision/io/video.py:406: Runt...log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  /root/project/env/lib/python3.7/site-packages/PIL/Image.py:946: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    "Palette images with Transparency expressed in bytes should be "

test/test_io.py::TestVideo::test_probe_video_from_memory
  /root/project/torchvision/io/_video_opt.py:423: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /opt/conda/conda-bld/pytorch_1647328133854/work/torch/csrc/utils/tensor_new.cpp:954.)
    video_data = torch.frombuffer(video_data, dtype=torch.uint8)

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  /root/project/torchvision/io/video.py:406: RuntimeWarning: Failed to open container for /tmp/tmp6lgunj9x.mp4; Caught error: [Errno 1094995529] Invalid data found when processing input: '/tmp/tmp6lgunj9x.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  /root/project/env/lib/python3.7/site-packages/torch/nn/modules/module.py:1384: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    " and ".join(warn_msg) + " are deprecated. nn.Module.state_dict will not accept them in the future. "

test/test_models.py::test_memory_efficient_densenet[densenet121]

See CircleCI build unittest_linux_cpu_py3.8 (8/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

/root/project/torchvision/io/video.py:406: Runt...log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  /root/project/env/lib/python3.8/site-packages/PIL/Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_probe_video_from_memory
  /root/project/torchvision/io/_video_opt.py:423: UserWarning: The given buffer is not writable, and PyTorch does not support non-writable tensors. This means you can write to the underlying (supposedly non-writable) buffer using the tensor. You may want to copy the buffer to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at  /opt/conda/conda-bld/pytorch_1647328139846/work/torch/csrc/utils/tensor_new.cpp:954.)
    video_data = torch.frombuffer(video_data, dtype=torch.uint8)

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  /root/project/torchvision/io/video.py:406: RuntimeWarning: Failed to open container for /tmp/tmpqy4skw0a.mp4; Caught error: [Errno 1094995529] Invalid data found when processing input: '/tmp/tmpqy4skw0a.mp4'; last error log: [mov,mp4,m4a,3gp,3g2,mj2] moov atom not found
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  /root/project/env/lib/python3.8/site-packages/torch/nn/modules/module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]

See CircleCI build unittest_windows_cpu_py3.10 (9/9)

Step: "Run tests" (full log | diagnosis details | 🔁 rerun)

c:\users\circleci\project\torchvision\io\video....\\circleci\\AppData\\Local\\Temp\\tmpctmn_ty1.mp4'
test/test_datasets_video_utils.py::TestVideo::test_video_clips_custom_fps
  c:\users\circleci\project\torchvision\datasets\video_utils.py:217: UserWarning: There aren't enough frames in the current video to get a clip for the given clip length and frames between clips. The video (and potentially others) will be skipped.
    warnings.warn(

test/test_image.py::test_decode_png[L-ImageReadMode.GRAY-palette_pytorch.png]
test/test_image.py::test_decode_png[RGB-ImageReadMode.RGB-palette_pytorch.png]
  C:\Users\circleci\project\env\lib\site-packages\PIL\Image.py:945: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images
    warnings.warn(

test/test_io.py::TestVideo::test_read_video_timestamps_corrupted_file
  c:\users\circleci\project\torchvision\io\video.py:406: RuntimeWarning: Failed to open container for C:\Users\circleci\AppData\Local\Temp\tmpctmn_ty1.mp4; Caught error: [Errno 13] Permission denied: 'C:\\Users\\circleci\\AppData\\Local\\Temp\\tmpctmn_ty1.mp4'
    warnings.warn(msg, RuntimeWarning)

test/test_models.py::test_memory_efficient_densenet[densenet121]
test/test_models.py::test_memory_efficient_densenet[densenet169]
test/test_models.py::test_memory_efficient_densenet[densenet201]
test/test_models.py::test_memory_efficient_densenet[densenet161]
  C:\Users\circleci\project\env\lib\site-packages\torch\nn\modules\module.py:1383: UserWarning: positional arguments and argument "destination" are deprecated. nn.Module.state_dict will not accept them in the future. Refer to https://pytorch.org/docs/master/generated/torch.nn.Module.html#torch.nn.Module.state_dict for details.
    warnings.warn(

test/test_models.py::test_memory_efficient_densenet[densenet121]

1 failure not recognized by patterns:

Job: CircleCI unittest_macos_cpu_py3.9 | Step: Run tests | Action: 🔁 rerun

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Member

@NicolasHug NicolasHug left a comment

Thanks @datumbox, LGTM, modulo checking that the built example gallery looks OK visually.
In the future we should go back through the examples and lightly edit the parts of the narration that may become outdated with the new API (typically where we tell users they should normalize, etc.).
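
For context, the pattern the new multi-weight API enables looks roughly like the sketch below, which is why much of the manual normalization narration can eventually be dropped (the enum value and image path are illustrative):

```python
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V2       # illustrative weight entry
model = resnet50(weights=weights).eval()

preprocess = weights.transforms()              # resize/crop/normalize bundled with the weights
img = read_image("dog.jpg")                    # hypothetical input image
scores = model(preprocess(img).unsqueeze(0)).softmax(dim=1)
```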

@datumbox
Contributor Author

@NicolasHug Absolutely. I'm just taking care of the code changes for now and ensuring that nothing breaks. We should put serious effort into the docs once the porting is completed. Thanks for the review.

@datumbox datumbox changed the title from "Update examples, tutorials and galleries for multiweight" to "Porting examples, tutorials and galleries" on Mar 15, 2022
@datumbox datumbox changed the title from "Porting examples, tutorials and galleries" to "Porting docs, examples, tutorials and galleries" on Mar 15, 2022
@datumbox datumbox merged commit 6d96ed5 into pytorch:multiweight Mar 15, 2022
@datumbox datumbox deleted the multiweight_docs branch March 15, 2022 18:07
datumbox added a commit that referenced this pull request Mar 22, 2022
* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.

* Porting googlenet

* Porting inception

* Porting mnasnet

* Porting mobilenetv2

* Porting mobilenetv3

* Porting regnet

* Porting resnet

* Porting shufflenetv2

* Porting squeezenet

* Porting vgg

* Porting vit

* Fix docstrings

* Fixing imports

* Adding missing import

* Fix mobilenet imports

* Fix tests

* Fix prototype tests

* Exclude get_weight from models on test

* Fix init files

* Porting googlenet

* Porting inception

* porting mobilenetv2

* porting mobilenetv3

* porting resnet

* porting shufflenetv2

* Fix test and linter

* Fixing docs.

* Porting Detection models (#5617)

* fix inits

* fix docs

* Port faster_rcnn

* Port fcos

* Port keypoint_rcnn

* Port mask_rcnn

* Port retinanet

* Port ssd

* Port ssdlite

* Fix linter

* Fixing tests

* Fixing tests

* Fixing vgg test

* Porting Optical Flow, Segmentation, Video models (#5619)

* Porting raft

* Porting video resnet

* Porting deeplabv3

* Porting fcn and lraspp

* Fixing the tests and linter

* Porting docs, examples, tutorials and galleries (#5620)

* Fix examples, tutorials and gallery

* Update gallery/plot_optical_flow.py

Co-authored-by: Nicolas Hug <[email protected]>

* Fix import

* Revert hardcoded normalization

* fix uncommitted changes

* Fix bug

* Fix more bugs

* Making resize optional for segmentation

* Fixing preset

* Fix mypy

* Fixing documentation strings

* Fix flake8

* minor refactoring

Co-authored-by: Nicolas Hug <[email protected]>

* Resolve conflict

* Porting model tests (#5622)

* Porting tests

* Remove unnecessary variable

* Fix linter

* Move prototype to extended tests

* Fix download models job

* Update CI on Multiweight branch to use the new weight download approach (#5628)

* port Pad to prototype transforms (#5621)

* port Pad to prototype transforms

* use literal

* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)

Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>

* pre-download model weights in CI docs build (#5625)

* pre-download model weights in CI docs build

* move changes into template

* change docs image

* Regenerated config.yml

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>

* Porting reference scripts and updating presets (#5629)

* Making _preset.py classes

* Remove support of targets on presets.

* Rewriting the video preset

* Adding tests to check that the bundled transforms are JIT scriptable

* Rename all presets from *Eval to *Inference

* Minor refactoring

* Remove --prototype and --pretrained from reference scripts

* remove pretrained_backbone refs

* Corrections and simplifications

* Fixing bug

* Fixing linter

* Fix flake8

* restore documentation example

* minor fixes

* fix optical flow missing param

* Fixing commands

* Adding weights_backbone support in detection and segmentation

* Updating the commands for InceptionV3

* Setting `weights_backbone` to its fully BC value (#5653)

* Replace default `weights_backbone=None` with its BC values.

* Fixing tests

* Fix linter

* Update docs.

* Update preprocessing on reference scripts.

* Change qat/ptq to their full values.

* Refactoring preprocessing

* Fix video preset

* No initialization on VGG if pretrained

* Fix warning messages for backbone utils.

* Adding star to all preset constructors.

* Fix mypy.

Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
lezwon pushed a commit to lezwon/vision that referenced this pull request Mar 23, 2022
* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.

* Porting googlenet

* Porting inception

* Porting mnasnet

* Porting mobilenetv2

* Porting mobilenetv3

* Porting regnet

* Porting resnet

* Porting shufflenetv2

* Porting squeezenet

* Porting vgg

* Porting vit

* Fix docstrings

* Fixing imports

* Adding missing import

* Fix mobilenet imports

* Fix tests

* Fix prototype tests

* Exclude get_weight from models on test

* Fix init files

* Porting googlenet

* Porting inception

* porting mobilenetv2

* porting mobilenetv3

* porting resnet

* porting shufflenetv2

* Fix test and linter

* Fixing docs.

* Porting Detection models (pytorch#5617)

* fix inits

* fix docs

* Port faster_rcnn

* Port fcos

* Port keypoint_rcnn

* Port mask_rcnn

* Port retinanet

* Port ssd

* Port ssdlite

* Fix linter

* Fixing tests

* Fixing tests

* Fixing vgg test

* Porting Optical Flow, Segmentation, Video models (pytorch#5619)

* Porting raft

* Porting video resnet

* Porting deeplabv3

* Porting fcn and lraspp

* Fixing the tests and linter

* Porting docs, examples, tutorials and galleries (pytorch#5620)

* Fix examples, tutorials and gallery

* Update gallery/plot_optical_flow.py

Co-authored-by: Nicolas Hug <[email protected]>

* Fix import

* Revert hardcoded normalization

* fix uncommitted changes

* Fix bug

* Fix more bugs

* Making resize optional for segmentation

* Fixing preset

* Fix mypy

* Fixing documentation strings

* Fix flake8

* minor refactoring

Co-authored-by: Nicolas Hug <[email protected]>

* Resolve conflict

* Porting model tests (pytorch#5622)

* Porting tests

* Remove unnecessary variable

* Fix linter

* Move prototype to extended tests

* Fix download models job

* Update CI on Multiweight branch to use the new weight download approach (pytorch#5628)

* port Pad to prototype transforms (pytorch#5621)

* port Pad to prototype transforms

* use literal

* Bump up LibTorchvision version number for Podspec to release Cocoapods (pytorch#5624)

Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>

* pre-download model weights in CI docs build (pytorch#5625)

* pre-download model weights in CI docs build

* move changes into template

* change docs image

* Regenerated config.yml

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>

* Porting reference scripts and updating presets (pytorch#5629)

* Making _preset.py classes

* Remove support of targets on presets.

* Rewriting the video preset

* Adding tests to check that the bundled transforms are JIT scriptable

* Rename all presets from *Eval to *Inference

* Minor refactoring

* Remove --prototype and --pretrained from reference scripts

* remove pretrained_backbone refs

* Corrections and simplifications

* Fixing bug

* Fixing linter

* Fix flake8

* restore documentation example

* minor fixes

* fix optical flow missing param

* Fixing commands

* Adding weights_backbone support in detection and segmentation

* Updating the commands for InceptionV3

* Setting `weights_backbone` to its fully BC value (pytorch#5653)

* Replace default `weights_backbone=None` with its BC values.

* Fixing tests

* Fix linter

* Update docs.

* Update preprocessing on reference scripts.

* Change qat/ptq to their full values.

* Refactoring preprocessing

* Fix video preset

* No initialization on VGG if pretrained

* Fix warning messages for backbone utils.

* Adding star to all preset constructors.

* Fix mypy.

Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
pmeier added a commit that referenced this pull request Mar 25, 2022
* added usps dataset

* fixed type issues

* fix mobilenet norm layer test (#5643)

* xfail mobilenet norm layer test

* fix test

* More robust check in tests for 16 bits images (#5652)

* Prefer nvidia channel for conda builds (#5648)

To mitigate missing `libcupti.so` dependency

* fix torchdata CI installation (#5657)

* update urls for kinetics dataset (#5578)

* update urls for kinetics dataset

* update urls for kinetics dataset

* remove errors

* update the changes and add test option to split

* added test to valid values for split arg

* change .txt to .csv for annotation url of k600

Co-authored-by: Nicolas Hug <[email protected]>

* Port Multi-weight support from prototype to main (#5618)

* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.

* Porting googlenet

* Porting inception

* Porting mnasnet

* Porting mobilenetv2

* Porting mobilenetv3

* Porting regnet

* Porting resnet

* Porting shufflenetv2

* Porting squeezenet

* Porting vgg

* Porting vit

* Fix docstrings

* Fixing imports

* Adding missing import

* Fix mobilenet imports

* Fix tests

* Fix prototype tests

* Exclude get_weight from models on test

* Fix init files

* Porting googlenet

* Porting inception

* porting mobilenetv2

* porting mobilenetv3

* porting resnet

* porting shufflenetv2

* Fix test and linter

* Fixing docs.

* Porting Detection models (#5617)

* fix inits

* fix docs

* Port faster_rcnn

* Port fcos

* Port keypoint_rcnn

* Port mask_rcnn

* Port retinanet

* Port ssd

* Port ssdlite

* Fix linter

* Fixing tests

* Fixing tests

* Fixing vgg test

* Porting Optical Flow, Segmentation, Video models (#5619)

* Porting raft

* Porting video resnet

* Porting deeplabv3

* Porting fcn and lraspp

* Fixing the tests and linter

* Porting docs, examples, tutorials and galleries (#5620)

* Fix examples, tutorials and gallery

* Update gallery/plot_optical_flow.py

Co-authored-by: Nicolas Hug <[email protected]>

* Fix import

* Revert hardcoded normalization

* fix uncommitted changes

* Fix bug

* Fix more bugs

* Making resize optional for segmentation

* Fixing preset

* Fix mypy

* Fixing documentation strings

* Fix flake8

* minor refactoring

Co-authored-by: Nicolas Hug <[email protected]>

* Resolve conflict

* Porting model tests (#5622)

* Porting tests

* Remove unnecessary variable

* Fix linter

* Move prototype to extended tests

* Fix download models job

* Update CI on Multiweight branch to use the new weight download approach (#5628)

* port Pad to prototype transforms (#5621)

* port Pad to prototype transforms

* use literal

* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)

Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>

* pre-download model weights in CI docs build (#5625)

* pre-download model weights in CI docs build

* move changes into template

* change docs image

* Regenerated config.yml

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>

* Porting reference scripts and updating presets (#5629)

* Making _preset.py classes

* Remove support of targets on presets.

* Rewriting the video preset

* Adding tests to check that the bundled transforms are JIT scriptable

* Rename all presets from *Eval to *Inference

* Minor refactoring

* Remove --prototype and --pretrained from reference scripts

* remove pretrained_backbone refs

* Corrections and simplifications

* Fixing bug

* Fixing linter

* Fix flake8

* restore documentation example

* minor fixes

* fix optical flow missing param

* Fixing commands

* Adding weights_backbone support in detection and segmentation

* Updating the commands for InceptionV3

* Setting `weights_backbone` to its fully BC value (#5653)

* Replace default `weights_backbone=None` with its BC values.

* Fixing tests

* Fix linter

* Update docs.

* Update preprocessing on reference scripts.

* Change qat/ptq to their full values.

* Refactoring preprocessing

* Fix video preset

* No initialization on VGG if pretrained

* Fix warning messages for backbone utils.

* Adding star to all preset constructors.

* Fix mypy.

Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>

* Apply suggestions from code review

Co-authored-by: Philip Meier <[email protected]>

* use decompressor for extracting bz2

* Apply suggestions from code review

Co-authored-by: Philip Meier <[email protected]>

* Apply suggestions from code review

Co-authored-by: Philip Meier <[email protected]>

* fixed lint fails

* added tests for USPS

* check image shape

* fix tests

* check shape on image directly

* Apply suggestions from code review

Co-authored-by: Philip Meier <[email protected]>

* removed test and comments

* Update test/test_prototype_builtin_datasets.py

Co-authored-by: Nicolas Hug <[email protected]>

Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Nikita Shulga <[email protected]>
Co-authored-by: Sahil Goyal <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
facebook-github-bot pushed a commit that referenced this pull request Apr 5, 2022
Summary:
* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.

* Porting googlenet

* Porting inception

* Porting mnasnet

* Porting mobilenetv2

* Porting mobilenetv3

* Porting regnet

* Porting resnet

* Porting shufflenetv2

* Porting squeezenet

* Porting vgg

* Porting vit

* Fix docstrings

* Fixing imports

* Adding missing import

* Fix mobilenet imports

* Fix tests

* Fix prototype tests

* Exclude get_weight from models on test

* Fix init files

* Porting googlenet

* Porting inception

* porting mobilenetv2

* porting mobilenetv3

* porting resnet

* porting shufflenetv2

* Fix test and linter

* Fixing docs.

* Porting Detection models (#5617)

* fix inits

* fix docs

* Port faster_rcnn

* Port fcos

* Port keypoint_rcnn

* Port mask_rcnn

* Port retinanet

* Port ssd

* Port ssdlite

* Fix linter

* Fixing tests

* Fixing tests

* Fixing vgg test

* Porting Optical Flow, Segmentation, Video models (#5619)

* Porting raft

* Porting video resnet

* Porting deeplabv3

* Porting fcn and lraspp

* Fixing the tests and linter

* Porting docs, examples, tutorials and galleries (#5620)

* Fix examples, tutorials and gallery

* Update gallery/plot_optical_flow.py

* Fix import

* Revert hardcoded normalization

* fix uncommitted changes

* Fix bug

* Fix more bugs

* Making resize optional for segmentation

* Fixing preset

* Fix mypy

* Fixing documentation strings

* Fix flake8

* minor refactoring

* Resolve conflict

* Porting model tests (#5622)

* Porting tests

* Remove unnecessary variable

* Fix linter

* Move prototype to extended tests

* Fix download models job

* Update CI on Multiweight branch to use the new weight download approach (#5628)

* port Pad to prototype transforms (#5621)

* port Pad to prototype transforms

* use literal

* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)

* pre-download model weights in CI docs build (#5625)

* pre-download model weights in CI docs build

* move changes into template

* change docs image

* Regenerated config.yml

* Porting reference scripts and updating presets (#5629)

* Making _preset.py classes

* Remove support of targets on presets.

* Rewriting the video preset

* Adding tests to check that the bundled transforms are JIT scriptable

* Rename all presets from *Eval to *Inference

* Minor refactoring

* Remove --prototype and --pretrained from reference scripts

* remove pretrained_backbone refs

* Corrections and simplifications

* Fixing bug

* Fixing linter

* Fix flake8

* restore documentation example

* minor fixes

* fix optical flow missing param

* Fixing commands

* Adding weights_backbone support in detection and segmentation

* Updating the commands for InceptionV3

* Setting `weights_backbone` to its fully BC value (#5653)

* Replace default `weights_backbone=None` with its BC values.

* Fixing tests

* Fix linter

* Update docs.

* Update preprocessing on reference scripts.

* Change qat/ptq to their full values.

* Refactoring preprocessing

* Fix video preset

* No initialization on VGG if pretrained

* Fix warning messages for backbone utils.

* Adding star to all preset constructors.

* Fix mypy.

(Note: this ignores all push blocking failures!)

Reviewed By: datumbox

Differential Revision: D35216786

fbshipit-source-id: 0278c291d89b1d5e90a51d113ce226d807d067e5

Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
facebook-github-bot pushed a commit that referenced this pull request Apr 5, 2022
Summary:
* added usps dataset

* fixed type issues

* fix mobilenet norm layer test (#5643)

* xfail mobilenet norm layer test

* fix test

* More robust check in tests for 16 bits images (#5652)

* Prefer nvidia channel for conda builds (#5648)

To mitigate missing `libcupti.so` dependency

* fix torchdata CI installation (#5657)

* update urls for kinetics dataset (#5578)

* update urls for kinetics dataset

* update urls for kinetics dataset

* remove errors

* update the changes and add test option to split

* added test to valid values for split arg

* change .txt to .csv for annotation url of k600

* Port Multi-weight support from prototype to main (#5618)

* Moving basefiles outside of prototype and porting Alexnet, ConvNext, Densenet and EfficientNet.

* Porting googlenet

* Porting inception

* Porting mnasnet

* Porting mobilenetv2

* Porting mobilenetv3

* Porting regnet

* Porting resnet

* Porting shufflenetv2

* Porting squeezenet

* Porting vgg

* Porting vit

* Fix docstrings

* Fixing imports

* Adding missing import

* Fix mobilenet imports

* Fix tests

* Fix prototype tests

* Exclude get_weight from models on test

* Fix init files

* Porting googlenet

* Porting inception

* porting mobilenetv2

* porting mobilenetv3

* porting resnet

* porting shufflenetv2

* Fix test and linter

* Fixing docs.

* Porting Detection models (#5617)

* fix inits

* fix docs

* Port faster_rcnn

* Port fcos

* Port keypoint_rcnn

* Port mask_rcnn

* Port retinanet

* Port ssd

* Port ssdlite

* Fix linter

* Fixing tests

* Fixing tests

* Fixing vgg test

* Porting Optical Flow, Segmentation, Video models (#5619)

* Porting raft

* Porting video resnet

* Porting deeplabv3

* Porting fcn and lraspp

* Fixing the tests and linter

* Porting docs, examples, tutorials and galleries (#5620)

* Fix examples, tutorials and gallery

* Update gallery/plot_optical_flow.py

* Fix import

* Revert hardcoded normalization

* fix uncommitted changes

* Fix bug

* Fix more bugs

* Making resize optional for segmentation

* Fixing preset

* Fix mypy

* Fixing documentation strings

* Fix flake8

* minor refactoring

* Resolve conflict

* Porting model tests (#5622)

* Porting tests

* Remove unnecessary variable

* Fix linter

* Move prototype to extended tests

* Fix download models job

* Update CI on Multiweight branch to use the new weight download approach (#5628)

* port Pad to prototype transforms (#5621)

* port Pad to prototype transforms

* use literal

* Bump up LibTorchvision version number for Podspec to release Cocoapods (#5624)

* pre-download model weights in CI docs build (#5625)

* pre-download model weights in CI docs build

* move changes into template

* change docs image

* Regenerated config.yml

* Porting reference scripts and updating presets (#5629)

* Making _preset.py classes

* Remove support of targets on presets.

* Rewriting the video preset

* Adding tests to check that the bundled transforms are JIT scriptable

* Rename all presets from *Eval to *Inference

* Minor refactoring

* Remove --prototype and --pretrained from reference scripts

* remove pretrained_backbone refs

* Corrections and simplifications

* Fixing bug

* Fixing linter

* Fix flake8

* restore documentation example

* minor fixes

* fix optical flow missing param

* Fixing commands

* Adding weights_backbone support in detection and segmentation

* Updating the commands for InceptionV3

* Setting `weights_backbone` to its fully BC value (#5653)

* Replace default `weights_backbone=None` with its BC values.

* Fixing tests

* Fix linter

* Update docs.

* Update preprocessing on reference scripts.

* Change qat/ptq to their full values.

* Refactoring preprocessing

* Fix video preset

* No initialization on VGG if pretrained

* Fix warning messages for backbone utils.

* Adding star to all preset constructors.

* Fix mypy.

* Apply suggestions from code review

* use decompressor for extracting bz2

* Apply suggestions from code review

* Apply suggestions from code review

* fixed lint fails

* added tests for USPS

* check image shape

* fix tests

* check shape on image directly

* Apply suggestions from code review

* removed test and comments

* Update test/test_prototype_builtin_datasets.py

(Note: this ignores all push blocking failures!)

Reviewed By: datumbox

Differential Revision: D35216783

fbshipit-source-id: 556a63a89f15d1541ac2b479244a7b6c564eff14

Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Philip Meier <[email protected]>
Co-authored-by: Nicolas Hug <[email protected]>
Co-authored-by: Nikita Shulga <[email protected]>
Co-authored-by: Sahil Goyal <[email protected]>
Co-authored-by: Vasilis Vryniotis <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>
Co-authored-by: Anton Thomma <[email protected]>