
[Torch][AArch64] Skip test_load_model___wrong_language__to_pytorch #12660

Merged 1 commit into apache:main on Aug 31, 2022

Conversation

@leandron (Contributor)

This patch makes test_load_model___wrong_language__to_pytorch be
skipped on AArch64, due to a bug that can be reproduced when enabling
the Integration Tests on machines where Torch is installed alongside TVM.

The error message seen is:

```
OSError: /usr/local/lib/python3.7/dist-packages/torch/lib/
libgomp-d22c30c5.so.1: cannot allocate memory in static TLS block
```

While the test needs further investigation, it is being skipped so
that other tests can be enabled without regressing, allowing time for
that investigation to happen.

This relates to the issue described in #10673.

cc @Mousius @driazati @NicolaLancellotti @ashutosh-arm @areusch @gromero
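The diff itself is not shown above; as a rough sketch, such a skip is typically expressed with a pytest marker keyed on the machine architecture. The marker name and test body below are illustrative assumptions, not the actual contents of tests/python/driver/tvmc/test_frontends.py:

```python
import platform

import pytest

# Illustrative marker (an assumption, not the actual patch): skip on
# AArch64, where importing torch can fail with "cannot allocate memory
# in static TLS block".
skip_on_aarch64 = pytest.mark.skipif(
    platform.machine() == "aarch64",
    reason="Torch fails to load on AArch64: libgomp cannot allocate "
    "memory in static TLS block",
)


@skip_on_aarch64
def test_load_model___wrong_language__to_pytorch():
    # Body elided; the real test lives in the TVM test suite.
    pass
```

On non-AArch64 machines the condition is false and the test runs as usual, so coverage is only reduced on the affected platform.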

Change-Id: I463ebaff9a7708552a937c6c3d7897314bb1a59b
@leandron (Contributor, Author)

Once this is merged, we can then proceed and:

  1. Merge [TFLite][CI] Update TensorFlow dependency to 2.9.1 #12131
  2. Generate/update the Docker images
  3. Merge [TFLite] Support quantized GREATER op in TFLite frontend #11519

@driazati (Member) left a comment

Just curious, why does CI not show this failure if it's running pytest tests/python/driver on aarch64 after #10677?

tests/python/driver/tvmc/test_frontends.py (review comment resolved)
@leandron (Contributor, Author)

> Just curious, why does CI not show this failure if it's running pytest tests/python/driver on aarch64 after #10677?

This is actually something I reported in #12529, and it is exactly what is blocking us from updating the AArch64 images with ONNX/Torch and TF updates for better coverage.

#10677 is merged, but the production images do not reflect that yet.

@driazati (Member) left a comment

Thanks for the explanation! I'm eagerly watching this work since it will fix up #11995 so that process can start working again.

@leandron (Contributor, Author)

It seems CI is happy now. FYI @driazati @areusch @Mousius

@driazati driazati merged commit d54c065 into apache:main Aug 31, 2022
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022