activation_kernel.cu(21): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag #251
I am making a new version, which is in PR #256 with new setup instructions. This new PR will be merged soon. Let me know if you still have the issue.
I'm sorry, I still hit some bugs. My env is Ubuntu 18.04, torch 1.4.0, CUDA 10.1.

```
running install
/home/kururu/anaconda3/envs/kururudev-torchdev/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/functional/padding.h(14): warning: integer conversion resulted in a change of sign
    (the same warning repeats for padding.h lines 15, 18, 19, 23, and 24, some several times)
/home/kururu/anaconda3/envs/kururudev-torchdev/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/profiler.h(97): warning: attribute "visibility" does not apply here
/home/kururu/anaconda3/envs/kururudev-torchdev/lib/python3.6/site-packages/torch/include/torch/csrc/autograd/profiler.h(112): warning: attribute "visibility" does not apply here
/home/kururu/anaconda3/envs/kururudev-torchdev/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/enum.h(179): warning: statement is unreachable
activation_kernel.cu(20): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag
activation_kernel.cu(21): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag
activation_kernel.cu(23): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag
activation_kernel.cu(24): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag
/usr/local/cuda/include/thrust/system/cuda/detail/core/agent_launcher.h(926): error: The closure type for a lambda ("lambda [](const double &)->double", defined at activation_kernel.cu:20) cannot be used in the template argument type of a __global__ function template instantiation, unless the lambda is defined within a __device__ or __global__ function, or the lambda is an 'extended lambda' and the flag --expt-extended-lambda is specified
    (the same error repeats for the double and float lambdas defined at activation_kernel.cu lines 20, 21, 23, and 24)
20 errors detected in the compilation of "/tmp/tmpxft_00006a79_00000000-6_activation_kernel.cpp1.ii".
```
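For what it's worth, these nvcc errors come from `__host__`/`__device__` lambdas, which nvcc only accepts when the extended-lambda feature is enabled. A minimal sketch of how that flag can be forwarded through a PyTorch C++/CUDA extension build; this is not the project's actual build script, and the package and module names here are made up for illustration:

```python
# Hypothetical setup.py fragment: pass --expt-extended-lambda to nvcc only,
# so __host__/__device__ lambdas in activation_kernel.cu can compile.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="example_ext",  # illustrative package name
    ext_modules=[
        CUDAExtension(
            name="example_ext",  # illustrative module name
            sources=["activation_kernel.cu"],
            extra_compile_args={
                "cxx": [],
                # nvcc-only flag: enable extended (__host__ __device__) lambdas
                "nvcc": ["--expt-extended-lambda"],
            },
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

With this in place, `python setup.py install` should invoke nvcc with the flag and the "annotation on lambda requires --expt-extended-lambda" errors should go away.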
Also, when I only use encoding.nn.SyncBatchNorm for model testing, my Python script hangs: it fills GPU memory but shows no compute utilization.
BTW, I only installed ninja 1.8.2 in my Python env, not in my system. Does that matter?
I am not an expert in system setup. I haven't tried Ubuntu 18.04 or CUDA 10.1. My setting is Ubuntu 16.04 and CUDA 10.0 with PyTorch 1.4.0.
Thank you for your patient explanation.
That's weird. https://github.com/zhanghang1989/PyTorch-Encoding/blob/master/encoding/nn/syncbn.py#L175-L176 The eval mode should use the standard BN forward.
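For context, "standard BN forward" in eval mode means normalizing with the stored running statistics rather than per-batch statistics, so no cross-GPU synchronization (and hence no hang) should be involved. A minimal pure-Python sketch of that eval-mode computation; the numbers are made up purely for illustration:

```python
import math

def batchnorm_eval(x, running_mean, running_var, gamma=1.0, beta=0.0, eps=1e-5):
    """Eval-mode BN: normalize each value with the *running* statistics
    accumulated during training, not the statistics of the current batch."""
    return [gamma * (v - running_mean) / math.sqrt(running_var + eps) + beta
            for v in x]

# Illustrative call: values are centered by running_mean and scaled by
# 1/sqrt(running_var + eps); no reduction over the batch is needed.
out = batchnorm_eval([1.0, 3.0], running_mean=2.0, running_var=1.0)
```

Because eval mode never computes batch statistics, a correctly configured `model.eval()` pass through a sync BN layer should behave like ordinary BN and not block on other processes.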
Are you using the most recent version of the code?
Could you try `pip install torch-encoding --pre`, which installs the most recent version?
Thanks for your patient explanation. What I used is the most recent version.
Is your issue related to PyCharm, like #260?
activation_kernel.cu(21): error: __host__ or __device__ annotation on lambda requires --expt-extended-lambda nvcc flag
What is the problem?
Originally posted by @zhuizhunew in #66 (comment)
same issue!