
Installing with pytorch 2.0 #36

Closed
samar-khanna opened this issue Jan 11, 2024 · 1 comment

Comments


samar-khanna commented Jan 11, 2024

Thanks to the authors for making their very helpful code publicly available. This is for those of you working with newer versions of torch.

Currently, if you try to run `python setup.py install` with torch 2.0 (python3.8, cuda 11.8), it will fail to compile. To solve this, you need to add just one line in `setup.py` on L30:

        nvcc_flags += ['-std=c++14']
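For context, here is a minimal sketch of where that flag ends up (the flag names and surrounding values are illustrative, not the repo's actual `setup.py`; as the EDIT below notes, this one line did not ultimately fix the build):

```python
# Illustrative sketch only; the repo's actual setup.py differs.
# The suggested fix appends the C++14 standard flag to the list of
# flags that torch.utils.cpp_extension passes on to nvcc.
nvcc_flags = ['-O3', '--use_fast_math']  # hypothetical pre-existing flags

nvcc_flags += ['-std=c++14']  # the one added line (around L30 of setup.py)

# The list would then be handed to the CUDA extension builder, e.g.:
#   extra_compile_args={'cxx': cxx_flags, 'nvcc': nvcc_flags}
print(nvcc_flags)
```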

EDIT: This actually didn't work; it only seemed to compile for me because it clashed with an existing build, and it fails if I try to build from scratch.

@samar-khanna closed this as not planned on Jan 11, 2024
@bytesnake

Replacing the dynamic dispatch macro with a restricted version of the ATen macro works for me (I also had to downgrade gcc to 10, see NVlabs/instant-ngp#119):

    #undef AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES
    #define AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES(TYPE, NAME, ...)   \
      AT_DISPATCH_SWITCH(TYPE, NAME,                                  \
          AT_DISPATCH_CASE(at::ScalarType::ComplexFloat, __VA_ARGS__) \
          AT_DISPATCH_CASE(at::ScalarType::Float, __VA_ARGS__))
