fix AttributeError crash when running on non-CUDA systems #256

Merged
merged 4 commits into main from fix-crash-on-mps on Aug 31, 2022

Conversation

@lstein (Collaborator) commented Aug 31, 2022

No description provided.

@lstein requested a review from tildebyte on August 31, 2022 16:00
@lstein (Collaborator, Author) commented Aug 31, 2022

This is a very minor update, but it gets dream.py running on Mac MPS systems.

@tildebyte (Contributor)

Squash, please 😁

Otherwise, LGTM 👍

@tiems90 commented Aug 31, 2022

Just checked out this branch on an M1 Mac, but still getting an issue here.

Traceback (most recent call last):
  File "/Users/timo/Code/images/stable-diffusion/scripts/dream.py", line 535, in <module>
    main()
  File "/Users/timo/Code/images/stable-diffusion/scripts/dream.py", line 100, in main
    main_loop(t2i, opt.outdir, opt.prompt_as_dir, cmd_parser, infile)
  File "/Users/timo/Code/images/stable-diffusion/scripts/dream.py", line 227, in main_loop
    t2i.prompt2image(image_callback=image_writer, **vars(opt))
  File "/Users/timo/Code/images/stable-diffusion/ldm/simplet2i.py", line 289, in prompt2image
    torch.cuda.torch.cuda.reset_peak_memory_stats()
  File "/opt/homebrew/Caskroom/miniforge/base/envs/ldm/lib/python3.10/site-packages/torch/cuda/memory.py", line 260, in reset_peak_memory_stats
    return torch._C._cuda_resetPeakMemoryStats(device)
AttributeError: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats'
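For context: this AttributeError happens because PyTorch builds compiled without CUDA support (such as the macOS/MPS builds) simply don't include the torch._C._cuda_* bindings, so any torch.cuda.* memory-stats call blows up. A minimal sketch of a guard, independent of the actual patch in this PR (reset_gpu_peak_stats is a hypothetical helper name):

import torch

def reset_gpu_peak_stats():
    # torch.cuda.is_available() is safe to call on every build; the
    # torch.cuda memory-stats functions are not, since non-CUDA builds
    # lack the underlying torch._C._cuda_* bindings.
    if torch.cuda.is_available():
        torch.cuda.reset_peak_memory_stats()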

@eldridgegreg

> Just checked out this branch on an M1 Mac, but still getting an issue here.
>
> [same AttributeError traceback as above]

Applying this change in ldm/simplet2i.py:
-        torch.cuda.torch.cuda.reset_peak_memory_stats()
+        if self.device == 'cuda':
+            torch.cuda.torch.cuda.reset_peak_memory_stats()

That resolves the crash for me, but the script is still non-functional:

dream> monkey eating a doughnut
User specified autocast device_type must be 'cuda' or 'cpu'
Are you sure your system has an adequate NVIDIA GPU?
Usage stats:
   0 image(s) generated in 0.00s
   Max VRAM used for this generation: 0.00G
Outputs:
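The "autocast device_type" error above is a separate problem: torch.autocast in the PyTorch releases of this era accepts only 'cuda' or 'cpu' as device_type, so code running on MPS has to fall back to 'cpu' for autocast (or skip autocast entirely) even when the tensors live on 'mps'. A rough sketch of that device selection, not taken from this repository and assuming a PyTorch build new enough to expose torch.backends.mps:

import torch

# Pick the compute device, preferring CUDA, then MPS, then CPU.
if torch.cuda.is_available():
    device = 'cuda'
elif torch.backends.mps.is_available():
    device = 'mps'
else:
    device = 'cpu'

# autocast only understands 'cuda' and 'cpu' here, so MPS maps to 'cpu'.
autocast_device = 'cuda' if device == 'cuda' else 'cpu'
with torch.autocast(device_type=autocast_device):
    pass  # the model's forward pass would run here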

@tildebyte (Contributor)

> AttributeError: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats'

As mentioned elsewhere, Mx/MPS support is very much a W.I.P.; there's a PR open to fix this exact issue.

@tiems90 commented Aug 31, 2022

Yeah, I believe this is exactly that PR, based on the branch name fix-crash-on-MPS. I read that neither of you has access to Mx devices, so I thought it might be useful to know that the fix in this PR does not resolve the issue.

If it's not helpful, apologies.

@tildebyte (Contributor)

> Yeah, I believe this is exactly that PR, based on the branch name fix-crash-on-MPS. I read that neither of you has access to Mx devices, so I thought it might be useful to know that the fix in this PR does not resolve the issue.
>
> If it's not helpful, apologies.

No worries! Feedback is always helpful. I wanted to set expectations for Mac users that things are in flux.

@lstein (Collaborator, Author) commented Aug 31, 2022

> Squash, please 😁
>
> Otherwise, LGTM 👍

I dunno why I get the excess commit messages. I think it's because I create a branch, make my edits, commit, and then get a warning that the branch is out of date with main and needs a merge. The funny thing is that I try to do a pull from origin before branching. Maybe it's the mixture of merging pull requests from the GitHub interface and working locally that is tripping me up.

Anyway, I've set the GitHub interface to do a --squash each time.

@lstein merged commit 4b560b5 into main on Aug 31, 2022
@lstein deleted the fix-crash-on-mps branch on August 31, 2022 20:59
@tildebyte (Contributor)

> mixture of merging pull requests from the GitHub interface and working locally that is tripping me up

Probably.

Doing a rebase or fast-forward against main is the best practice.

The only problem I have with always squashing everything in a PR is that there are plenty of times when you DO want multiple commits in history, to decouple changes that are related (and thus in the same PR) but don't depend on each other. It's not hard to imagine a scenario where you need to revert a single commit with a regression, but instead of doing that, you have to back out several commits' worth of work from a single PR, then re-commit the changes that aren't involved in the regression.

@tildebyte mentioned this pull request on Aug 31, 2022
@junukwon7 mentioned this pull request on Sep 1, 2022
@yudhanjaya

The issue is still here. Testing on an M1 Mac Pro yields the same error:

dream> a cat pharaoh dreaming of empire
Traceback (most recent call last):
  File "/Users/yudhanjaya/stable-diffusion/scripts/dream.py", line 543, in <module>
    main()
  File "/Users/yudhanjaya/stable-diffusion/scripts/dream.py", line 103, in main
    main_loop(t2i, opt.outdir, opt.prompt_as_dir, cmd_parser, infile)
  File "/Users/yudhanjaya/stable-diffusion/scripts/dream.py", line 230, in main_loop
    t2i.prompt2image(image_callback=image_writer, **vars(opt))
  File "/Users/yudhanjaya/stable-diffusion/ldm/simplet2i.py", line 282, in prompt2image
    torch.cuda.torch.cuda.reset_peak_memory_stats()
  File "/Users/yudhanjaya/miniconda3/envs/ldm/lib/python3.10/site-packages/torch/cuda/memory.py", line 260, in reset_peak_memory_stats
    return torch._C._cuda_resetPeakMemoryStats(device)
AttributeError: module 'torch._C' has no attribute '_cuda_resetPeakMemoryStats'

Commenting out the torch.cuda.torch.cuda.reset_peak_memory_stats() line gets it to run (same as before), but the output is again the same:


* Initialization done! Awaiting your command (-h for help, 'q' to quit)
dream> a cat pharaoh dreaming of empire
Generating:   0%|                                         | 0/1 [00:00<?, ?it/s]
"LayerNormKernelImpl" not implemented for 'Half'
Are you sure your system has an adequate NVIDIA GPU?
Usage stats:
   0 image(s) generated in 0.07s
   Max VRAM used for this generation: 0.00G
Outputs:

Changing the default device to cpu doesn't make a difference. It looks like an error is raised in LayerNormKernelImpl, and the error handler then catches it and prints the generic message.
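For reference, "LayerNormKernelImpl" not implemented for 'Half' is PyTorch reporting that a float16 (half-precision) tensor reached a kernel that has no half-precision implementation on that backend; the usual workaround on CPU, and on MPS at the time, was to run the model in full precision instead. A standalone illustration, not code from this repository:

import torch

layer = torch.nn.LayerNorm(8)
x = torch.randn(2, 8)

# float32 works on CPU:
print(layer(x).dtype)  # torch.float32

# float16 reproduces the error seen above:
# layer.half()(x.half())
# -> RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'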

@lstein (Collaborator, Author) commented Sep 1, 2022

> Doing a rebase or fast-forward against main is the best practice.

It definitely takes a lot more discipline to manage a fast-moving multi-collaborator project like this one than any of the projects I've been involved in before. I guess rebase -i and I are going to get to be good friends.

@tildebyte (Contributor)

> Doing a rebase or fast-forward against main is the best practice.
>
> It definitely takes a lot more discipline to manage a fast-moving multi-collaborator project like this one than any of the projects I've been involved in before. I guess rebase -i and I are going to get to be good friends.

May I humbly offer my personal cheatsheet? https://github.com/ben-alkov/_git_-Beyond-The-Basics/blob/master/git%20-%20beyond%20the%20basics.md#commit-magic

austinbrown34 pushed a commit to cognidesign/InvokeAI that referenced this pull request on Dec 30, 2022

* fix AttributeError crash when running on non-CUDA systems; closes issues invoke-ai#234 and invoke-ai#250
* although this prevents the dream.py script from crashing immediately on MPS systems, MPS support is still very much a work in progress