Inference always crashes outright at stage 5, please help #79
Comments
Every time it reaches stage 5, after a few dozen seconds memory usage grows by about 2 GB, with more than 6 GB still free, and then the program simply crashes and exits without any error message.
I haven't run into this problem myself, but compared with stage 4, stage 5 adds the computation for the explicit target, which uses nvdiffrast again. You could check whether nvdiffrast is the culprit (for example, whether the EGL backend is supported; if not, you need to switch to the CUDA backend or the slower pytorch3d implementation).
How do I check nvdiffrast? What exactly should I do?
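For reference, a minimal way to sanity-check the two nvdiffrast backends mentioned above might look like the sketch below. This is only an illustrative test, not part of Unique3D: it assumes nvdiffrast 0.3+ (which provides both RasterizeGLContext and RasterizeCudaContext) and a CUDA-capable GPU. If only the GL/EGL context fails, switching the project to the CUDA backend (or the pytorch3d path) is the fix suggested above.

```python
# Hypothetical sanity check: can nvdiffrast create a context and rasterize one triangle?
# RasterizeGLContext uses the OpenGL/EGL backend; RasterizeCudaContext is the pure-CUDA fallback.
import torch
import nvdiffrast.torch as dr

def try_backend(name: str) -> None:
    try:
        ctx = dr.RasterizeGLContext() if name == "gl" else dr.RasterizeCudaContext()
        # One triangle in clip space, minibatch of 1; triangle indices must be int32 on the GPU.
        pos = torch.tensor([[[-0.5, -0.5, 0.0, 1.0],
                             [ 0.5, -0.5, 0.0, 1.0],
                             [ 0.0,  0.5, 0.0, 1.0]]], device="cuda")
        tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device="cuda")
        rast, _ = dr.rasterize(ctx, pos, tri, resolution=[256, 256])
        # Channel 3 of the rasterizer output is triangle_id + 1, so > 0 means a covered pixel.
        print(f"{name} backend OK, covered pixels: {int((rast[..., 3] > 0).sum())}")
    except Exception as exc:
        print(f"{name} backend failed: {exc}")

if __name__ == "__main__":
    try_backend("gl")    # if this raises or hangs, the EGL/OpenGL backend is the likely culprit
    try_backend("cuda")  # if this works, switching to the CUDA backend may avoid the crash
```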
After fiddling with it for another whole evening, I tried running it again from PyCharm and it finally went through.
The console log is as follows:
(venv) PS E:\WSL\2M\Unique3D> python app/gradio_local.py --port 7860
Warning! extra parameter in cli is not verified, may cause erros.
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 29.76it/s]
You have disabled the safety checker for <class 'custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2mvimg.StableDiffusionImage2MVCustomPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
Warning! extra parameter in cli is not verified, may cause erros.
E:\WSL\2M\Unique3D\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Loading pipeline components...: 100%|██████████████████████████████████████████████████| 5/5 [00:00<00:00, 2175.92it/s]
You have disabled the safety checker for <class 'custum_3d_diffusion.custum_pipeline.unifield_pipeline_img2img.StableDiffusionImageCustomPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at huggingface/diffusers#254 .
E:\WSL\2M\Unique3D\venv\lib\site-packages\torch\utils\cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:04<00:00, 1.33it/s]
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Pipelines loaded with `dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Loading pipeline components...: 100%|██████████████████████████████████████████████████| 6/6 [00:00<00:00, 5932.54it/s]
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
E:\WSL\2M\Unique3D\venv\lib\site-packages\transformers\models\clip\modeling_clip.py:480: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
attn_output = torch.nn.functional.scaled_dot_product_attention(
0%| | 0/30 [00:00<?, ?it/s]Warning! condition_latents is not None, but self_attn_ref is not enabled! This warning will only be raised once.
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.82it/s]
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:10<00:00, 1.03s/it]
100%|██████████████████████████████████████████████████████████████████████████████████| 30/30 [00:30<00:00, 1.02s/it]
0%| | 0/200 [00:00<?, ?it/s]E:\WSL\2M\Unique3D\venv\lib\site-packages\torch\utils\cpp_extension.py:1967: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
warnings.warn(
E:\WSL\2M\Unique3D.\mesh_reconstruction\remesh.py:354: UserWarning: Using torch.cross without specifying the dim arg is deprecated.
Please either pass the dim explicitly or simply use torch.linalg.cross.
The default value of dim will change to agree with that of linalg.cross in a future release. (Triggered internally at ..\aten\src\ATen\native\Cross.cpp:66.)
n = torch.cross(e1,cl) + torch.cross(cr,e1) #sum of old normal vectors
100%|████████████████████████████████████████████████████████████████████████████████| 200/200 [00:07<00:00, 25.91it/s]
0%| | 0/100 [00:00<?, ?it/s]
(venv) PS E:\WSL\2M\Unique3D>