How many GPUs are needed to run the inference code? #24
Comments
Yes, I ran inference on 3× A100, but three GPUs with ≥32 GB memory each (e.g. V100) should also work.
I tried single-GPU and dual-GPU setups, and today I tried four GPUs as well. The error message is as follows: warnings.warn(
If I run on a single GPU and change the launch command to a plain `python test_wcx.py`, the error message is as follows. Could you take a look at this error when you have time?
Your 4-GPU run failed in the SwinIR stage. Could you share the test image you used?
Is there a size requirement for the test image? I just grabbed a random image online; it is a 615x820x3 RGB image.
I just updated util_image.py and fixed this bug. Thanks a lot for the issue!
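The 615x820 image has dimensions that are not multiples of the window size SwinIR-style models expect, which is a common cause of shape errors at that stage. A minimal sketch of the kind of padding calculation such a fix typically involves (the multiple of 64 and the function name are assumptions for illustration, not the actual util_image.py change):

```python
import math

def pad_to_multiple(h, w, multiple=64):
    """Compute how many rows/columns of padding bring (h, w) up to
    the next multiple of `multiple`, so windowed attention tiles evenly.
    The value 64 is an assumed window/scale product, not taken from the repo."""
    pad_h = math.ceil(h / multiple) * multiple - h
    pad_w = math.ceil(w / multiple) * multiple - w
    return pad_h, pad_w

# For the 615x820 image from this thread: pad to 640x832.
print(pad_to_multiple(615, 820))  # (25, 12)
```

In practice the padding would be applied with reflection padding before the model and cropped off afterwards, so the output keeps the original resolution.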
I tried the latest version and it still errors out. The error message is as follows: warnings.warn(
This looks like a transformers version issue; give it a try.
It still fails: [2024-12-12 08:03:31,035] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 24603) of binary: /opt/conda/bin/python3
I can't tell what is wrong from this error. If convenient, could you add me on WeChat to discuss?
15529609856 is my phone number; you can find my WeChat with it. Thanks a lot!
When I run inference with multiple GPUs and with a single GPU, I get different errors. The inference code's default settings are:
llava_device = 'cuda:1'
t5llm_device = 'cuda:2'
Does this mean at least 3 GPUs are required? How many GPUs, and of what type, did the authors use to run the inference code?
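The device settings above place the main model, LLaVA, and the T5 LLM on three separate GPUs, but in principle the components can share cards when memory allows. A hypothetical helper sketching that fallback (the `assign_devices` name and the specific mapping are illustrative assumptions, not part of the repo):

```python
def assign_devices(n_gpus):
    """Map the three model components to devices for a given GPU count.
    The component names mirror the variables in the repo's test script;
    the share-one-card fallback is a workaround idea, not the official setup."""
    if n_gpus >= 3:
        # Default layout from the inference code: one component per card.
        return {'main': 'cuda:0', 'llava': 'cuda:1', 't5llm': 'cuda:2'}
    if n_gpus == 2:
        # Put both language models on the second card.
        return {'main': 'cuda:0', 'llava': 'cuda:1', 't5llm': 'cuda:1'}
    if n_gpus == 1:
        # Everything on one card; needs enough memory for all three models.
        return {'main': 'cuda:0', 'llava': 'cuda:0', 't5llm': 'cuda:0'}
    return {'main': 'cpu', 'llava': 'cpu', 't5llm': 'cpu'}
```

In the actual script this would be driven by `torch.cuda.device_count()`, and whether the single-card layout fits depends on the GPUs' memory (the maintainer's reply above suggests ≥32 GB per card for the three-card layout).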