[root@pytorch-339829066-master-0 QA-CLIP-main]# python3
Python 3.11.4 (main, Nov  1 2023, 16:06:27) [GCC 8.5.0 20210514 (TencentOS 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import clip as clip
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/dockerdata/adams_workspace/QA-CLIP-main/clip/__init__.py", line 4, in <module>
    from .model import convert_state_dict
  File "/dockerdata/adams_workspace/QA-CLIP-main/clip/model.py", line 16, in <module>
    FlashMHA = importlib.import_module('flash_attn.flash_attention').FlashMHA
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'flash_attn.flash_attention'
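
This error usually means the installed flash-attn is a 2.x release: the 2.x series restructured the package and no longer ships a flash_attn.flash_attention module, while QA-CLIP's clip/model.py expects the 1.x layout. Installing a 1.x build (for example, pip install "flash-attn<2") should restore the expected module path. Alternatively, a minimal sketch of a guarded import for clip/model.py, assuming QA-CLIP only needs FlashMHA when FlashAttention is actually enabled:

import importlib

try:
    # flash-attn 1.x exposes FlashMHA under flash_attn.flash_attention;
    # that module was removed in 2.x, which raises the error above.
    FlashMHA = importlib.import_module('flash_attn.flash_attention').FlashMHA
except ImportError:
    # Assumption: the rest of the model code falls back to standard
    # attention when FlashMHA is None; verify before relying on this.
    FlashMHA = None

Whether the None fallback is safe depends on how the surrounding model code uses FlashMHA; pinning flash-attn to a 1.x version is the more conservative fix.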