Hi,
I installed and downloaded everything: Python 3.12 and PyTorch 2.4.0+cu121 (CUDA 12.1).
I'm running the basic Jupyter notebook.
At the prediction step I get:
"c:\Python312\segment-anything-2\sam2\modeling\backbones\hieradet.py:68: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
x = F.scaled_dot_product_attention("
Earlier, at the
"from sam2.sam2_image_predictor import SAM2ImagePredictor"
step, I only got a warning:
"c:\Python312\segment-anything-2\sam2\modeling\sam\transformer.py:23: UserWarning: Flash Attention is disabled as it requires a GPU with Ampere (8.0) CUDA capability.
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()"
So is there a setting I can use to bypass this and NOT use Flash Attention?
Thanks
@Dashenboy This is mainly a warning that your GPU does not support Flash Attention, so PyTorch will fall back to other scaled dot-product attention kernels. It doesn't need fixing, and you can still use SAM 2 in this case.
More details: Flash Attention is generally faster but is only fully supported on GPUs with CUDA compute capability >= 8.0. If your GPU has a lower compute capability (you can check it at https://developer.nvidia.com/cuda-gpus), this warning will be printed to tell you that Flash Attention is not available for you (but you can ignore it).
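For reference, here is a minimal sketch (not from the SAM 2 codebase) of two ways to keep the flash-attention path from producing this warning, assuming PyTorch >= 2.3 where torch.nn.attention.sdpa_kernel is available. You can either filter the UserWarning, or wrap your predictor calls in an sdpa_kernel context that only allows non-flash backends:

```python
import warnings
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel

# Option 1: silence the warning; scaled_dot_product_attention will still
# silently fall back to the memory-efficient or math kernel on this GPU.
warnings.filterwarnings(
    "ignore", message=".*not compiled with flash attention.*"
)

# Option 2: explicitly restrict SDPA to non-flash kernels so the flash path
# is never attempted. Any code run inside the context (e.g. a SAM 2
# predictor call) uses only the listed backends.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
with sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]):
    out = torch.nn.functional.scaled_dot_product_attention(q, k, v)
```

In practice you would wrap the SAM2ImagePredictor prediction call in the sdpa_kernel context (or just ignore the warning, since the fallback happens automatically).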