NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
     query     : shape=(2, 8040, 8, 40) (torch.float16)
     key       : shape=(2, 8040, 8, 40) (torch.float16)
     value     : shape=(2, 8040, 8, 40) (torch.float16)
     attn_bias : <class 'NoneType'>
     p         : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
flshattF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
Time taken: 1.3 sec
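Every operator in the error above fails for the same root reason: the installed xFormers build has no CUDA kernels. A minimal diagnostic sketch (not tied to any specific versions, and safe to run even if the packages are missing) to confirm this before reinstalling anything:

```python
import importlib.util

# A minimal diagnostic sketch: confirm both packages are importable, then
# check whether the installed PyTorch build includes CUDA at all. If
# torch.version.cuda is None, reinstalling xFormers alone will not help.
status = {
    mod: importlib.util.find_spec(mod) is not None
    for mod in ("torch", "xformers")
}
for mod, installed in status.items():
    print(f"{mod}: {'installed' if installed else 'MISSING'}")

if status["torch"]:
    import torch
    # None here means a CPU-only PyTorch build.
    print("torch built against CUDA:", torch.version.cuda)
```

The full per-kernel report comes from `python -m xformers.info`, as the error message itself suggests.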
I have already updated everything:

    C:\ProgramData\anaconda3> conda update --all
    C:\ProgramData\anaconda3> conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
    C:\ProgramData\anaconda3> python -m pip install --upgrade pip
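Updating conda and PyTorch does not replace the xFormers package itself. A sketch of the usual fix, assuming the CUDA 12.1 PyTorch from the conda command above (the cu121 wheel index is PyTorch's official one; adjust it to match your CUDA version):

```shell
# Remove the CPU-only xFormers build, then install a wheel compiled
# against CUDA 12.1 (match this to the pytorch-cuda version installed above).
pip uninstall -y xformers
pip install -U xformers --index-url https://download.pytorch.org/whl/cu121
```

After reinstalling, `python -m xformers.info` should report the cutlassF/flshattF operators as available rather than "not built".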