Stable Diffusion DreamBooth: NotImplementedError from xFormers memory_efficient_attention
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 8040, 8, 40) (torch.float16)
     key         : shape=(2, 8040, 8, 40) (torch.float16)
     value       : shape=(2, 8040, 8, 40) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@…` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40
Time taken: 1.3 sec
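
For reference, the failure can be checked outside the webui with a minimal sketch (not a fix; it assumes torch and xformers import in the same environment). The shapes, dtype, attn_bias, and dropout value below are taken directly from the traceback:

    import torch
    import xformers.ops as xops

    # Quick sanity check that this Python sees a CUDA-enabled torch at all;
    # `python -m xformers.info` (as suggested in the error) lists which
    # operators (decoderF, cutlassF, ...) the installed xformers wheel was built with.
    print(torch.cuda.is_available(), torch.version.cuda)

    # Same inputs as in the traceback: (batch, seq_len, heads, head_dim) in fp16.
    q = torch.randn(2, 8040, 8, 40, dtype=torch.float16, device="cuda")
    k = torch.randn_like(q)
    v = torch.randn_like(q)

    # Raises the same NotImplementedError when no CUDA-built operator is available.
    out = xops.memory_efficient_attention(q, k, v, attn_bias=None, p=0.0)

If this sketch raises the same error, the installed xformers build has no CUDA kernels for this environment, which matches the repeated "xFormers wasn't build with CUDA support" lines above.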