I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.
I am wondering if this is normal.
Below are the details of my work environment.
input image frames
- 10~60 (KSampler speed: 120~830 s/it)
checkpoint model
- wildcardTURBO_sdxl, anythingXL
LoRA model
- gyblistyle, cartoon, EnvyOil
(also tried without LoRA)
controlnet model
- depth, canny (used both)
KSampler(Advanced)
steps : 8
cfg : 2.5
sampler : dpm_2_ancestral, dpmpp_sde_gpu, dpmpp_2m, dpmpp_2m_sde_gpu / dpmpp_3m_sde_gpu
scheduler : karras
Python version: 3.10.11
VAE dtype: torch.bfloat16
Pytorch version: 2.1.2+cu121
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Platform: Windows10
My GPU utilization sits at only 0~4% (mostly around 1%) during the KSampler step.
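Given the near-zero GPU usage, a first sanity check worth running is confirming that this PyTorch build can actually reach the CUDA device from the same Python environment ComfyUI uses. A minimal sketch (the expected values in comments match the startup log above):

```python
import torch

# Confirm the installed PyTorch build sees the CUDA device that
# ComfyUI reports at startup.
print(torch.__version__)            # expected: 2.1.2+cu121
print(torch.cuda.is_available())    # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # NVIDIA GeForce RTX 3090
    x = torch.randn(8, 8, device="cuda")
    print((x @ x).device)           # cuda:0 -- the matmul ran on the GPU
```

If `torch.cuda.is_available()` prints False here, ComfyUI is silently running on the CPU regardless of the run_gpu batch file.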
Things I have tried
• Executed using the run_gpu batch file.
• Tried adding --lowvram, --normalvram, and --gpu-only to the run_gpu batch file.
• Received a PyTorch compilation error (video generation still succeeded). Found that PyTorch was set to use the CPU, so I downgraded Python and PyTorch to enable GPU usage, and the PyTorch error disappeared.
• Observed the following message in the cmd window when running ComfyUI:
Total VRAM 24575 MB, total RAM 130990 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
VAE dtype: torch.bfloat16
• Adjusted the sampler and steps to fit the model.
• Changed the existing SDXL-optimized workflow and model to SD 1.5.
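Beyond the steps above, one way to tell whether compute is silently falling back to the CPU (which would match the near-idle GPU during sampling) is a quick matmul timing comparison in the same Python environment. A rough sketch; on an RTX 3090 the CUDA timing should come out far lower than the CPU one:

```python
import time
import torch

def matmul_time(device: str, n: int = 2048, reps: int = 10) -> float:
    """Time `reps` chained matmuls of an n x n matrix on the given device."""
    x = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure timing covers actual GPU work
    start = time.perf_counter()
    for _ in range(reps):
        x = x @ x
        x = x / x.norm()  # renormalize so values do not overflow
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print("cpu :", matmul_time("cpu"))
if torch.cuda.is_available():
    print("cuda:", matmul_time("cuda"))
```

If the two timings are comparable, or the cuda branch never runs, the sampler is almost certainly executing on the CPU, which would explain speeds of hundreds of seconds per iteration.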
Changing these conditions affected the video quality, but the speed at the KSampler step did not improve at all (and the GPU stays almost idle throughout).
When applying a Pixar-style LoRA model to output a 3D-look video, it took about 12 hours to produce a 4-second clip!
I am wondering if something is wrong or if it is supposed to take this long…