2024-08-14 15:14:17.991 [Info] User local requested 1 image with model 'flux1-dev-fp8.safetensors'...
2024-08-14 15:14:17.997 [Debug] [BackendHandler] Backend request #1 for model flux1-dev-fp8.safetensors, maxWait=7.00:00:00.
2024-08-14 15:14:18.002 [Debug] [BackendHandler] backend #0 will load a model: flux1-dev-fp8.safetensors, with 1 requests waiting for 0 seconds
2024-08-14 15:14:20.374 [Debug] [ComfyUI-0/STDERR] got prompt
2024-08-14 15:14:22.415 [Debug] [ComfyUI-0/STDERR] model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-08-14 15:14:22.604 [Debug] [ComfyUI-0/STDERR] model_type FLUX
2024-08-14 15:23:41.374 [Info] Creating new admin session 'local' for ::1
2024-08-14 15:23:43.451 [Debug] [Load ComfyUI Self-Starting #0] Got valid value set, will parse...
2024-08-14 15:23:43.451 [Debug] Comfy backend 0 using model folder format: forward slash / as no backslash was found
2024-08-14 15:23:43.451 [Debug] [Load ComfyUI Self-Starting #0] Done parsing value set.
2024-08-14 15:23:43.457 [Debug] Data refreshed!
2024-08-14 15:27:03.350 [Debug] [ComfyUI-0/STDERR] clip missing: ['text_projection.weight']
2024-08-14 15:27:04.088 [Debug] [ComfyUI-0/STDERR] Using pytorch attention in VAE
2024-08-14 15:27:04.105 [Debug] [ComfyUI-0/STDERR] Using pytorch attention in VAE
2024-08-14 15:27:20.491 [Debug] [ComfyUI-0/STDERR] Prompt executed in 780.11 seconds
2024-08-14 15:27:22.651 [Debug] [BackendHandler] backend #0 loaded model, returning to pool
2024-08-14 15:27:23.223 [Debug] [BackendHandler] Backend request #1 found correct model on #0
2024-08-14 15:27:23.224 [Debug] [BackendHandler] Backend request #1 finished.
2024-08-14 15:27:23.229 [Debug] [ComfyUI-0/STDERR] got prompt
2024-08-14 15:27:23.314 [Debug] [ComfyUI-0/STDERR] Requested to load FluxClipModel_
2024-08-14 15:27:23.314 [Debug] [ComfyUI-0/STDERR] Loading 1 new model
2024-08-14 15:27:26.384 [Debug] [ComfyUI-0/STDERR] F:\ai\SwarmUI\dlbackend\comfy\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
2024-08-14 15:27:26.384 [Debug] [ComfyUI-0/STDERR] out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
2024-08-14 15:27:27.913 [Debug] [ComfyUI-0/STDERR] Requested to load Flux
2024-08-14 15:27:27.913 [Debug] [ComfyUI-0/STDERR] Loading 1 new model
2024-08-14 15:27:30.500 [Debug] [ComfyUI-0/STDERR]
2024-08-14 15:27:33.129 [Debug] [ComfyUI-0/STDERR] 0%| | 0/20 [00:00