2025-07-17T01:04:02.321349296Z ==========
2025-07-17T01:04:02.321356746Z == CUDA ==
2025-07-17T01:04:02.321361796Z ==========
2025-07-17T01:04:02.325039360Z CUDA Version 12.6.3
2025-07-17T01:04:02.326216403Z Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2025-07-17T01:04:02.329411589Z This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2025-07-17T01:04:02.329419439Z By pulling and using the container, you accept the terms and conditions of this license:
2025-07-17T01:04:02.329426468Z https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2025-07-17T01:04:02.329438488Z A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
2025-07-17T01:04:02.351417938Z worker-comfyui - ComfyUI-Manager network_mode set to 'offline' in /comfyui/user/default/ComfyUI-Manager/config.ini
2025-07-17T01:04:02.351705381Z worker-comfyui: Starting ComfyUI
2025-07-17T01:04:02.351831068Z worker-comfyui: Starting RunPod Handler
2025-07-17T01:04:02.398346428Z Adding extra search path checkpoints /runpod-volume/models/checkpoints
2025-07-17T01:04:02.398364918Z Adding extra search path clip /runpod-volume/models/clip
2025-07-17T01:04:02.398367538Z Adding extra search path clip_vision /runpod-volume/models/clip_vision
2025-07-17T01:04:02.398369438Z Adding extra search path configs /runpod-volume/models/configs
2025-07-17T01:04:02.398371458Z Adding extra search path controlnet /runpod-volume/models/controlnet
2025-07-17T01:04:02.398373308Z Adding extra search path embeddings /runpod-volume/models/embeddings
2025-07-17T01:04:02.398375128Z Adding extra search path loras /runpod-volume/models/loras
2025-07-17T01:04:02.398404017Z Adding extra search path upscale_models /runpod-volume/models/upscale_models
2025-07-17T01:04:02.398424886Z Adding extra search path vae /runpod-volume/models/vae
2025-07-17T01:04:02.398429086Z Adding extra search path unet /runpod-volume/models/unet
2025-07-17T01:04:03.035531074Z [START] Security scan
2025-07-17T01:04:03.035558513Z [DONE] Security scan
2025-07-17T01:04:03.035564393Z Popen(['git', 'version'], cwd=/, stdin=None, shell=False, universal_newlines=False)
2025-07-17T01:04:03.037650065Z Popen(['git', 'version'], cwd=/, stdin=None, shell=False, universal_newlines=False)
2025-07-17T01:04:03.043730054Z ## ComfyUI-Manager: installing dependencies done.
2025-07-17T01:04:03.043757263Z ** ComfyUI startup time: 2025-07-17 01:04:03.043
2025-07-17T01:04:03.043762903Z ** Platform: Linux
2025-07-17T01:04:03.043768793Z ** Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0]
2025-07-17T01:04:03.043773793Z ** Python executable: /opt/venv/bin/python
2025-07-17T01:04:03.043780933Z ** ComfyUI Path: /comfyui
2025-07-17T01:04:03.043787562Z ** ComfyUI Base Folder Path: /comfyui
2025-07-17T01:04:03.043856781Z ** User directory: /comfyui/user
2025-07-17T01:04:03.043926029Z ** ComfyUI-Manager config path: /comfyui/user/default/ComfyUI-Manager/config.ini
2025-07-17T01:04:03.043950719Z ** Log path: /comfyui/user/comfyui.log
2025-07-17T01:04:03.541775860Z Prestartup times for custom nodes:
2025-07-17T01:04:03.541778950Z 1.1 seconds: /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T01:04:04.529697073Z worker-comfyui - Starting handler...
2025-07-17T01:04:04.529723972Z --- Starting Serverless Worker | Version 1.7.9 ---
2025-07-17T01:04:04.634004361Z Checkpoint files will always be loaded safely.
2025-07-17T01:04:04.839569168Z {"requestId": null, "message": "Jobs in queue: 1", "level": "INFO"}
2025-07-17T01:04:04.839618927Z {"requestId": null, "message": "Jobs in progress: 1", "level": "INFO"}
2025-07-17T01:04:04.839625446Z {"requestId": "sync-04cffaf8-23a7-4e2b-8f43-9c96d0343240-e1", "message": "Started.", "level": "INFO"}
2025-07-17T01:04:04.839630497Z worker-comfyui - Checking API server at http://127.0.0.1:8188/...
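The "Adding extra search path" entries above are emitted when ComfyUI reads an extra model paths YAML file at startup. A minimal sketch of such a config, with the mount point and folders taken from the log (the file name `extra_model_paths.yaml` and the top-level key are assumptions, not shown in the log):

```yaml
# Hypothetical extra_model_paths.yaml mapping a RunPod network volume
# into ComfyUI's model search paths (folders mirror the log entries above)
runpod:
    base_path: /runpod-volume
    checkpoints: models/checkpoints
    clip: models/clip
    clip_vision: models/clip_vision
    configs: models/configs
    controlnet: models/controlnet
    embeddings: models/embeddings
    loras: models/loras
    upscale_models: models/upscale_models
    vae: models/vae
    unet: models/unet
```

Each entry makes ComfyUI search the volume path in addition to its baked-in `/comfyui/models/...` folders, which is why the "recursive file list" lines later in the log scan both locations.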
2025-07-17T01:04:04.867394852Z /opt/venv/lib/python3.12/site-packages/torch/cuda/__init__.py:287: UserWarning:
2025-07-17T01:04:04.867448591Z NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
2025-07-17T01:04:04.867478290Z The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
2025-07-17T01:04:04.867484790Z If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
2025-07-17T01:04:04.867496069Z warnings.warn(
2025-07-17T01:04:04.996648871Z Total VRAM 32120 MB, total RAM 1160740 MB
2025-07-17T01:04:04.996675990Z pytorch version: 2.7.0+cu126
2025-07-17T01:04:04.997069771Z Set vram state to: NORMAL_VRAM
2025-07-17T01:04:04.997371454Z Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-07-17T01:04:06.161218592Z Using pytorch attention
2025-07-17T01:04:07.294241155Z Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0]
2025-07-17T01:04:07.294271874Z ComfyUI version: 0.3.30
2025-07-17T01:04:07.294792482Z Using selector: EpollSelector
2025-07-17T01:04:07.297230365Z ComfyUI frontend version: 1.17.11
2025-07-17T01:04:07.297779723Z [Prompt Server] web root: /opt/venv/lib/python3.12/site-packages/comfyui_frontend_package/static
2025-07-17T01:04:07.297921120Z Trying to load custom node /comfyui/comfy_extras/nodes_latent.py
2025-07-17T01:04:07.302336637Z Trying to load custom node /comfyui/comfy_extras/nodes_hypernetwork.py
2025-07-17T01:04:07.303428522Z Trying to load custom node /comfyui/comfy_extras/nodes_upscale_model.py
2025-07-17T01:04:07.345684060Z Trying to load custom node /comfyui/comfy_extras/nodes_post_processing.py
2025-07-17T01:04:07.345966054Z Trying to load custom node /comfyui/comfy_extras/nodes_mask.py
2025-07-17T01:04:07.349603000Z Trying to load custom node /comfyui/comfy_extras/nodes_compositing.py
2025-07-17T01:04:07.351267331Z Trying to load custom node /comfyui/comfy_extras/nodes_rebatch.py
2025-07-17T01:04:07.352466323Z Trying to load custom node /comfyui/comfy_extras/nodes_model_merging.py
2025-07-17T01:04:07.354853748Z Trying to load custom node /comfyui/comfy_extras/nodes_tomesd.py
2025-07-17T01:04:07.356280454Z Trying to load custom node /comfyui/comfy_extras/nodes_clip_sdxl.py
2025-07-17T01:04:07.357039507Z Trying to load custom node /comfyui/comfy_extras/nodes_canny.py
2025-07-17T01:04:07.617823232Z Trying to load custom node /comfyui/comfy_extras/nodes_freelunch.py
2025-07-17T01:04:07.619580081Z Trying to load custom node /comfyui/comfy_extras/nodes_custom_sampler.py
2025-07-17T01:04:07.625209230Z Trying to load custom node /comfyui/comfy_extras/nodes_hypertile.py
2025-07-17T01:04:07.626096030Z Trying to load custom node /comfyui/comfy_extras/nodes_model_advanced.py
2025-07-17T01:04:07.628229590Z Trying to load custom node /comfyui/comfy_extras/nodes_model_downscale.py
2025-07-17T01:04:07.628917504Z Trying to load custom node /comfyui/comfy_extras/nodes_images.py
2025-07-17T01:04:07.630513377Z Trying to load custom node /comfyui/comfy_extras/nodes_video_model.py
2025-07-17T01:04:07.632244277Z Trying to load custom node /comfyui/comfy_extras/nodes_sag.py
2025-07-17T01:04:07.633599226Z Trying to load custom node /comfyui/comfy_extras/nodes_perpneg.py
2025-07-17T01:04:07.634593283Z Trying to load custom node /comfyui/comfy_extras/nodes_stable3d.py
2025-07-17T01:04:07.635964381Z Trying to load custom node /comfyui/comfy_extras/nodes_sdupscale.py
2025-07-17T01:04:07.636483559Z Trying to load custom node /comfyui/comfy_extras/nodes_photomaker.py
2025-07-17T01:04:07.637922565Z Trying to load custom node /comfyui/comfy_extras/nodes_pixart.py
2025-07-17T01:04:07.638369915Z Trying to load custom node /comfyui/comfy_extras/nodes_cond.py
2025-07-17T01:04:07.638811925Z Trying to load custom node /comfyui/comfy_extras/nodes_morphology.py
2025-07-17T01:04:07.639646865Z Trying to load custom node /comfyui/comfy_extras/nodes_stable_cascade.py
2025-07-17T01:04:07.640645972Z Trying to load custom node /comfyui/comfy_extras/nodes_differential_diffusion.py
2025-07-17T01:04:07.641137641Z Trying to load custom node /comfyui/comfy_extras/nodes_ip2p.py
2025-07-17T01:04:07.641672618Z Trying to load custom node /comfyui/comfy_extras/nodes_model_merging_model_specific.py
2025-07-17T01:04:07.643134504Z Trying to load custom node /comfyui/comfy_extras/nodes_pag.py
2025-07-17T01:04:07.643714481Z Trying to load custom node /comfyui/comfy_extras/nodes_align_your_steps.py
2025-07-17T01:04:07.644335746Z Trying to load custom node /comfyui/comfy_extras/nodes_attention_multiply.py
2025-07-17T01:04:07.645544438Z Trying to load custom node /comfyui/comfy_extras/nodes_advanced_samplers.py
2025-07-17T01:04:07.646566835Z Trying to load custom node /comfyui/comfy_extras/nodes_webcam.py
2025-07-17T01:04:07.647001415Z Trying to load custom node /comfyui/comfy_extras/nodes_audio.py
2025-07-17T01:04:07.659423316Z Loading FFmpeg6
2025-07-17T01:04:07.733674272Z Successfully loaded FFmpeg6
2025-07-17T01:04:07.758076136Z Trying to load custom node /comfyui/comfy_extras/nodes_sd3.py
2025-07-17T01:04:07.760430661Z Trying to load custom node /comfyui/comfy_extras/nodes_gits.py
2025-07-17T01:04:07.767102146Z Trying to load custom node /comfyui/comfy_extras/nodes_controlnet.py
2025-07-17T01:04:07.767792290Z Trying to load custom node /comfyui/comfy_extras/nodes_hunyuan.py
2025-07-17T01:04:07.768946193Z Trying to load custom node /comfyui/comfy_extras/nodes_flux.py
2025-07-17T01:04:07.769524240Z Trying to load custom node /comfyui/comfy_extras/nodes_lora_extract.py
2025-07-17T01:04:07.770693033Z Trying to load custom node /comfyui/comfy_extras/nodes_torch_compile.py
2025-07-17T01:04:07.771100733Z Trying to load custom node /comfyui/comfy_extras/nodes_mochi.py
2025-07-17T01:04:07.771570762Z Trying to load custom node /comfyui/comfy_extras/nodes_slg.py
2025-07-17T01:04:07.771743928Z Trying to load custom node /comfyui/comfy_extras/nodes_mahiro.py
2025-07-17T01:04:07.772248507Z Trying to load custom node /comfyui/comfy_extras/nodes_lt.py
2025-07-17T01:04:07.775172799Z Trying to load custom node /comfyui/comfy_extras/nodes_hooks.py
2025-07-17T01:04:07.779087458Z Trying to load custom node /comfyui/comfy_extras/nodes_load_3d.py
2025-07-17T01:04:07.780202832Z Trying to load custom node /comfyui/comfy_extras/nodes_cosmos.py
2025-07-17T01:04:07.781151890Z Trying to load custom node /comfyui/comfy_extras/nodes_video.py
2025-07-17T01:04:07.782003050Z Trying to load custom node /comfyui/comfy_extras/nodes_lumina2.py
2025-07-17T01:04:07.782885570Z Trying to load custom node /comfyui/comfy_extras/nodes_wan.py
2025-07-17T01:04:07.785440770Z Trying to load custom node /comfyui/comfy_extras/nodes_lotus.py
2025-07-17T01:04:07.788660006Z Trying to load custom node /comfyui/comfy_extras/nodes_hunyuan3d.py
2025-07-17T01:04:07.792268342Z Trying to load custom node /comfyui/comfy_extras/nodes_primitive.py
2025-07-17T01:04:07.792999315Z Trying to load custom node /comfyui/comfy_extras/nodes_cfg.py
2025-07-17T01:04:07.793504943Z Trying to load custom node /comfyui/comfy_extras/nodes_optimalsteps.py
2025-07-17T01:04:07.794103549Z Trying to load custom node /comfyui/comfy_extras/nodes_hidream.py
2025-07-17T01:04:07.794718725Z Trying to load custom node /comfyui/comfy_extras/nodes_fresca.py
2025-07-17T01:04:07.795425878Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_api.py
2025-07-17T01:04:07.865629319Z Trying to load custom node /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T01:04:07.879090816Z ### Loading: ComfyUI-Manager (V3.32.5)
2025-07-17T01:04:07.879371429Z [ComfyUI-Manager] network_mode: offline
2025-07-17T01:04:07.882517127Z Popen(['git', 'rev-list', 'HEAD', '--'], cwd=/comfyui, stdin=None, shell=False, universal_newlines=False)
2025-07-17T01:04:07.904218003Z Popen(['git', 'cat-file', '--batch-check'], cwd=/comfyui, stdin=, shell=False, universal_newlines=False)
2025-07-17T01:04:07.907374289Z Popen(['git', 'cat-file', '--batch'], cwd=/comfyui, stdin=, shell=False, universal_newlines=False)
2025-07-17T01:04:07.910719081Z ### ComfyUI Revision: 3389 [a97f2f85] *DETACHED | Released on '2025-04-24'
2025-07-17T01:04:07.912603238Z Using selector: EpollSelector
2025-07-17T01:04:07.913214944Z [ComfyUI-Manager] All startup tasks have been completed.
2025-07-17T01:04:07.916712692Z Trying to load custom node /comfyui/custom_nodes/websocket_image_save.py
2025-07-17T01:04:07.917336298Z Trying to load custom node /comfyui/custom_nodes/comfyui-hfloader
2025-07-17T01:04:07.920075304Z Import times for custom nodes:
2025-07-17T01:04:07.920098294Z 0.0 seconds: /comfyui/custom_nodes/websocket_image_save.py
2025-07-17T01:04:07.920104064Z 0.0 seconds: /comfyui/custom_nodes/comfyui-hfloader
2025-07-17T01:04:07.920108594Z 0.1 seconds: /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T01:04:07.925292203Z Starting server
2025-07-17T01:04:07.925562227Z To see the GUI go to: http://127.0.0.1:8188
2025-07-17T01:04:07.973585182Z worker-comfyui - API is reachable
2025-07-17T01:04:07.973616181Z worker-comfyui - Connecting to websocket: ws://127.0.0.1:8188/ws?clientId=4d771806-686a-4c9b-8ced-84b5ba46697e
2025-07-17T01:04:07.975497168Z worker-comfyui - Websocket connected
2025-07-17T01:04:07.976808157Z got prompt
2025-07-17T01:04:07.977282606Z recursive file list on directory /comfyui/models/vae
2025-07-17T01:04:07.977356125Z found 1 files
2025-07-17T01:04:07.978650364Z recursive file list on directory /runpod-volume/models/vae
2025-07-17T01:04:07.980262097Z found 1 files
2025-07-17T01:04:07.980321925Z recursive file list on directory /comfyui/models/vae_approx
2025-07-17T01:04:07.980369164Z found 1 files
2025-07-17T01:04:07.980543230Z recursive file list on directory /comfyui/models/loras
2025-07-17T01:04:07.980587059Z found 1 files
2025-07-17T01:04:07.981037579Z recursive file list on directory /runpod-volume/models/loras
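The UserWarning above fires because this PyTorch wheel (2.7.0+cu126) was not compiled for the RTX 5090's sm_120 compute capability. At import time PyTorch compares the device's capability (cf. `torch.cuda.get_device_capability`) against the architectures baked into the wheel (cf. `torch.cuda.get_arch_list`). A standalone sketch of that comparison, with the arch list copied from the warning (the real check also accounts for PTX forward compatibility, which this sketch omits):

```python
def capability_supported(device_cap: tuple[int, int], build_archs: list[str]) -> bool:
    """Return True if a GPU compute capability (major, minor) is covered by
    the sm_* architectures a PyTorch wheel was compiled for."""
    major, minor = device_cap
    return f"sm_{major}{minor}" in build_archs

# Arch list reported in the warning above for the torch 2.7.0+cu126 wheel
ARCHS = ["sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86", "sm_90"]

print(capability_supported((12, 0), ARCHS))  # RTX 5090 is sm_120 -> False
print(capability_supported((9, 0), ARCHS))   # sm_90 -> True
```

Since the run proceeds anyway, the kernels presumably fall back to whatever the driver can JIT; the proper fix is installing a wheel built with sm_120 support (at the time, a CUDA 12.8 build).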
2025-07-17T01:04:07.982651451Z found 1 files
2025-07-17T01:04:07.982724410Z recursive file list on directory /comfyui/models/unet
2025-07-17T01:04:07.982777628Z found 1 files
2025-07-17T01:04:07.982823387Z recursive file list on directory /comfyui/models/diffusion_models
2025-07-17T01:04:07.982862346Z found 1 files
2025-07-17T01:04:07.983239798Z recursive file list on directory /runpod-volume/models/unet
2025-07-17T01:04:07.984368562Z found 1 files
2025-07-17T01:04:07.984447220Z recursive file list on directory /comfyui/models/text_encoders
2025-07-17T01:04:07.984495479Z found 1 files
2025-07-17T01:04:07.984535638Z recursive file list on directory /comfyui/models/clip
2025-07-17T01:04:07.984575517Z found 1 files
2025-07-17T01:04:07.984848140Z recursive file list on directory /runpod-volume/models/clip
2025-07-17T01:04:07.985845657Z found 2 files
2025-07-17T01:04:07.987250495Z worker-comfyui - Queued workflow with ID: 684edbfa-457f-4f2d-b7a5-5266f2f69fef
2025-07-17T01:04:07.987253995Z worker-comfyui - Waiting for workflow execution (684edbfa-457f-4f2d-b7a5-5266f2f69fef)...
2025-07-17T01:04:07.987427420Z worker-comfyui - Status update: 0 items remaining in queue
2025-07-17T01:04:07.987508569Z worker-comfyui - Status update: 1 items remaining in queue
2025-07-17T01:04:07.987577237Z worker-comfyui - Status update: 1 items remaining in queue
2025-07-17T01:04:08.119588602Z Using pytorch attention in VAE
2025-07-17T01:04:08.120612888Z Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
2025-07-17T01:04:08.123298066Z Using pytorch attention in VAE
2025-07-17T01:04:09.102773264Z Model doesn't have a device attribute.
2025-07-17T01:04:09.108461382Z VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-07-17T01:04:09.807680518Z model weight dtype torch.bfloat16, manual cast: None
2025-07-17T01:04:09.808409171Z model_type FLUX
2025-07-17T01:04:09.808429640Z adm 0
2025-07-17T01:04:19.137608305Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T01:04:29.147556192Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T01:04:39.157569840Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T01:04:43.780604722Z Model doesn't have a device attribute.
2025-07-17T01:04:43.783307749Z CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-07-17T01:04:51.351659706Z clip unexpected: ['encoder.embed_tokens.weight']
2025-07-17T01:04:52.068572961Z clip missing: ['text_projection.weight']
2025-07-17T01:04:53.152269759Z Starting new HTTPS connection (1): huggingface.co:443
2025-07-17T01:04:53.228825502Z https://huggingface.co:443 "HEAD /salexes/artur-test-6-lora/resolve/main/artur-test-6-lora-test.safetensors HTTP/1.1" 302 0
2025-07-17T01:04:53.230932612Z Attempting to acquire lock 130038793313728 on /comfyui/models/hf_cache_dir/.locks/models--salexes--artur-test-6-lora/76237d53542c15e104dacd8101c094912e8d291a0d229244e412eb1f33fa5d6f.lock
2025-07-17T01:04:53.231097989Z Lock 130038793313728 acquired on /comfyui/models/hf_cache_dir/.locks/models--salexes--artur-test-6-lora/76237d53542c15e104dacd8101c094912e8d291a0d229244e412eb1f33fa5d6f.lock
2025-07-17T01:04:53.232681552Z Starting new HTTPS connection (1): cdn-lfs-us-1.hf.co:443
2025-07-17T01:04:53.274926251Z https://cdn-lfs-us-1.hf.co:443 "GET /repos/c0/39/c0394ecb98a7df9a55c0ab3212e200c8370651821fdcfedb3344512a7f08f3ef/76237d53542c15e104dacd8101c094912e8d291a0d229244e412eb1f33fa5d6f?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27artur-test-6-lora-test.safetensors%3B+filename%3D%22artur-test-6-lora-test.safetensors%22%3B&Expires=1752717893&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTc1MjcxNzg5M319LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy11cy0xLmhmLmNvL3JlcG9zL2MwLzM5L2MwMzk0ZWNiOThhN2RmOWE1NWMwYWIzMjEyZTIwMGM4MzcwNjUxODIxZmRjZmVkYjMzNDQ1MTJhN2YwOGYzZWYvNzYyMzdkNTM1NDJjMTVlMTA0ZGFjZDgxMDFjMDk0OTEyZThkMjkxYTBkMjI5MjQ0ZTQxMmViMWYzM2ZhNWQ2Zj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoifV19&Signature=Q3oxFUNaueIMSL5kT38Ka57PChG8lN7w5xnJayNmA7P28VYDNI6Js6C2ozPcY5eXcKXLmFMEuI3g~Xed6xykYv2KAJhcPkj3gThql5mjk7FNtJ5X4n~eiOUZArozeEg9mZLNcXW7TSl7TJioBAXsTfS4LXIUgsw0TUSx7NwPa1BPLFA61H4QbQS2t~hutlLo~mO95Gdgx0dR0VnCAjpB6Gvz22Vl7K4pStnZ3-NJ4d384jElIeWuQQpOXD8gyFPGllVJSBLxerX3bFCL4SLRpvqGBLq4mUg2jfIKtMdd~Sbw4cs4A1hXlWGqDX8Q~qG5HrBZSiP0DPrGyoHaKNLltQ__&Key-Pair-Id=K24J24Z295AEI9 HTTP/1.1" 200 171969920
2025-07-17T01:04:54.126492029Z Attempting to release lock 130038793313728 on /comfyui/models/hf_cache_dir/.locks/models--salexes--artur-test-6-lora/76237d53542c15e104dacd8101c094912e8d291a0d229244e412eb1f33fa5d6f.lock
2025-07-17T01:04:54.126590037Z Lock 130038793313728 released on /comfyui/models/hf_cache_dir/.locks/models--salexes--artur-test-6-lora/76237d53542c15e104dacd8101c094912e8d291a0d229244e412eb1f33fa5d6f.lock
2025-07-17T01:04:54.126828081Z Loaded Lora from /comfyui/models/hf_cache_dir/models--salexes--artur-test-6-lora/snapshots/748b11aeb3716412fbaaae21f5b21f4c7c8462cf/artur-test-6-lora-test.safetensors
2025-07-17T01:04:54.205089384Z Token indices sequence length is longer than the specified maximum sequence length for this model (129 > 77).
Running this sequence through the model will result in indexing errors
2025-07-17T01:04:54.206979190Z Requested to load FluxClipModel_
2025-07-17T01:04:54.211811888Z lowvram: loaded module regularly t5xxl.transformer.shared Embedding(32128, 4096)
2025-07-17T01:04:54.211838267Z lowvram: loaded module regularly clip_l.transformer.text_model.embeddings.token_embedding Embedding(49408, 768)
2025-07-17T01:04:54.211893276Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.211916336Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.211936605Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.211970594Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.211982574Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212035483Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212061882Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212086192Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212149330Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212171520Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212200589Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212220939Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212231788Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212254108Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212272777Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212290537Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212307697Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212384555Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212405004Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212417484Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212434923Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212472143Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212479902Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212498492Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212516792Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212548441Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212566411Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212609530Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212630849Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212669058Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212708187Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212732377Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212766116Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212796995Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212825734Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212854434Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212890903Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.212917732Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212948492Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.212977031Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213069229Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213082968Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213091788Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213115178Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213148337Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213185216Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213226305Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213268004Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213299483Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213336383Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213356592Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213394891Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213418551Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213454630Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213479689Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213524678Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213567727Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213608786Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213643975Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213709904Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213720724Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213728053Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213799402Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213807932Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213812602Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213818782Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213866450Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213873180Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213903580Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213961638Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T01:04:54.213967168Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.213987687Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T01:04:54.214039466Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214068536Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214074116Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214133664Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214145944Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214168423Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214195513Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214240962Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214258811Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214286140Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214340539Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214390698Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214394508Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214437637Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214509015Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214514505Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214537285Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214569584Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214602883Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214614803Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214662492Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214680511Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214710891Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214725830Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T01:04:54.214761989Z lowvram: loaded module regularly
t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.214786929Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.214895566Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.214921566Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.214954865Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.214969855Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215008214Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215045043Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215119561Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215128911Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215162220Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215212029Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215257338Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215267588Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215292987Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215340166Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215389195Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215418044Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215431754Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215491873Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215530932Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215644179Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215666739Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215673028Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215680038Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215717337Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215726277Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215778336Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215792596Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215807735Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215900333Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215921083Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215925722Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215930022Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215951952Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215960092Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215967761Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.215987011Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216009281Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216024160Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216057190Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216072819Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216099229Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216138728Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216146727Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216157787Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216188116Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216203176Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216240875Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216262265Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216304414Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216334013Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216364112Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216371332Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216413341Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216418481Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216434741Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216469460Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216478060Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216520759Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216569158Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216638676Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216684655Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216727594Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216776873Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216816162Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216851661Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216887430Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216938999Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.216983078Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.217053646Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.217084506Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T01:04:54.217131005Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217183693Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217244202Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217283431Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217353049Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217424748Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.mlp.fc1 
Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217495686Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217540365Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217592354Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217642023Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217698481Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217750380Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T01:04:54.217798209Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.217842558Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.217875507Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.217929316Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.217968165Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218002904Z lowvram: 
loaded module regularly clip_l.transformer.text_model.encoder.layers.4.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218049633Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218093402Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218137891Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218170290Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218194440Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218239829Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T01:04:54.218272658Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218301777Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218326847Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218366736Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218413125Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.8.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218447314Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218491483Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218516722Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218555672Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218612150Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218632130Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218666959Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218701698Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218729797Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218802776Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218847835Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.6.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218909873Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.218950822Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219004871Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219028230Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219067860Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219116038Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219176657Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219248845Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219263275Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219328903Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219372632Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.3.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219403072Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219434491Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219506829Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219515119Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219525129Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219569308Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219616437Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219669306Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219708755Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219728744Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219769303Z lowvram: loaded module 
regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219808412Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219858261Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219882181Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.219948099Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220003598Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220040617Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220082066Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220121785Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220142245Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220173624Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T01:04:54.220197813Z lowvram: loaded module 
regularly clip_l.transformer.text_projection Linear(in_features=768, out_features=768, bias=False)
2025-07-17T01:04:54.220248782Z lowvram: loaded module regularly clip_l.transformer.text_model.embeddings.position_embedding Embedding(77, 768)
2025-07-17T01:04:54.220285411Z lowvram: loaded module regularly t5xxl.transformer.encoder.final_layer_norm T5LayerNorm()
2025-07-17T01:04:54.220339370Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220371819Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220402389Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220445068Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220475087Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220511126Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220543135Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220575934Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220599004Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220624623Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220653563Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220681502Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220712541Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220744541Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220757040Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220801659Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220829419Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220861298Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220904047Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220938366Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220946066Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220960706Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.220990775Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221014504Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221039504Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221046344Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221084413Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221117432Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221162821Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221193470Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221239359Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221269198Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221272648Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221350087Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221388686Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221414765Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221446594Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221489723Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221533222Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221573481Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221581431Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221621810Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221643990Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221664659Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221691039Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221729628Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221776847Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221794826Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.layer_norm T5LayerNorm()
2025-07-17T01:04:54.221862245Z lowvram: loaded module regularly clip_l.transformer.text_model.final_layer_norm LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.221906814Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.221949243Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.221995222Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222036301Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222082640Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222122979Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222166058Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222218927Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222246246Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222288625Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222328664Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222358253Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222414412Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222446261Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222470361Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222500310Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222542889Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222577028Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222601578Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222708705Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222723605Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222728684Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222741174Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222756484Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T01:04:54.222781773Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias Embedding(32, 64)
2025-07-17T01:04:56.627991899Z loaded completely 30395.4875 4777.53759765625 True
2025-07-17T01:04:56.640384311Z !!! Exception during processing !!! CUDA error: no kernel image is available for execution on the device
2025-07-17T01:04:56.640408321Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T01:04:56.640414321Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T01:04:56.640418880Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T01:04:56.643542578Z Traceback (most recent call last):
2025-07-17T01:04:56.643564697Z   File "/comfyui/execution.py", line 347, in execute
2025-07-17T01:04:56.643571197Z     output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
2025-07-17T01:04:56.643576727Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643581187Z   File "/comfyui/execution.py", line 222, in get_output_data
2025-07-17T01:04:56.643585877Z     return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
2025-07-17T01:04:56.643601337Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643617646Z   File "/comfyui/execution.py", line 194, in _map_node_over_list
2025-07-17T01:04:56.643622696Z     process_inputs(input_dict, i)
2025-07-17T01:04:56.643627256Z   File "/comfyui/execution.py", line 183, in process_inputs
2025-07-17T01:04:56.643631936Z     results.append(getattr(obj, func)(**inputs))
2025-07-17T01:04:56.643636576Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643641116Z   File "/comfyui/nodes.py", line 69, in encode
2025-07-17T01:04:56.643645576Z     return (clip.encode_from_tokens_scheduled(tokens), )
2025-07-17T01:04:56.643652985Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643658345Z   File "/comfyui/comfy/sd.py", line 154, in encode_from_tokens_scheduled
2025-07-17T01:04:56.643663795Z     pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
2025-07-17T01:04:56.643668845Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643674495Z   File "/comfyui/comfy/sd.py", line 216, in encode_from_tokens
2025-07-17T01:04:56.643679895Z     o = self.cond_stage_model.encode_token_weights(tokens)
2025-07-17T01:04:56.643691074Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643705864Z   File "/comfyui/comfy/text_encoders/flux.py", line 53, in encode_token_weights
2025-07-17T01:04:56.643713924Z     t5_out, t5_pooled = self.t5xxl.encode_token_weights(token_weight_pairs_t5)
2025-07-17T01:04:56.643721344Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643730484Z   File "/comfyui/comfy/sd1_clip.py", line 45, in encode_token_weights
2025-07-17T01:04:56.643736924Z     o = self.encode(to_encode)
2025-07-17T01:04:56.643743173Z     ^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643749693Z   File "/comfyui/comfy/sd1_clip.py", line 288, in encode
2025-07-17T01:04:56.643759633Z     return self(tokens)
2025-07-17T01:04:56.643771333Z     ^^^^^^^^^^^^
2025-07-17T01:04:56.643778683Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-07-17T01:04:56.643787482Z     return self._call_impl(*args, **kwargs)
2025-07-17T01:04:56.643795542Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643801122Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-07-17T01:04:56.643809952Z     return forward_call(*args, **kwargs)
2025-07-17T01:04:56.643818531Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643826971Z   File "/comfyui/comfy/sd1_clip.py", line 250, in forward
2025-07-17T01:04:56.643834181Z     embeds, attention_mask, num_tokens = self.process_tokens(tokens, device)
2025-07-17T01:04:56.643842811Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643849851Z   File "/comfyui/comfy/sd1_clip.py", line 204, in process_tokens
2025-07-17T01:04:56.643858011Z     tokens_embed = self.transformer.get_input_embeddings()(tokens_embed, out_dtype=torch.float32)
2025-07-17T01:04:56.643866101Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643873690Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-07-17T01:04:56.643882420Z     return self._call_impl(*args, **kwargs)
2025-07-17T01:04:56.643890040Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643899610Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-07-17T01:04:56.643907949Z     return forward_call(*args, **kwargs)
2025-07-17T01:04:56.643915159Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643925009Z   File "/comfyui/comfy/ops.py", line 225, in forward
2025-07-17T01:04:56.643938449Z     return self.forward_comfy_cast_weights(*args, **kwargs)
2025-07-17T01:04:56.643952928Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643960928Z   File "/comfyui/comfy/ops.py", line 220, in forward_comfy_cast_weights
2025-07-17T01:04:56.643967708Z     weight, bias = cast_bias_weight(self, device=input.device, dtype=out_dtype)
2025-07-17T01:04:56.643982578Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.643990047Z   File "/comfyui/comfy/ops.py", line 50, in cast_bias_weight
2025-07-17T01:04:56.643994327Z     weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function)
2025-07-17T01:04:56.644000287Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.644011537Z   File "/comfyui/comfy/model_management.py", line 947, in cast_to
2025-07-17T01:04:56.644020457Z     return weight.to(dtype=dtype, copy=copy)
2025-07-17T01:04:56.644028477Z     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-07-17T01:04:56.644036557Z RuntimeError: CUDA error: no kernel image is available for execution on the device
2025-07-17T01:04:56.644048396Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T01:04:56.644056006Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T01:04:56.644060346Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T01:04:56.644678962Z Prompt executed in 48.66 seconds
2025-07-17T01:04:56.931681348Z worker-comfyui - Execution error received: Node Type: CLIPTextEncode, Node ID: 6, Message: CUDA error: no kernel image is available for execution on the device
2025-07-17T01:04:56.931734677Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T01:04:56.931741117Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T01:04:56.931746356Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T01:04:56.931756026Z worker-comfyui - Fetching history for prompt 684edbfa-457f-4f2d-b7a5-5266f2f69fef...
2025-07-17T01:04:56.935499419Z worker-comfyui - No outputs found in history for prompt 684edbfa-457f-4f2d-b7a5-5266f2f69fef.
2025-07-17T01:04:56.935531709Z worker-comfyui - Processing 0 output nodes...
2025-07-17T01:04:56.935538018Z worker-comfyui - Closing websocket connection.
2025-07-17T01:04:56.935907240Z worker-comfyui - Job completed with errors/warnings: ['Workflow execution error: Node Type: CLIPTextEncode, Node ID: 6, Message: CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n']
2025-07-17T01:04:56.935922780Z worker-comfyui - Job failed with no output images.
2025-07-17T01:04:57.227937669Z {"requestId": "sync-04cffaf8-23a7-4e2b-8f43-9c96d0343240-e1", "message": "Finished.", "level": "INFO"}
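Editor's note: the `RuntimeError: CUDA error: no kernel image is available for execution on the device` above typically means the installed PyTorch wheel was not compiled with kernels for this GPU's compute capability (for example, a build covering sm_80 through sm_90 running on a newer GPU architecture). The sketch below is a hypothetical, simplified version of that compatibility check — `has_kernel_image` is not a real torch API; on a real worker the inputs would come from `torch.cuda.get_device_capability(0)` and `torch.cuda.get_arch_list()`, both of which do exist in PyTorch.

```python
def has_kernel_image(device_cc, arch_list):
    """Rough sketch of whether a PyTorch build can run on a GPU.

    device_cc: (major, minor) compute capability, e.g. (8, 9) for an sm_89 GPU.
    arch_list: build targets as strings, e.g. ['sm_80', 'sm_86', 'compute_90'].
    """
    major, minor = device_cc
    for arch in arch_list:
        kind, _, num = arch.partition("_")
        a_major, a_minor = int(num[:-1]), int(num[-1:])
        if kind == "sm" and a_major == major and a_minor <= minor:
            return True  # cubins are forward-compatible within one major arch
        if kind == "compute" and (a_major, a_minor) <= (major, minor):
            return True  # bundled PTX can be JIT-compiled for newer GPUs
    return False

# A build covering sm_80..sm_90 works on an sm_86 card, but not on a GPU
# whose major architecture the build predates:
print(has_kernel_image((8, 6), ["sm_80", "sm_86", "sm_90"]))   # True
print(has_kernel_image((12, 0), ["sm_80", "sm_86", "sm_90"]))  # False
```

If the check fails for the worker's GPU, the usual remedy is installing a PyTorch build whose CUDA architecture list covers that GPU (or pinning the endpoint to a GPU type the current image supports), rather than tweaking `CUDA_LAUNCH_BLOCKING`, which only improves the error report.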