2025-07-17T15:31:08.174705515Z ==========
2025-07-17T15:31:08.174735654Z == CUDA ==
2025-07-17T15:31:08.174750354Z ==========
2025-07-17T15:31:08.180628838Z CUDA Version 12.8.1
2025-07-17T15:31:08.181470446Z Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2025-07-17T15:31:08.182446510Z This container image and its contents are governed by the NVIDIA Deep Learning Container License.
2025-07-17T15:31:08.182453460Z By pulling and using the container, you accept the terms and conditions of this license:
2025-07-17T15:31:08.182459210Z https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
2025-07-17T15:31:08.182470430Z A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
2025-07-17T15:31:08.206061855Z worker-comfyui - ComfyUI-Manager network_mode set to 'offline' in /comfyui/user/default/ComfyUI-Manager/config.ini
2025-07-17T15:31:08.206456994Z worker-comfyui: Starting ComfyUI
2025-07-17T15:31:08.206701768Z worker-comfyui: Starting RunPod Handler
2025-07-17T15:31:08.253614555Z Adding extra search path checkpoints /runpod-volume/models/checkpoints
2025-07-17T15:31:08.253636475Z Adding extra search path clip /runpod-volume/models/clip
2025-07-17T15:31:08.253638744Z Adding extra search path clip_vision /runpod-volume/models/clip_vision
2025-07-17T15:31:08.253640764Z Adding extra search path configs /runpod-volume/models/configs
2025-07-17T15:31:08.253642724Z Adding extra search path controlnet /runpod-volume/models/controlnet
2025-07-17T15:31:08.253667524Z Adding extra search path embeddings /runpod-volume/models/embeddings
2025-07-17T15:31:08.253688803Z Adding extra search path loras /runpod-volume/models/loras
2025-07-17T15:31:08.253691293Z Adding extra search path upscale_models /runpod-volume/models/upscale_models
2025-07-17T15:31:08.253693473Z Adding extra search path vae /runpod-volume/models/vae
2025-07-17T15:31:08.253699753Z Adding extra search path unet /runpod-volume/models/unet
2025-07-17T15:31:08.899246324Z [START] Security scan
2025-07-17T15:31:08.899276294Z [DONE] Security scan
2025-07-17T15:31:08.899279674Z Popen(['git', 'version'], cwd=/, stdin=None, shell=False, universal_newlines=False)
2025-07-17T15:31:08.903326676Z Popen(['git', 'version'], cwd=/, stdin=None, shell=False, universal_newlines=False)
2025-07-17T15:31:08.908584387Z ## ComfyUI-Manager: installing dependencies done.
2025-07-17T15:31:08.908603796Z ** ComfyUI startup time: 2025-07-17 15:31:08.908
2025-07-17T15:31:08.908612606Z ** Platform: Linux
2025-07-17T15:31:08.908649045Z ** Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0]
2025-07-17T15:31:08.908683025Z ** Python executable: /opt/venv/bin/python
2025-07-17T15:31:08.908717013Z ** ComfyUI Path: /comfyui
2025-07-17T15:31:08.908760612Z ** ComfyUI Base Folder Path: /comfyui
2025-07-17T15:31:08.908821791Z ** User directory: /comfyui/user
2025-07-17T15:31:08.908877759Z ** ComfyUI-Manager config path: /comfyui/user/default/ComfyUI-Manager/config.ini
2025-07-17T15:31:08.908911658Z ** Log path: /comfyui/user/comfyui.log
2025-07-17T15:31:09.416276910Z Prestartup times for custom nodes:
2025-07-17T15:31:09.416300779Z 1.2 seconds: /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T15:31:10.376246723Z worker-comfyui - Starting handler...
2025-07-17T15:31:10.376272462Z --- Starting Serverless Worker | Version 1.7.13 ---
2025-07-17T15:31:10.476648934Z Checkpoint files will always be loaded safely.
2025-07-17T15:31:10.568978778Z {"requestId": null, "message": "Jobs in queue: 1", "level": "INFO"}
2025-07-17T15:31:10.569008508Z {"requestId": null, "message": "Jobs in progress: 1", "level": "INFO"}
2025-07-17T15:31:10.569042077Z {"requestId": "76b9eef3-0370-4e19-86bb-fb13fdaad968-e1", "message": "Started.", "level": "INFO"}
2025-07-17T15:31:10.569054006Z worker-comfyui - Checking API server at http://127.0.0.1:8188/...
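The "Adding extra search path" entries above come from ComfyUI merging extra model directories (here, the RunPod network volume mounted at /runpod-volume) into its per-type model search lists, in addition to the directories baked into the image. A minimal sketch of that idea follows; it is illustrative only, not ComfyUI's actual implementation (the real logic lives in ComfyUI's folder_paths module and is driven by an extra_model_paths.yaml file):

```python
# Minimal sketch of per-type model search paths, mirroring the log's
# "Adding extra search path <type> <dir>" entries. Illustrative only.
from collections import defaultdict

search_paths: dict[str, list[str]] = defaultdict(list)

def add_search_path(kind: str, path: str) -> None:
    """Register an extra directory to scan for models of a given type."""
    if path not in search_paths[kind]:
        search_paths[kind].append(path)
        print(f"Adding extra search path {kind} {path}")

# Base location baked into the image, then the network volume:
add_search_path("checkpoints", "/comfyui/models/checkpoints")
add_search_path("checkpoints", "/runpod-volume/models/checkpoints")
add_search_path("vae", "/runpod-volume/models/vae")
```

Later in the log, the per-request "recursive file list" entries show exactly these directories being scanned in order, which is why both /comfyui/models/* and /runpod-volume/models/* appear for each model type.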
2025-07-17T15:31:10.699315136Z /opt/venv/lib/python3.12/site-packages/torch/cuda/__init__.py:287: UserWarning:
2025-07-17T15:31:10.699358195Z NVIDIA GeForce RTX 5090 with CUDA capability sm_120 is not compatible with the current PyTorch installation.
2025-07-17T15:31:10.699389844Z The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
2025-07-17T15:31:10.699394994Z If you want to use the NVIDIA GeForce RTX 5090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
2025-07-17T15:31:10.699404694Z warnings.warn(
2025-07-17T15:31:10.826379471Z Total VRAM 32120 MB, total RAM 1160740 MB
2025-07-17T15:31:10.826405240Z pytorch version: 2.7.1+cu126
2025-07-17T15:31:10.826740321Z Set vram state to: NORMAL_VRAM
2025-07-17T15:31:10.826876647Z Device: cuda:0 NVIDIA GeForce RTX 5090 : cudaMallocAsync
2025-07-17T15:31:11.955374107Z Using pytorch attention
2025-07-17T15:31:12.003643878Z Loading FFmpeg6
2025-07-17T15:31:12.078983853Z Successfully loaded FFmpeg6
2025-07-17T15:31:13.323468171Z Python version: 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0]
2025-07-17T15:31:13.323534339Z ComfyUI version: 0.3.44
2025-07-17T15:31:13.323779392Z Using selector: EpollSelector
2025-07-17T15:31:13.325917795Z ComfyUI frontend version: 1.23.4
2025-07-17T15:31:13.326411142Z [Prompt Server] web root: /opt/venv/lib/python3.12/site-packages/comfyui_frontend_package/static
2025-07-17T15:31:13.326472281Z Trying to load custom node /comfyui/comfy_extras/nodes_latent.py
2025-07-17T15:31:13.519889928Z Trying to load custom node /comfyui/comfy_extras/nodes_hypernetwork.py
2025-07-17T15:31:13.521242822Z Trying to load custom node /comfyui/comfy_extras/nodes_upscale_model.py
2025-07-17T15:31:13.561187434Z Trying to load custom node /comfyui/comfy_extras/nodes_post_processing.py
2025-07-17T15:31:13.561508975Z Trying to load custom node /comfyui/comfy_extras/nodes_mask.py
2025-07-17T15:31:13.565106880Z Trying to load custom node /comfyui/comfy_extras/nodes_compositing.py
2025-07-17T15:31:13.566712718Z Trying to load custom node /comfyui/comfy_extras/nodes_rebatch.py
2025-07-17T15:31:13.568070302Z Trying to load custom node /comfyui/comfy_extras/nodes_model_merging.py
2025-07-17T15:31:13.570341372Z Trying to load custom node /comfyui/comfy_extras/nodes_tomesd.py
2025-07-17T15:31:13.571752734Z Trying to load custom node /comfyui/comfy_extras/nodes_clip_sdxl.py
2025-07-17T15:31:13.572528504Z Trying to load custom node /comfyui/comfy_extras/nodes_canny.py
2025-07-17T15:31:13.660579041Z Trying to load custom node /comfyui/comfy_extras/nodes_freelunch.py
2025-07-17T15:31:13.662147840Z Trying to load custom node /comfyui/comfy_extras/nodes_custom_sampler.py
2025-07-17T15:31:13.668408764Z Trying to load custom node /comfyui/comfy_extras/nodes_hypertile.py
2025-07-17T15:31:13.669305610Z Trying to load custom node /comfyui/comfy_extras/nodes_model_advanced.py
2025-07-17T15:31:13.671490652Z Trying to load custom node /comfyui/comfy_extras/nodes_model_downscale.py
2025-07-17T15:31:13.672209063Z Trying to load custom node /comfyui/comfy_extras/nodes_images.py
2025-07-17T15:31:13.676012213Z Trying to load custom node /comfyui/comfy_extras/nodes_video_model.py
2025-07-17T15:31:13.677773146Z Trying to load custom node /comfyui/comfy_extras/nodes_train.py
2025-07-17T15:31:13.682017014Z Trying to load custom node /comfyui/comfy_extras/nodes_sag.py
2025-07-17T15:31:13.683310959Z Trying to load custom node /comfyui/comfy_extras/nodes_perpneg.py
2025-07-17T15:31:13.684398060Z Trying to load custom node /comfyui/comfy_extras/nodes_stable3d.py
2025-07-17T15:31:13.685763394Z Trying to load custom node /comfyui/comfy_extras/nodes_sdupscale.py
2025-07-17T15:31:13.686302660Z Trying to load custom node /comfyui/comfy_extras/nodes_photomaker.py
2025-07-17T15:31:13.687760291Z Trying to load custom node /comfyui/comfy_extras/nodes_pixart.py
2025-07-17T15:31:13.688226909Z Trying to load custom node /comfyui/comfy_extras/nodes_cond.py
2025-07-17T15:31:13.688837903Z Trying to load custom node /comfyui/comfy_extras/nodes_morphology.py
2025-07-17T15:31:13.689679920Z Trying to load custom node /comfyui/comfy_extras/nodes_stable_cascade.py
2025-07-17T15:31:13.690781482Z Trying to load custom node /comfyui/comfy_extras/nodes_differential_diffusion.py
2025-07-17T15:31:13.691310417Z Trying to load custom node /comfyui/comfy_extras/nodes_ip2p.py
2025-07-17T15:31:13.691878292Z Trying to load custom node /comfyui/comfy_extras/nodes_model_merging_model_specific.py
2025-07-17T15:31:13.693563528Z Trying to load custom node /comfyui/comfy_extras/nodes_pag.py
2025-07-17T15:31:13.694151122Z Trying to load custom node /comfyui/comfy_extras/nodes_align_your_steps.py
2025-07-17T15:31:13.694787555Z Trying to load custom node /comfyui/comfy_extras/nodes_attention_multiply.py
2025-07-17T15:31:13.695973244Z Trying to load custom node /comfyui/comfy_extras/nodes_advanced_samplers.py
2025-07-17T15:31:13.696997947Z Trying to load custom node /comfyui/comfy_extras/nodes_webcam.py
2025-07-17T15:31:13.697489414Z Trying to load custom node /comfyui/comfy_extras/nodes_audio.py
2025-07-17T15:31:13.699719645Z Trying to load custom node /comfyui/comfy_extras/nodes_sd3.py
2025-07-17T15:31:13.702053433Z Trying to load custom node /comfyui/comfy_extras/nodes_gits.py
2025-07-17T15:31:13.708126052Z Trying to load custom node /comfyui/comfy_extras/nodes_controlnet.py
2025-07-17T15:31:13.708827453Z Trying to load custom node /comfyui/comfy_extras/nodes_hunyuan.py
2025-07-17T15:31:13.709938384Z Trying to load custom node /comfyui/comfy_extras/nodes_flux.py
2025-07-17T15:31:13.710780872Z Trying to load custom node /comfyui/comfy_extras/nodes_lora_extract.py
2025-07-17T15:31:13.711937891Z Trying to load custom node /comfyui/comfy_extras/nodes_torch_compile.py
2025-07-17T15:31:13.714428075Z Trying to load custom node /comfyui/comfy_extras/nodes_mochi.py
2025-07-17T15:31:13.714915622Z Trying to load custom node /comfyui/comfy_extras/nodes_slg.py
2025-07-17T15:31:13.715145246Z Trying to load custom node /comfyui/comfy_extras/nodes_mahiro.py
2025-07-17T15:31:13.715678272Z Trying to load custom node /comfyui/comfy_extras/nodes_lt.py
2025-07-17T15:31:13.718681792Z Trying to load custom node /comfyui/comfy_extras/nodes_hooks.py
2025-07-17T15:31:13.722620488Z Trying to load custom node /comfyui/comfy_extras/nodes_load_3d.py
2025-07-17T15:31:13.730153179Z Trying to load custom node /comfyui/comfy_extras/nodes_cosmos.py
2025-07-17T15:31:13.731524622Z Trying to load custom node /comfyui/comfy_extras/nodes_video.py
2025-07-17T15:31:13.733157919Z Trying to load custom node /comfyui/comfy_extras/nodes_lumina2.py
2025-07-17T15:31:13.734143503Z Trying to load custom node /comfyui/comfy_extras/nodes_wan.py
2025-07-17T15:31:13.737401776Z Trying to load custom node /comfyui/comfy_extras/nodes_lotus.py
2025-07-17T15:31:13.741629555Z Trying to load custom node /comfyui/comfy_extras/nodes_hunyuan3d.py
2025-07-17T15:31:13.745179941Z Trying to load custom node /comfyui/comfy_extras/nodes_primitive.py
2025-07-17T15:31:13.745984739Z Trying to load custom node /comfyui/comfy_extras/nodes_cfg.py
2025-07-17T15:31:13.746508506Z Trying to load custom node /comfyui/comfy_extras/nodes_optimalsteps.py
2025-07-17T15:31:13.747205797Z Trying to load custom node /comfyui/comfy_extras/nodes_hidream.py
2025-07-17T15:31:13.747845230Z Trying to load custom node /comfyui/comfy_extras/nodes_fresca.py
2025-07-17T15:31:13.748555011Z Trying to load custom node /comfyui/comfy_extras/nodes_apg.py
2025-07-17T15:31:13.749284122Z Trying to load custom node /comfyui/comfy_extras/nodes_preview_any.py
2025-07-17T15:31:13.749779739Z Trying to load custom node /comfyui/comfy_extras/nodes_ace.py
2025-07-17T15:31:13.750402292Z Trying to load custom node /comfyui/comfy_extras/nodes_string.py
2025-07-17T15:31:13.752366520Z Trying to load custom node /comfyui/comfy_extras/nodes_camera_trajectory.py
2025-07-17T15:31:13.754312199Z Trying to load custom node /comfyui/comfy_extras/nodes_edit_model.py
2025-07-17T15:31:13.754757897Z Trying to load custom node /comfyui/comfy_extras/nodes_tcfg.py
2025-07-17T15:31:13.755473208Z Trying to load custom node /comfyui/comfy_api_nodes/canary.py
2025-07-17T15:31:13.756218848Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_ideogram.py
2025-07-17T15:31:13.923166096Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_openai.py
2025-07-17T15:31:13.927838163Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_minimax.py
2025-07-17T15:31:13.929135658Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_veo2.py
2025-07-17T15:31:13.930369616Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_kling.py
2025-07-17T15:31:13.938655756Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_bfl.py
2025-07-17T15:31:13.951643442Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_luma.py
2025-07-17T15:31:13.963133538Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_recraft.py
2025-07-17T15:31:13.972679855Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_pixverse.py
2025-07-17T15:31:13.980965145Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_stability.py
2025-07-17T15:31:13.989378882Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_pika.py
2025-07-17T15:31:13.992216037Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_runway.py
2025-07-17T15:31:13.994713291Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_tripo.py
2025-07-17T15:31:14.012925719Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_moonvalley.py
2025-07-17T15:31:14.016172443Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_rodin.py
2025-07-17T15:31:14.022636722Z Trying to load custom node /comfyui/comfy_api_nodes/nodes_gemini.py
2025-07-17T15:31:14.024721226Z Trying to load custom node /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T15:31:14.037977135Z ### Loading: ComfyUI-Manager (V3.34)
2025-07-17T15:31:14.038230968Z [ComfyUI-Manager] network_mode: offline
2025-07-17T15:31:14.041460083Z Popen(['git', 'rev-list', 'HEAD', '--'], cwd=/comfyui, stdin=None, shell=False, universal_newlines=False)
2025-07-17T15:31:14.067602581Z Popen(['git', 'cat-file', '--batch-check'], cwd=/comfyui, stdin=, shell=False, universal_newlines=False)
2025-07-17T15:31:14.071539296Z Popen(['git', 'cat-file', '--batch'], cwd=/comfyui, stdin=, shell=False, universal_newlines=False)
2025-07-17T15:31:14.074465909Z ### ComfyUI Revision: 3645 [c5de4955] *DETACHED | Released on '2025-07-08'
2025-07-17T15:31:14.076814466Z Using selector: EpollSelector
2025-07-17T15:31:14.080562257Z [ComfyUI-Manager] All startup tasks have been completed.
2025-07-17T15:31:14.084124823Z Trying to load custom node /comfyui/custom_nodes/websocket_image_save.py
2025-07-17T15:31:14.084716347Z Import times for custom nodes:
2025-07-17T15:31:14.084722967Z 0.0 seconds: /comfyui/custom_nodes/websocket_image_save.py
2025-07-17T15:31:14.084727897Z 0.1 seconds: /comfyui/custom_nodes/ComfyUI-Manager
2025-07-17T15:31:14.522759805Z Database URL: sqlite:////comfyui/user/comfyui.db
2025-07-17T15:31:14.529270353Z Context impl SQLiteImpl.
2025-07-17T15:31:14.529292002Z Will assume non-transactional DDL.
2025-07-17T15:31:14.529917245Z No target revision found.
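The torch UserWarning earlier in this log (RTX 5090 reports compute capability sm_120, but the cu126 wheels only ship kernels up to sm_90) boils down to a membership check of the device's capability against the build's compiled arch list. The sketch below reproduces that check against plain strings so it runs without a GPU; with torch installed, the real inputs would come from `torch.cuda.get_device_capability()` and `torch.cuda.get_arch_list()`. This is a simplification of what `torch/cuda/__init__.py` actually does (it also considers `compute_*` PTX entries, which can let a binary run on newer hardware):

```python
def is_arch_supported(device_capability: tuple[int, int], arch_list: list[str]) -> bool:
    """Return True if the build's compiled arch list covers the device.

    device_capability: e.g. (12, 0) for an RTX 5090 (sm_120).
    arch_list: e.g. ["sm_80", "sm_86", "sm_90"], in the format
    torch.cuda.get_arch_list() reports. Simplified: PTX ("compute_*")
    entries are ignored here.
    """
    major, minor = device_capability
    return f"sm_{major}{minor}" in arch_list

# The situation in this log: cu126 wheels stop at sm_90, the 5090 is sm_120.
supported = ["sm_50", "sm_60", "sm_70", "sm_75", "sm_80", "sm_86", "sm_90"]
print(is_arch_supported((12, 0), supported))  # False -> expect the UserWarning
```

In practice the fix is installing a PyTorch build compiled for sm_120 (per the pytorch.org link the warning prints), not anything in the workflow itself.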
2025-07-17T15:31:14.536842952Z Starting server
2025-07-17T15:31:14.537095695Z To see the GUI go to: http://127.0.0.1:8188
2025-07-17T15:31:14.567738123Z worker-comfyui - API is reachable
2025-07-17T15:31:14.567804772Z worker-comfyui - Connecting to websocket: ws://127.0.0.1:8188/ws?clientId=cb771bad-e2ea-48d8-a106-e4a014551c6b
2025-07-17T15:31:14.569838158Z worker-comfyui - Websocket connected
2025-07-17T15:37:09.366210413Z got prompt
2025-07-17T15:37:09.366442227Z recursive file list on directory /comfyui/models/vae
2025-07-17T15:37:09.366553214Z found 1 files
2025-07-17T15:37:09.367548338Z recursive file list on directory /runpod-volume/models/vae
2025-07-17T15:37:09.368973840Z found 1 files
2025-07-17T15:37:09.369060408Z recursive file list on directory /comfyui/models/vae_approx
2025-07-17T15:37:09.369123766Z found 1 files
2025-07-17T15:37:09.369226133Z recursive file list on directory /comfyui/models/loras
2025-07-17T15:37:09.369278062Z found 1 files
2025-07-17T15:37:09.370160488Z recursive file list on directory /runpod-volume/models/loras
2025-07-17T15:37:09.371297878Z found 1 files
2025-07-17T15:37:09.371368216Z recursive file list on directory /comfyui/models/unet
2025-07-17T15:37:09.371415025Z found 1 files
2025-07-17T15:37:09.371460104Z recursive file list on directory /comfyui/models/diffusion_models
2025-07-17T15:37:09.371530292Z found 1 files
2025-07-17T15:37:09.371820304Z recursive file list on directory /runpod-volume/models/unet
2025-07-17T15:37:09.372845007Z found 1 files
2025-07-17T15:37:09.372911506Z recursive file list on directory /comfyui/models/text_encoders
2025-07-17T15:37:09.372939895Z found 1 files
2025-07-17T15:37:09.372987973Z recursive file list on directory /comfyui/models/clip
2025-07-17T15:37:09.373042702Z found 1 files
2025-07-17T15:37:09.373321825Z recursive file list on directory /runpod-volume/models/clip
2025-07-17T15:37:09.374357927Z found 2 files
2025-07-17T15:37:09.375559605Z worker-comfyui - Queued workflow with ID: 967ea682-3413-471a-b81b-af91c08583d0
2025-07-17T15:37:09.375571395Z worker-comfyui - Waiting for workflow execution (967ea682-3413-471a-b81b-af91c08583d0)...
2025-07-17T15:37:09.375646173Z worker-comfyui - Status update: 0 items remaining in queue
2025-07-17T15:37:09.375689052Z worker-comfyui - Status update: 1 items remaining in queue
2025-07-17T15:37:09.375744500Z worker-comfyui - Status update: 1 items remaining in queue
2025-07-17T15:37:09.507518450Z Using pytorch attention in VAE
2025-07-17T15:37:09.508111654Z Working with z of shape (1, 16, 32, 32) = 16384 dimensions.
2025-07-17T15:37:09.508864394Z Using pytorch attention in VAE
2025-07-17T15:37:10.301207098Z Model doesn't have a device attribute.
2025-07-17T15:37:10.301267906Z VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-07-17T15:37:11.013598339Z model weight dtype torch.bfloat16, manual cast: None
2025-07-17T15:37:11.013959289Z model_type FLUX
2025-07-17T15:37:11.013999148Z adm 0
2025-07-17T15:37:20.333412836Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T15:37:30.343512509Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T15:37:40.353369099Z worker-comfyui - Websocket receive timed out. Still waiting...
2025-07-17T15:37:46.172160377Z Model doesn't have a device attribute.
2025-07-17T15:37:46.174752129Z CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-07-17T15:37:54.201159844Z clip unexpected: ['encoder.embed_tokens.weight']
2025-07-17T15:37:54.906863742Z clip missing: ['text_projection.weight']
2025-07-17T15:37:56.044979887Z Token indices sequence length is longer than the specified maximum sequence length for this model (129 > 77).
Running this sequence through the model will result in indexing errors
2025-07-17T15:37:56.045905052Z Requested to load FluxClipModel_
2025-07-17T15:37:56.050733944Z lowvram: loaded module regularly t5xxl.transformer.shared Embedding(32128, 4096)
2025-07-17T15:37:56.050764063Z lowvram: loaded module regularly clip_l.transformer.text_model.embeddings.token_embedding Embedding(49408, 768)
2025-07-17T15:37:56.050771853Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.050782343Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.050821172Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.050841331Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.050868011Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.050875891Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051117894Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051129264Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051134314Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051139124Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051144083Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051148493Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051152853Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051160373Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051165063Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051184822Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051237601Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051243931Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051248141Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051252590Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051256950Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051261170Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051319079Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051326559Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051342498Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051356908Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051371297Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051513234Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051529523Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051535273Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051539763Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051544863Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051551313Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051555993Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051623451Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051750337Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051756917Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051812886Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051818406Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051825755Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051830605Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051838705Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051850595Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051859094Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051870614Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051880804Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051892034Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051896784Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051902433Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051932863Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051938202Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.051956392Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.051961422Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052003331Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052064629Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052107438Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052115478Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052120458Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052126777Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052143397Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052165216Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052188446Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052204255Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052240764Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052265564Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052274063Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052301743Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052337652Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052351341Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052372531Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wo Linear(in_features=10240, out_features=4096, bias=False)
2025-07-17T15:37:56.052397340Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_1 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052415610Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.DenseReluDense.wi_0 Linear(in_features=4096, out_features=10240, bias=False)
2025-07-17T15:37:56.052434279Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052466508Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052508257Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052538456Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052549046Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052554156Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052570156Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052635194Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052642744Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052647204Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052672053Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052716072Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052755421Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052811279Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052819699Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052862418Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052877427Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052909097Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052919846Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052939896Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.052980935Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.053006644Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.053011514Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.053065412Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False)
2025-07-17T15:37:56.053081282Z lowvram: loaded module regularly
t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053113781Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053135571Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053184699Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053224458Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053250597Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053304786Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053343115Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053370624Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053404364Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053440643Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053463602Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.22.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053493821Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053534010Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053569749Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053608308Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053630727Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053653057Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053690216Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053709165Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053742335Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053780983Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053821802Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053840142Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053874471Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053934279Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053940509Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053957039Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.053980058Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054020747Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054067716Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054113235Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054128514Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054172603Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054191683Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054226442Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054252961Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054285630Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054317469Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054363388Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054394477Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054426826Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054461465Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054483245Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054514374Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054564393Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054589262Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054619521Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054648960Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054676980Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054722969Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054767147Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054786047Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054828026Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054871654Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054909864Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.12.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054945843Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.054976922Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055016431Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055071249Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055101539Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055143417Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055192496Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055220765Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055253124Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055287123Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055325182Z lowvram: loaded module regularly 
t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055355912Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055403871Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.v Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055433190Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.q Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055460879Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.o Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055491278Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.k Linear(in_features=4096, out_features=4096, bias=False) 2025-07-17T15:37:56.055537637Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055573676Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055591466Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055624935Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055664054Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055704623Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.mlp.fc1 
Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055746921Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055786150Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055821049Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055846999Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055881268Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055902917Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.mlp.fc1 Linear(in_features=768, out_features=3072, bias=True) 2025-07-17T15:37:56.055939376Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.055972085Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056002325Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056051823Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056074483Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056114512Z lowvram: 
loaded module regularly clip_l.transformer.text_model.encoder.layers.4.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056139011Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056170220Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056214329Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056252618Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056261658Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056308557Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.mlp.fc2 Linear(in_features=3072, out_features=768, bias=True) 2025-07-17T15:37:56.056339996Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056364755Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056393224Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056446083Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056463343Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.8.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056499291Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056537061Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056571030Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056592649Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056625228Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056657527Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056705196Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056742245Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056766095Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056798024Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056831103Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.6.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056881672Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056888361Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056944030Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.056950690Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057000628Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057006228Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057061417Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057086676Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057131385Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057162344Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057189303Z lowvram: loaded module regularly 
clip_l.transformer.text_model.encoder.layers.3.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057221043Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057274491Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057293830Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057320400Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057368099Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057395718Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057443446Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057477706Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057519395Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057539604Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057568723Z lowvram: loaded module 
regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057607482Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057642211Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057686550Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057723009Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057745118Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057773258Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057810047Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.v_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057847976Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.q_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057881435Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.out_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057916164Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.self_attn.k_proj Linear(in_features=768, out_features=768, bias=True) 2025-07-17T15:37:56.057957943Z lowvram: loaded module 
regularly clip_l.transformer.text_projection Linear(in_features=768, out_features=768, bias=False)
2025-07-17T15:37:56.058008371Z lowvram: loaded module regularly clip_l.transformer.text_model.embeddings.position_embedding Embedding(77, 768)
2025-07-17T15:37:56.058049620Z lowvram: loaded module regularly t5xxl.transformer.encoder.final_layer_norm T5LayerNorm()
2025-07-17T15:37:56.058074760Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058127738Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.9.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058163177Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058183877Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.8.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058227486Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058270645Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.7.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058277734Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058341503Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.6.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058347822Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058367302Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.5.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058451330Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058456600Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.4.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058461050Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058492499Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.3.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058499169Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058558887Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.23.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058568347Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058583056Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.22.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058616146Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058648675Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.21.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058654854Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058689333Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.20.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058712073Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058745232Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.2.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058784991Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058805930Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.19.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058828720Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058860269Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.18.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058905028Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058912068Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.17.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058956406Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058961936Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.16.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.058992465Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059021585Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.15.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059068674Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059088593Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.14.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059113662Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059160591Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.13.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059181411Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059202210Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.12.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059239609Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059262298Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.11.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059304437Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059349866Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.10.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059355566Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059406274Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.1.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059411354Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.1.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059466643Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.layer_norm T5LayerNorm()
2025-07-17T15:37:56.059506092Z lowvram: loaded module regularly clip_l.transformer.text_model.final_layer_norm LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059541071Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059574050Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.9.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059590020Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059627509Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.8.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059679257Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059706777Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.7.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059732496Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059767605Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.6.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059809074Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059836073Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.5.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059876632Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059915681Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.4.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059945160Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.059977919Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.3.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060017878Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060057357Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.2.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060093176Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060125265Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.11.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060160705Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060199944Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.10.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060227443Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060276611Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.1.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060281381Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.layer_norm2 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060315830Z lowvram: loaded module regularly clip_l.transformer.text_model.encoder.layers.0.layer_norm1 LayerNorm((768,), eps=1e-05, elementwise_affine=True)
2025-07-17T15:37:56.060352999Z lowvram: loaded module regularly t5xxl.transformer.encoder.block.0.layer.0.SelfAttention.relative_attention_bias Embedding(32, 64)
2025-07-17T15:37:56.689221113Z loaded completely 30395.4875 4777.53759765625 True
2025-07-17T15:37:56.696472111Z !!! Exception during processing !!! CUDA error: no kernel image is available for execution on the device
2025-07-17T15:37:56.696494840Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T15:37:56.696501060Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T15:37:56.696506180Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T15:37:56.698634183Z Traceback (most recent call last):
2025-07-17T15:37:56.698658763Z   File "/comfyui/execution.py", line 361, in execute
2025-07-17T15:37:56.698664693Z     output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
2025-07-17T15:37:56.698680072Z   File "/comfyui/execution.py", line 236, in get_output_data
2025-07-17T15:37:56.698684572Z     return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
2025-07-17T15:37:56.698693482Z   File "/comfyui/execution.py", line 208, in _map_node_over_list
2025-07-17T15:37:56.698697702Z     process_inputs(input_dict, i)
2025-07-17T15:37:56.698701901Z   File "/comfyui/execution.py", line 197, in process_inputs
2025-07-17T15:37:56.698712041Z     results.append(getattr(obj, func)(**inputs))
2025-07-17T15:37:56.698720541Z   File "/comfyui/nodes.py", line 69, in encode
2025-07-17T15:37:56.698724711Z     return (clip.encode_from_tokens_scheduled(tokens), )
2025-07-17T15:37:56.698733141Z   File "/comfyui/comfy/sd.py", line 167, in encode_from_tokens_scheduled
2025-07-17T15:37:56.698737911Z     pooled_dict = self.encode_from_tokens(tokens, return_pooled=return_pooled, return_dict=True)
2025-07-17T15:37:56.698776030Z   File "/comfyui/comfy/sd.py", line 229, in encode_from_tokens
2025-07-17T15:37:56.698784499Z     o = self.cond_stage_model.encode_token_weights(tokens)
2025-07-17T15:37:56.698796389Z   File "/comfyui/comfy/text_encoders/flux.py", line 53, in encode_token_weights
2025-07-17T15:37:56.698802499Z     t5_out, t5_pooled = self.t5xxl.encode_token_weights(token_weight_pairs_t5)
2025-07-17T15:37:56.698814798Z   File "/comfyui/comfy/sd1_clip.py", line 45, in encode_token_weights
2025-07-17T15:37:56.698824288Z     o = self.encode(to_encode)
2025-07-17T15:37:56.698841338Z   File "/comfyui/comfy/sd1_clip.py", line 288, in encode
2025-07-17T15:37:56.698852108Z     return self(tokens)
2025-07-17T15:37:56.698865127Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-07-17T15:37:56.698871157Z     return self._call_impl(*args, **kwargs)
2025-07-17T15:37:56.698882637Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-07-17T15:37:56.698889956Z     return forward_call(*args, **kwargs)
2025-07-17T15:37:56.698916636Z   File "/comfyui/comfy/sd1_clip.py", line 250, in forward
2025-07-17T15:37:56.698927666Z     embeds, attention_mask, num_tokens = self.process_tokens(tokens, device)
2025-07-17T15:37:56.698949945Z   File "/comfyui/comfy/sd1_clip.py", line 204, in process_tokens
2025-07-17T15:37:56.698959225Z     tokens_embed = self.transformer.get_input_embeddings()(tokens_embed, out_dtype=torch.float32)
2025-07-17T15:37:56.698972904Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
2025-07-17T15:37:56.698981504Z     return self._call_impl(*args, **kwargs)
2025-07-17T15:37:56.698993984Z   File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
2025-07-17T15:37:56.699002954Z     return forward_call(*args, **kwargs)
2025-07-17T15:37:56.699023113Z   File "/comfyui/comfy/ops.py", line 237, in forward
2025-07-17T15:37:56.699057192Z     return self.forward_comfy_cast_weights(*args, **kwargs)
2025-07-17T15:37:56.699082682Z   File "/comfyui/comfy/ops.py", line 232, in forward_comfy_cast_weights
2025-07-17T15:37:56.699093531Z     weight, bias = cast_bias_weight(self, device=input.device, dtype=out_dtype)
2025-07-17T15:37:56.699113651Z   File "/comfyui/comfy/ops.py", line 59, in cast_bias_weight
2025-07-17T15:37:56.699124330Z     weight = comfy.model_management.cast_to(s.weight, dtype, device, non_blocking=non_blocking, copy=has_function, stream=offload_stream)
2025-07-17T15:37:56.699145290Z   File "/comfyui/comfy/model_management.py", line 998, in cast_to
2025-07-17T15:37:56.699154989Z     return weight.to(dtype=dtype, copy=copy)
2025-07-17T15:37:56.699173829Z RuntimeError: CUDA error: no kernel image is available for execution on the device
2025-07-17T15:37:56.699183529Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T15:37:56.699188169Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T15:37:56.699193248Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T15:37:56.699361734Z Prompt executed in 47.32 seconds
2025-07-17T15:37:56.931242872Z worker-comfyui - Execution error received: Node Type: CLIPTextEncode, Node ID: 6, Message: CUDA error: no kernel image is available for execution on the device
2025-07-17T15:37:56.931294381Z CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-17T15:37:56.931300911Z For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-17T15:37:56.931305360Z Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-17T15:37:56.931318550Z worker-comfyui - Fetching history for prompt 967ea682-3413-471a-b81b-af91c08583d0...
2025-07-17T15:37:56.934582964Z worker-comfyui - No outputs found in history for prompt 967ea682-3413-471a-b81b-af91c08583d0.
2025-07-17T15:37:56.934609123Z worker-comfyui - Processing 0 output nodes...
2025-07-17T15:37:56.934615403Z worker-comfyui - Closing websocket connection.
2025-07-17T15:37:56.934958524Z worker-comfyui - Job completed with errors/warnings: ['Workflow execution error: Node Type: CLIPTextEncode, Node ID: 6, Message: CUDA error: no kernel image is available for execution on the device\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n\n']
2025-07-17T15:37:56.934965794Z worker-comfyui - Job failed with no output images.
2025-07-17T15:37:57.177259296Z {"requestId": "40267528-1137-48cc-b971-d7af60aa93da-e1", "message": "Finished.", "level": "INFO"}
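The RuntimeError in this log ("no kernel image is available for execution on the device") usually means the PyTorch build inside the image contains no compiled kernels for the worker GPU's compute capability, so the very first weight cast to the GPU fails. A quick way to confirm is to compare `torch.cuda.get_arch_list()` (the sm_* architectures the wheel was compiled for) against `torch.cuda.get_device_capability()` (the GPU's own architecture). The sketch below is a hedged illustration of that comparison, not part of worker-comfyui; the helper name and the example arch lists are hypothetical, and it ignores PTX forward compatibility, so treat a False result as a strong hint rather than proof:

```python
def wheel_supports_gpu(arch_list, capability):
    """Rough check: does a PyTorch wheel compiled for `arch_list`
    (e.g. torch.cuda.get_arch_list() -> ['sm_80', 'sm_86', 'sm_90'])
    ship a kernel image for a GPU whose compute capability is the
    (major, minor) tuple from torch.cuda.get_device_capability()?"""
    major, minor = capability
    return f"sm_{major}{minor}" in arch_list

# Hypothetical values for illustration: a wheel built up to sm_90
# runs on a capability-9.0 GPU but not on a capability-12.0 one.
print(wheel_supports_gpu(["sm_80", "sm_86", "sm_90"], (9, 0)))   # True
print(wheel_supports_gpu(["sm_80", "sm_86", "sm_90"], (12, 0)))  # False
```

Inside the container, `python -c "import torch; print(torch.cuda.get_arch_list(), torch.cuda.get_device_capability())"` shows both values directly; if they mismatch, the usual remedies are installing a PyTorch build that includes the GPU's architecture or restricting the serverless endpoint to GPU types the current wheel supports.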