Starting the GUI... this might take some time...
08:16:24-402672 WARNING Skipping requirements verification.
08:16:24-405180 INFO    headless: False
08:16:24-418116 INFO    Using shell=True when running external commands...
* Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
08:19:27-655776 INFO    Loading config...
08:21:09-864068 INFO    Loading config...
08:23:45-632610 INFO    Copy C:/Users/Ney/Downloads/New folder to C:/Users/Ney/Downloads/testkohya\img/1_hanatest1 woman...
08:23:45-637650 INFO    Regularization images directory is missing... not copying regularisation images...
08:23:45-639668 INFO    Done creating kohya_ss training folder structure at C:/Users/Ney/Downloads/testkohya...
08:25:10-693342 INFO    Start training Dreambooth...
08:25:10-695850 INFO    Validating lr scheduler arguments...
08:25:10-697850 INFO    Validating optimizer arguments...
08:25:10-698850 INFO    Validating C:/Users/Ney/Downloads/testkohya\log existence and writability... SUCCESS
08:25:10-699849 INFO    Validating C:/Users/Ney/Downloads/testkohya\model existence and writability... SUCCESS
08:25:10-700850 INFO    Validating C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/unet/flux1-dev.safetensors existence... SUCCESS
08:25:10-703712 INFO    Validating C:/Users/Ney/Downloads/testkohya\img existence... SUCCESS
08:25:10-706307 INFO    Folder 1_hanatest1 woman: 1 repeats found
08:25:10-707629 INFO    Folder 1_hanatest1 woman: 1 images found
08:25:10-708635 INFO    Folder 1_hanatest1 woman: 1 * 1 = 1 steps
08:25:10-709635 INFO    Regularization factor: 1
08:25:10-710889 INFO    Total steps: 1
08:25:10-712389 INFO    Train batch size: 1
08:25:10-713906 INFO    Gradient accumulation steps: 1
08:25:10-714905 INFO    Epoch: 100
08:25:10-716528 INFO    max_train_steps (1 / 1 / 1 * 100 * 1) = 100
08:25:10-717530 INFO    lr_warmup_steps = 0
08:25:10-722043 WARNING Here is the trainer command as a reference.
It will not be executed:
C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no --dynamo_mode default --gpu_ids 0 --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 C:/Users/Ney/Documents/Generative_AI/Kohya_FLUX_DreamBooth_v17/kohya_ss/sd-scripts/flux_train.py --config_file C:/Users/Ney/Downloads/testkohya\model/config_dreambooth-20250227-082510.toml
08:25:10-723579 INFO    Showing toml config file: C:/Users/Ney/Downloads/testkohya\model/config_dreambooth-20250227-082510.toml
08:25:10-730722 INFO
adaptive_noise_scale = 0
ae = "C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/vae/ae.safetensors"
blocks_to_swap = 29
bucket_no_upscale = true
bucket_reso_steps = 64
cache_latents = true
cache_latents_to_disk = true
cache_text_encoder_outputs = true
cache_text_encoder_outputs_to_disk = true
caption_dropout_every_n_epochs = 0
caption_dropout_rate = 0
caption_extension = ".txt"
clip_l = "C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/vae/ae.safetensors"
discrete_flow_shift = 3.1582
double_blocks_to_swap = 0
dynamo_backend = "no"
epoch = 100
full_bf16 = true
fused_backward_pass = true
gradient_accumulation_steps = 1
gradient_checkpointing = true
guidance_scale = 1
huber_c = 0.1
huber_scale = 1
huber_schedule = "snr"
keep_tokens = 0
learning_rate = 4e-6
learning_rate_te = 0
logging_dir = "C:/Users/Ney/Downloads/testkohya\\log"
loss_type = "l2"
lr_scheduler = "constant"
lr_scheduler_args = []
lr_scheduler_num_cycles = 1
lr_scheduler_power = 1
lr_warmup_steps = 0
max_bucket_reso = 2048
max_data_loader_n_workers = 0
max_timestep = 1000
max_token_length = 75
max_train_steps = 100
mem_eff_save = true
min_bucket_reso = 256
mixed_precision = "bf16"
model_prediction_type = "raw"
multires_noise_discount = 0.3
multires_noise_iterations = 0
noise_offset = 0
noise_offset_type = "Original"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False", "weight_decay=0.01",]
optimizer_type = "Adafactor"
output_dir = "C:/Users/Ney/Downloads/testkohya\\model"
output_name = "hanatest1"
persistent_data_loader_workers = 0
pretrained_model_name_or_path = "C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/unet/flux1-dev.safetensors"
prior_loss_weight = 1
resolution = "1024,1024"
sample_prompts = "C:/Users/Ney/Downloads/testkohya\\model\\sample/prompt.txt"
sample_sampler = "euler_a"
save_every_n_epochs = 100
save_model_as = "safetensors"
save_precision = "fp16"
seed = 1
single_blocks_to_swap = 0
t5xxl = "C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/clip/t5xxl_fp16.safetensors "
t5xxl_max_token_length = 512
timestep_sampling = "sigmoid"
train_batch_size = 1
train_blocks = "all"
train_data_dir = "C:/Users/Ney/Downloads/testkohya\\img"
vae_batch_size = 1
wandb_run_name = "hanatest1"
xformers = true
08:25:10-738245 INFO    end of toml config file: C:/Users/Ney/Downloads/testkohya\model/config_dreambooth-20250227-082510.toml
08:25:18-385710 INFO    Start training Dreambooth...
08:25:18-387221 INFO    Validating lr scheduler arguments...
08:25:18-387221 INFO    Validating optimizer arguments...
08:25:18-388224 INFO    Validating C:/Users/Ney/Downloads/testkohya\log existence and writability... SUCCESS
08:25:18-389727 INFO    Validating C:/Users/Ney/Downloads/testkohya\model existence and writability... SUCCESS
08:25:18-390730 INFO    Validating C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/unet/flux1-dev.safetensors existence... SUCCESS
08:25:18-392249 INFO    Validating C:/Users/Ney/Downloads/testkohya\img existence...
SUCCESS
08:25:18-393251 INFO    Folder 1_hanatest1 woman: 1 repeats found
08:25:18-394754 INFO    Folder 1_hanatest1 woman: 1 images found
08:25:18-396262 INFO    Folder 1_hanatest1 woman: 1 * 1 = 1 steps
08:25:18-397278 INFO    Regularization factor: 1
08:25:18-398277 INFO    Total steps: 1
08:25:18-399267 INFO    Train batch size: 1
08:25:18-400769 INFO    Gradient accumulation steps: 1
08:25:18-401772 INFO    Epoch: 100
08:25:18-402276 INFO    max_train_steps (1 / 1 / 1 * 100 * 1) = 100
08:25:18-403277 INFO    lr_warmup_steps = 0
08:25:18-406586 INFO    Saving training config to C:/Users/Ney/Downloads/testkohya\model\hanatest1_20250227-082518.json...
08:25:18-408808 INFO    Executing command: C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no --dynamo_mode default --gpu_ids 0 --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 C:/Users/Ney/Documents/Generative_AI/Kohya_FLUX_DreamBooth_v17/kohya_ss/sd-scripts/flux_train.py --config_file C:/Users/Ney/Downloads/testkohya\model/config_dreambooth-20250227-082518.toml
C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2025-02-27 08:25:29 INFO    Loading settings from C:/Users/Ney/Downloads/testkohya\model/config_dreambooth-20250227-082518.toml...    train_util.py:4625
2025-02-27 08:25:29 INFO    Using DreamBooth method.    flux_train.py:115
                    INFO    prepare images.    train_util.py:2053
                    INFO    get image size from name of cache files    train_util.py:1944
100%|████████████████████| 1/1 [00:00    flux_utils.py:152
                    INFO    [Dataset 0]    train_util.py:2589
                    INFO    caching latents with caching strategy.    train_util.py:1097
                    INFO    caching latents...    train_util.py:1146
100%|████████████████████| 1/1 [00:00<00:00, 1.07it/s]
C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`.
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-02-27 08:25:33 INFO    Building CLIP-L    flux_utils.py:179
                    INFO    Loading state dict from C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/vae/ae.safetensors    flux_utils.py:275
                    INFO    Loaded CLIP-L:    flux_utils.py:278
_IncompatibleKeys(
  missing_keys=['text_model.embeddings.token_embedding.weight',
                'text_model.embeddings.position_embedding.weight',
                [... the full set of 'text_model.encoder.layers.0' through 'text_model.encoder.layers.11' self_attn, mlp, and layer_norm weights and biases ...]
                'text_model.final_layer_norm.weight',
                'text_model.final_layer_norm.bias'],
  unexpected_keys=['decoder.conv_in.bias',
                   'decoder.conv_in.weight',
                   [... the full set of VAE 'decoder.*' and 'encoder.*' block, attention, and norm weights and biases ...]
                   'encoder.norm_out.bias',
                   'encoder.norm_out.weight'])
                    INFO    Loading state dict from C:/Users/Ney/Documents/Generative_AI/ComfyUI/ComfyUI_Models/models/clip/t5xxl_fp16.safetensors    flux_utils.py:330
2025-02-27 08:25:58 INFO    Loaded T5xxl:    flux_utils.py:333
Traceback (most recent call last):
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\sd-scripts\flux_train.py", line 850, in <module>
    train(args)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\sd-scripts\flux_train.py", line 236, in train
    clip_l.to(accelerator.device)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\transformers\modeling_utils.py", line 2905, in to
    return super().to(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
    return self._apply(convert)
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
    module._apply(fn)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
    param_applied = fn(param)
                    ^^^^^^^^^
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\torch\nn\modules\module.py", line 1333, in convert
    raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
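Editor's note: the `NotImplementedError` above is the symptom rather than the cause. In the TOML config earlier in this log, `clip_l` points at the VAE file (`.../vae/ae.safetensors`), which is why the `Loaded CLIP-L: _IncompatibleKeys(...)` message reports every `text_model.*` weight as missing and every VAE `encoder.*`/`decoder.*` weight as unexpected; the CLIP-L parameters presumably never get materialized off the meta device, so the later `clip_l.to(accelerator.device)` fails. A pre-flight check can catch this kind of path mix-up by inspecting a checkpoint's tensor names before training starts. The sketch below assumes only the published safetensors file layout (an unsigned little-endian 64-bit header length followed by a JSON header); `read_safetensors_keys` and `looks_like_clip_text_encoder` are hypothetical helper names, not part of kohya_ss.

```python
import json
import struct

def read_safetensors_keys(path):
    """Return the tensor names stored in a .safetensors file.

    Only the header is read: the file begins with an unsigned
    little-endian 64-bit header length, followed by that many bytes
    of JSON mapping tensor names to dtype/shape/offset metadata.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header.
    return [k for k in header if k != "__metadata__"]

def looks_like_clip_text_encoder(keys):
    """Heuristic: a CLIP-L text encoder checkpoint carries 'text_model.*'
    keys, while a FLUX VAE carries 'encoder.*'/'decoder.*' keys instead."""
    return any(k.startswith("text_model.") for k in keys)
```

Run against the file this config assigns to `clip_l`, such a check would report VAE-style keys and flag the wrong path before `accelerate launch` is ever invoked.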
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Scripts\accelerate.EXE\__main__.py", line 7, in <module>
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\accelerate\commands\accelerate_cli.py", line 48, in main
    args.func(args)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\accelerate\commands\launch.py", line 1199, in launch_command
    simple_launcher(args)
  File "C:\Users\Ney\Documents\Generative_AI\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Lib\site-packages\accelerate\commands\launch.py", line 778, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\Ney\\Documents\\Generative_AI\\Kohya_FLUX_DreamBooth_v17\\kohya_ss\\venv\\Scripts\\python.exe', 'C:/Users/Ney/Documents/Generative_AI/Kohya_FLUX_DreamBooth_v17/kohya_ss/sd-scripts/flux_train.py', '--config_file', 'C:/Users/Ney/Downloads/testkohya\\model/config_dreambooth-20250227-082518.toml']' returned non-zero exit status 1.
08:26:01-682412 INFO    Training has ended.
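Editor's note: the step-count lines the GUI prints (`max_train_steps (1 / 1 / 1 * 100 * 1) = 100`) follow from 1 image times 1 repeat, batch size 1, gradient accumulation 1, 100 epochs, and a regularization factor of 1. The sketch below mirrors that printed formula; `max_train_steps` is a hypothetical helper for illustration, and kohya_ss's own code may round differently:

```python
import math

def max_train_steps(total_steps, batch_size, grad_accum_steps, epochs, reg_factor):
    """Mirror the arithmetic the GUI logs:
    (total_steps / batch_size / grad_accum * epochs * reg_factor),
    where total_steps is images * repeats across all dataset folders."""
    steps_per_epoch = math.ceil(total_steps / batch_size / grad_accum_steps)
    return int(steps_per_epoch * epochs * reg_factor)

# The run in this log: 1 image, batch 1, grad accum 1, 100 epochs, reg factor 1.
print(max_train_steps(1, 1, 1, 100, 1))  # → 100
```

This also shows why 100 epochs over a single image still only produce 100 optimizer steps, which is far below what FLUX DreamBooth runs typically use.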