18:29:26-604214 INFO Start training LoRA Standard ...
18:29:26-606158 INFO Validating lr scheduler arguments...
18:29:26-608031 INFO Validating optimizer arguments...
18:29:26-609606 INFO Validating /home/kasm-user/Desktop/training/log existence and writability... SUCCESS
18:29:26-611411 INFO Validating /home/kasm-user/Desktop/training/model existence and writability... SUCCESS
18:29:26-614956 INFO Validating stabilityai/stable-diffusion-xl-base-1.0 existence... SUCCESS
18:29:26-617766 INFO Validating /home/kasm-user/Desktop/training/img existence... SUCCESS
18:29:27-991006 INFO Folder 1_serene_grove landscape: 1 repeats found
18:29:27-993861 INFO Folder 1_serene_grove landscape: 20 images found
18:29:27-995594 INFO Folder 1_serene_grove landscape: 20 * 1 = 20 steps
18:29:27-997465 INFO Regularization factor: 1
18:29:27-999030 INFO Train batch size: 1
18:29:28-000678 INFO Gradient accumulation steps: 1
18:29:28-002258 INFO Epoch: 1
18:29:28-003684 INFO Max train steps: 3000
18:29:28-005112 INFO stop_text_encoder_training = 0
18:29:28-006738 INFO lr_warmup_steps = 0
18:29:28-008397 INFO Effective Learning Rate Configuration (based on GUI settings):
18:29:28-010409 INFO   - Main LR (for optimizer & fallback): 1.00e+00
18:29:28-012123 INFO   - Text Encoder (Primary/CLIP) Effective LR: 1.00e+00 (Specific Value)
18:29:28-013994 INFO   - Text Encoder (T5XXL, if applicable) Effective LR: 1.00e+00 (Inherited from Primary TE LR)
18:29:28-015862 INFO   - U-Net Effective LR: 1.00e+00 (Specific Value)
18:29:28-017525 INFO Note: These LRs reflect the GUI's direct settings. Advanced options in sd-scripts (e.g., block LRs, LoRA+) can further modify rates for specific layers.
18:29:28-020793 INFO Saving training config to /home/kasm-user/Desktop/training/model/S3r3b3 Gr0v3_20250718-182928.json...
18:29:28-023668 INFO Executing command: /home/kasm-user/venv/bin/accelerate launch --dynamo_backend no --dynamo_mode default --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 /home/kasm-user/kohya_ss/sd-scripts/sdxl_train_network.py --config_file /home/kasm-user/Desktop/training/model/config_lora-20250718-182928.toml --lr_scheduler_type CosineAnnealingLR --lr_scheduler_args T_max=1000 eta_min=0e-0
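Two things are worth noting about this command before the runtime output below. First, `--lr_scheduler_args T_max=1000 eta_min=0e-0` are keyword arguments forwarded to PyTorch's `torch.optim.lr_scheduler.CosineAnnealingLR`. Second, a main LR of 1.00e+00 only makes sense with an adaptive optimizer such as Prodigy or DAdaptation (assuming that is what the GUI is set to; plain AdamW would diverge at that rate). For scale: at batch size 1 with 20 images x 1 repeat, one epoch is 20 steps, so 3000 max train steps means 150 epochs. A minimal standalone sketch of the scheduler setup these flags describe (the dummy parameter and optimizer choice are illustrative, not kohya's actual wiring):

```python
import torch

# Dummy parameter so the optimizer has something to manage (illustrative only).
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1.0)  # "Main LR" from the log above

# Equivalent of: --lr_scheduler_type CosineAnnealingLR
#                --lr_scheduler_args T_max=1000 eta_min=0e-0
# T_max is the number of scheduler steps from the base LR down to eta_min
# (0e-0 == 0.0, i.e. the LR decays all the way to zero).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1000, eta_min=0.0
)

for step in range(3000):  # "Max train steps: 3000" from the log
    optimizer.step()
    scheduler.step()
    if step in (0, 999, 1999):
        print(step, scheduler.get_last_lr())  # LR at the cosine's key points
```

One caveat: `CosineAnnealingLR` is periodic, so with `T_max=1000` but 3000 total steps the LR decays to `eta_min` at step 1000, climbs back to the base LR by step 2000, and decays again. Setting `T_max` equal to the full step count gives a single smooth decay.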
ipex flag is deprecated, will be removed in Accelerate v1.10. From 2.7.0, PyTorch has all needed optimizations for Intel CPU and XPU.
2025-07-18 18:29:36.913656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1752863376.935542 39766 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1752863376.944209 39766 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1752863376.961390 39766 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once. (repeated 4x)
2025-07-18 18:29:36.966180: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/home/kasm-user/kohya_ss/sd-scripts/library/deepspeed_utils.py:131: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  wrap_model_forward_with_torch_autocast = args.mixed_precision is not "no"
2025-07-18 18:29:39 INFO Loading settings from /home/kasm-user/Desktop/training/model/config_lora-20250718-182928.toml...  train_util.py:4651
tokenizer_config.json: 100%|███████████████████| 905/905 [00:00<00:00, 6.81MB/s]
vocab.json: 961kB [00:00, 31.6MB/s]
merges.txt: 525kB [00:00, 62.9MB/s]
special_tokens_map.json: 100%|█████████████████| 389/389 [00:00<00:00, 2.43MB/s]
tokenizer.json: 2.22MB [00:00, 104MB/s]
/home/kasm-user/venv/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
tokenizer_config.json: 100%|███████████████████| 904/904 [00:00<00:00, 8.48MB/s]
vocab.json: 862kB [00:00, 68.1MB/s]
merges.txt: 525kB [00:00, 59.6MB/s]
special_tokens_map.json: 100%|█████████████████| 389/389 [00:00<00:00, 2.08MB/s]
tokenizer.json: 2.22MB [00:00, 98.7MB/s]
2025-07-18 18:29:41 INFO Using DreamBooth method.  train_network.py:517
INFO prepare images.  train_util.py:2072
INFO get image size from name of cache files  train_util.py:1965
100%|███████████████████████████████████████| 20/20 [00:00<00:00, 366314.76it/s]
INFO set image size from cache files: 0/20  train_util.py:1995
INFO found directory /home/kasm-user/Desktop/training/img/1_serene_grove landscape contains 20 image files  train_util.py:2019
read caption: 100%|██████████████████████████| 20/20 [00:00<00:00, 21167.32it/s]
INFO 20 train images with repeats.  train_util.py:2116
INFO 0 reg images with repeats.  train_util.py:2120
WARNING no regularization images found  train_util.py:2125
INFO [Dataset 0]  config_util.py:580
  batch_size: 1
  resolution: (1024, 1024)
  resize_interpolation: None
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 2048
  bucket_reso_steps: 32
  bucket_no_upscale: False

  [Subset 0 of Dataset 0]
    image_dir: "/home/kasm-user/Desktop/training/img/1_serene_grove landscape"
    image_count: 20
    num_repeats: 1
    shuffle_caption: True
    keep_tokens: 0
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epochs: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1
    token_warmup_step: 0
    alpha_mask: False
    resize_interpolation: None
    custom_attributes: {}
    is_reg: False
    class_tokens: serene_grove landscape
    caption_extension: .txt
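A side note on the `SyntaxWarning` from `deepspeed_utils.py` earlier in the log: it flags a genuine bug, not just noise. `is not` compares object identity, not value, so `args.mixed_precision is not "no"` only behaves like `!=` when CPython happens to intern both strings. A standalone demonstration (the variable name is illustrative):

```python
# A string equal to "no" but built at runtime, so it is a distinct object.
mixed_precision = "".join(["n", "o"])

print(mixed_precision == "no")          # True  -- value equality, the intended test
print(mixed_precision != "no")          # False -- what the flagged line should use
print(id(mixed_precision) == id("no"))  # False -- identity, what `is not` compares

# The flagged line:
#   wrap_model_forward_with_torch_autocast = args.mixed_precision is not "no"
# should read:
#   wrap_model_forward_with_torch_autocast = args.mixed_precision != "no"
```

In practice it usually works by accident, because short literals are interned; that accident is exactly why Python emits a warning here instead of an error.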
INFO [Prepare dataset 0]  config_util.py:592
INFO loading image sizes.  train_util.py:987
100%|████████████████████████████████████████| 20/20 [00:00<00:00, 35010.88it/s]
INFO make buckets  train_util.py:1010
INFO number of images (including repeats) per bucket  train_util.py:1056
INFO bucket 0: resolution (1248, 832), count: 20  train_util.py:1061
INFO mean ar error (without repeats): 0.0  train_util.py:1069
WARNING clip_skip will be unexpected / clip_skip does not work in SDXL training  sdxl_train_util.py:349
INFO preparing accelerator  train_network.py:580
accelerator device: cuda
INFO loading model for process 0/1  sdxl_train_util.py:32
INFO load Diffusers pretrained models: stabilityai/stable-diffusion-xl-base-1.0, variant=None  sdxl_train_util.py:87
model_index.json: 100%|████████████████████████| 609/609 [00:00<00:00, 1.16MB/s]
config.json: 100%|█████████████████████████████| 565/565 [00:00<00:00, 3.21MB/s]
scheduler_config.json: 100%|████████████████████| 479/479 [00:00<00:00, 670kB/s]
config.json: 100%|██████████████████████████████| 575/575 [00:00<00:00, 425kB/s]
tokenizer_config.json: 100%|████████████████████| 725/725 [00:00<00:00, 672kB/s]
special_tokens_map.json: 100%|█████████████████| 460/460 [00:00<00:00, 3.22MB/s]
merges.txt: 525kB [00:00, 14.8MB/s]
config.json: 1.68kB [00:00, 762kB/s]
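For context on `bucket 0: resolution (1248, 832)` above: with `enable_bucket: True`, aspect-ratio bucketing assigns each image a resolution whose sides are multiples of `bucket_reso_steps` (32 here) and whose pixel area stays within the base resolution's budget (1024 x 1024 = 1,048,576 px). A simplified sketch of that selection logic (a reconstruction of the idea; sd-scripts' actual implementation differs in details such as upscaling rules):

```python
import math

def pick_bucket(aspect_ratio: float, max_area: int = 1024 * 1024,
                step: int = 32) -> tuple[int, int]:
    """Largest (width, height) with both sides multiples of `step`,
    width/height ~= aspect_ratio, and width * height <= max_area."""
    ideal_h = math.sqrt(max_area / aspect_ratio)  # real-valued sides that hit
    ideal_w = ideal_h * aspect_ratio              # the area budget exactly
    # Snapping both sides down to the grid can only shrink the area,
    # so the budget is never exceeded.
    return int(ideal_w // step) * step, int(ideal_h // step) * step

print(pick_bucket(1.5))  # (1248, 832) -- the bucket reported in the log
```

A 3:2 landscape image lands exactly on this grid (1248 x 832 = 1,038,336 px, ratio 1.5), which is consistent with all 20 images falling into a single bucket and a mean aspect-ratio error of 0.0.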