/kohya_ss/kohya_ss/kohya_gui/common_gui.py:1214: SyntaxWarning: invalid escape sequence '\i'
  "Please follow the folder structure documentation found at docs\image_folder_structure.md ..."
/kohya_ss/kohya_ss/kohya_gui/common_gui.py:1248: SyntaxWarning: invalid escape sequence '\i'
  f"...please follow the folder structure documentation found at docs\image_folder_structure.md ..."
/kohya_ss/kohya_ss/kohya_gui/common_gui.py:1258: SyntaxWarning: invalid escape sequence '\i'
  "Please follow the folder structure documentation found at docs\image_folder_structure.md ..."
/kohya_ss/kohya_ss/kohya_gui/wd14_caption_gui.py:242: SyntaxWarning: invalid escape sequence '\`'
  info="tag replacement in the format of `source1,target1;source2,target2; ...`. Escape `,` and `;` with `\`. e.g. `tag1,tag2;tag3,tag4`",
23:03:18-493653 INFO     Kohya_ss GUI version: v25.2.1
23:03:18-540884 INFO     Submodule initialized and updated.
23:03:18-541709 INFO     nVidia toolkit detected
23:03:19-823869 INFO     Torch 2.7.1+cu118
23:03:20-117529 INFO     Torch backend: nVidia CUDA 11.8 cuDNN 90100
23:03:20-150905 INFO     Torch detected GPU: NVIDIA H100 80GB HBM3 VRAM 81090MB Arch 9.0 Cores 132
23:03:20-151825 INFO     Python version is 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0]
23:03:20-152659 INFO     Installing/Validating requirements from requirements_pytorch_windows.txt...
23:03:20-154062 INFO     Using uv for pip...
23:03:20-238241 ERROR    Failed to install requirements from requirements_pytorch_windows.txt. Pip command: 'uv pip install -r requirements_pytorch_windows.txt'. Exit code: 2
23:03:20-239065 ERROR    Pip stderr: error: No virtual environment found; run `uv venv` to create an environment, or pass `--system` to install into a non-virtual environment
23:03:20-240015 ERROR    Please check the requirements file path, your internet connection, and ensure pip is functioning correctly.
23:03:20-535300 INFO     headless: False
23:03:20-538722 INFO     Using shell=True when running external commands...
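The `SyntaxWarning: invalid escape sequence` lines above come from backslashes such as `\i` inside ordinary Python string literals, which Python 3.12 flags at compile time. A minimal sketch of the usual fixes; the string below stands in for the message in common_gui.py, not the actual source:

```python
# An unrecognized escape like "\i" in a normal string literal triggers a
# SyntaxWarning (and is slated to become an error in a future Python).
# Three equivalent-intent fixes:

fixed_raw = r"docs\image_folder_structure.md"      # raw string: backslash kept literally
fixed_escaped = "docs\\image_folder_structure.md"  # doubled backslash
fixed_slash = "docs/image_folder_structure.md"     # forward slashes also work on Windows

# The raw and escaped forms produce the identical string.
assert fixed_raw == fixed_escaped
print(fixed_raw)  # docs\image_folder_structure.md
```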
* Running on local URL:  http://0.0.0.0:7860
* Running on public URL: https://903bc0cf19c7e560df.gradio.live

This share link expires in 1 week. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
23:05:28-356733 INFO     Copy /kaggle/working/kohya_ss/dataset/sreeleela14/images to /kaggle/working/kohya_ss/training_data/img/1_ohxw woman...
23:05:28-369518 INFO     Regularization images directory is missing... not copying regularisation images...
23:05:28-372617 INFO     Done creating kohya_ss training folder structure at /kaggle/working/kohya_ss/training_data...
23:05:35-037284 INFO     Removing existing directory /kaggle/working/kohya_ss/training_data/img/1_ohxw woman...
23:05:35-041887 INFO     Copy /kaggle/working/kohya_ss/dataset/sreeleela14/images to /kaggle/working/kohya_ss/training_data/img/1_ohxw woman...
23:05:35-055220 INFO     Regularization images directory is missing... not copying regularisation images...
23:05:35-058036 INFO     Done creating kohya_ss training folder structure at /kaggle/working/kohya_ss/training_data...
23:06:11-954936 INFO     Copy /kaggle/working/kohya_ss/dataset/sreeleela14/images to /kaggle/working/kohya_ss/training_data/img/1_ohwx woman...
23:06:11-965543 INFO     Regularization images directory is missing... not copying regularisation images...
23:06:11-968299 INFO     Done creating kohya_ss training folder structure at /kaggle/working/kohya_ss/training_data...
23:07:00-032675 INFO     Removing existing directory /kaggle/working/kohya_ss/training_data/img/1_ohwx woman...
23:07:00-037192 INFO     Copy /kaggle/working/kohya_ss/dataset/sreeleela14/images to /kaggle/working/kohya_ss/training_data/img/1_ohwx woman...
23:07:00-047526 INFO     Regularization images directory is missing... not copying regularisation images...
23:07:00-050531 INFO     Done creating kohya_ss training folder structure at /kaggle/working/kohya_ss/training_data...
23:07:47-101988 INFO     Copy /kaggle/working/kohya_ss/dataset/sreeleela14/images to /kaggle/working/kohya_ss/training_data/img/1_ohwx woman...
23:07:47-115127 INFO     Regularization images directory is missing... not copying regularisation images...
23:07:47-117970 INFO     Done creating kohya_ss training folder structure at /kaggle/working/kohya_ss/training_data...
23:10:50-645022 INFO     Start training Dreambooth...
23:10:50-647638 INFO     Validating lr scheduler arguments...
23:10:50-650177 INFO     Validating optimizer arguments...
23:10:50-652317 INFO     Validating /kaggle/working/kohya_ss/training_data/log existence and writability... SUCCESS
23:10:50-654695 INFO     Validating /kaggle/working/kohya_ss/training_data/model existence and writability... SUCCESS
23:10:50-657460 INFO     Validating /kaggle/working/kohya_ss/flux1-dev.safetensors existence... SUCCESS
23:10:50-659916 INFO     Validating /kaggle/working/kohya_ss/training_data/img existence... SUCCESS
23:10:50-662494 INFO     Error: '.ipynb_checkpoints' does not contain an underscore, skipping...
23:10:50-664814 INFO     Folder 1_ohwx woman: 1 repeats found
23:10:50-666898 INFO     Folder 1_ohwx woman: 10 images found
23:10:50-669102 INFO     Folder 1_ohwx woman: 10 * 1 = 10 steps
23:10:50-671228 INFO     Regularization factor: 1
23:10:50-673121 INFO     Total steps: 10
23:10:50-675116 INFO     Train batch size: 1
23:10:50-677046 INFO     Gradient accumulation steps: 1
23:10:50-678934 INFO     Epoch: 200
23:10:50-680952 INFO     max_train_steps (10 / 1 / 1 * 200 * 1) = 2000
23:10:50-685485 INFO     lr_warmup_steps = 0
23:10:50-691115 INFO     Saving training config to /kaggle/working/kohya_ss/training_data/model/anupama_20250708-231050.json...
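The `max_train_steps` arithmetic logged above can be reproduced directly from the logged quantities. A sketch of the formula as printed, not the GUI's actual code:

```python
import math

total_steps = 10        # 10 images * 1 repeat (from the folder scan above)
train_batch_size = 1
grad_accum_steps = 1
epochs = 200
reg_factor = 1          # regularization factor

# max_train_steps (10 / 1 / 1 * 200 * 1) = 2000
max_train_steps = int(
    math.ceil(total_steps / train_batch_size / grad_accum_steps)
    * epochs
    * reg_factor
)
print(max_train_steps)  # 2000
```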
23:10:50-694977 INFO     Executing command: /usr/local/bin/accelerate launch --dynamo_backend no --dynamo_mode default --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 /kohya_ss/kohya_ss/sd-scripts/flux_train.py --config_file /kaggle/working/kohya_ss/training_data/model/config_dreambooth-20250708-231050.toml
ipex flag is deprecated, will be removed in Accelerate v1.10. From 2.7.0, PyTorch has all needed optimizations for Intel CPU and XPU.
/kohya_ss/kohya_ss/sd-scripts/library/deepspeed_utils.py:131: SyntaxWarning: "is not" with 'str' literal. Did you mean "!="?
  wrap_model_forward_with_torch_autocast = args.mixed_precision is not "no"
/kohya_ss/kohya_ss/sd-scripts/library/strategy_base.py:90: SyntaxWarning: invalid escape sequence '\('
  """
/kohya_ss/kohya_ss/sd-scripts/library/custom_train_functions.py:168: SyntaxWarning: invalid escape sequence '\('
  """
/kohya_ss/kohya_ss/sd-scripts/library/lpw_stable_diffusion.py:64: SyntaxWarning: invalid escape sequence '\('
  """
/kohya_ss/kohya_ss/sd-scripts/library/sdxl_lpw_stable_diffusion.py:76: SyntaxWarning: invalid escape sequence '\('
  """
2025-07-08 23:11:02 INFO     Loading settings from /kaggle/working/kohya_ss/training_data/model/config_dreambooth-20250708-231050.toml... (train_util.py:4651)
2025-07-08 23:11:02 INFO     Using DreamBooth method. (flux_train.py:115)
                    WARNING  ignore directory without repeats: .ipynb_checkpoints (config_util.py:613)
                    INFO     prepare images.
                    INFO     get image size from name of cache files (train_util.py:1965)
100%|███████████████████████████████████████| 10/10 [00:00<00:00, 226719.14it/s]
                    INFO     set image size from cache files: 0/10 (train_util.py:1995)
                    INFO     found directory /kaggle/working/kohya_ss/training_data/img/1_ohwx woman contains 10 image files (train_util.py:2019)
read caption: 100%|██████████████████████████| 10/10 [00:00<00:00, 74764.78it/s]
                    WARNING  No caption file found for 10 images. Training will continue without captions for these images. If class token exists, it will be used. (train_util.py:2050)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681543397075_3680884077.jpg (train_util.py:2057)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681543589299_3680884077.jpg (train_util.py:2057)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681543625143_3680884077.jpg (train_util.py:2057)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681551853828_3680884077.jpg (train_util.py:2057)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681585397843_3680884077.jpg (train_util.py:2057)
                    WARNING  /kaggle/working/kohya_ss/training_data/img/1_ohwx woman/anupamaparameswaran96_1751774400_3670085681585415928_3680884077.jpg... (train_util.py:2055)
                             and 5 more
                    INFO     10 train images with repeats. (train_util.py:2116)
                    INFO     0 reg images with repeats. (train_util.py:2120)
                    WARNING  no regularization images (train_util.py:2125)
                    INFO     [Dataset 0] (config_util.py:580)
                               batch_size: 1
                               resolution: (1024, 1024)
                               resize_interpolation: None
                               enable_bucket: True
                               min_bucket_reso: 256
                               max_bucket_reso: 2048
                               bucket_reso_steps: 64
                               bucket_no_upscale: True

                               [Subset 0 of Dataset 0]
                                 image_dir: "/kaggle/working/kohya_ss/training_data/img/1_ohwx woman"
                                 image_count: 10
                                 num_repeats: 1
                                 shuffle_caption: False
                                 keep_tokens: 0
                                 caption_dropout_rate: 0
                                 caption_dropout_every_n_epochs: 0
                                 caption_tag_dropout_rate: 0.0
                                 caption_prefix: None
                                 caption_suffix: None
                                 color_aug: False
                                 flip_aug: False
                                 face_crop_aug_range: None
                                 random_crop: False
                                 token_warmup_min: 1, token_warmup_step: 0
                                 alpha_mask: False
                                 resize_interpolation: None
                                 custom_attributes: {}
                                 is_reg: False
                                 class_tokens: ohwx woman
                                 caption_extension: .txt
                    INFO     [Prepare dataset 0] (config_util.py:592)
                    INFO     loading image sizes.
100%|████████████████████████████████████████| 10/10 [00:00<00:00, 43645.20it/s]
                    INFO     make buckets (train_util.py:1010)
                    WARNING  min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically (train_util.py:1027)
                    INFO     number of images per bucket (including repeats) (train_util.py:1056)
                    INFO     bucket 0: resolution (832, 1088), count: 10 (train_util.py:1061)
                    INFO     mean ar error (without repeats): 0.014666190413822422 (train_util.py:1069)
                    INFO     Checking the state dict: Diffusers or BFL, dev or schnell (flux_utils.py:43)
                    INFO     prepare accelerator
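The bucket resolution above comes from aspect-ratio bucketing: with `bucket_no_upscale` set, each image's own size is scaled to fit the 1024x1024 area budget and snapped to multiples of `bucket_reso_steps` (64). Below is a simplified sketch of that idea, not kohya's exact algorithm (which also searches neighbouring multiples to minimise aspect-ratio error); the 1080x1350 input size is a hypothetical example:

```python
import math

MAX_AREA = 1024 * 1024   # resolution (1024, 1024)
STEP = 64                # bucket_reso_steps

def bucket_resolution(width: int, height: int) -> tuple[int, int]:
    """Simplified bucketing: scale the image down (never up, as with
    bucket_no_upscale) so its area fits MAX_AREA, then snap each side
    down to a multiple of STEP."""
    scale = min(1.0, math.sqrt(MAX_AREA / (width * height)))
    bw = int(width * scale) // STEP * STEP
    bh = int(height * scale) // STEP * STEP
    return bw, bh

# e.g. a hypothetical 1080x1350 portrait photo:
print(bucket_resolution(1080, 1350))  # (896, 1088)
```

Both sides land on multiples of 64 and the area stays within the budget; the real implementation's neighbour search explains why the log's bucket can differ from this naive rounding.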
                             accelerator device: cuda (flux_train.py:186)
                    INFO     Building AutoEncoder (flux_utils.py:144)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/ae.safetensors (flux_utils.py:149)
                    INFO     Loaded AE:
                    INFO     [Dataset 0] (train_util.py:2613)
                    INFO     caching latents with caching strategy. (train_util.py:1115)
                    INFO     caching latents... (train_util.py:1164)
100%|███████████████████████████████████████████| 10/10 [00:00<00:00, 10.49it/s]
tokenizer_config.json: 100%|███████████████████| 905/905 [00:00<00:00, 3.32MB/s]
vocab.json: 961kB [00:00, 28.8MB/s]
merges.txt: 525kB [00:00, 19.9MB/s]
special_tokens_map.json: 100%|█████████████████| 389/389 [00:00<00:00, 1.58MB/s]
tokenizer.json: 2.22MB [00:00, 60.5MB/s]
/usr/local/lib/python3.12/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default.
For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
tokenizer_config.json: 1.86kB [00:00, 12.3MB/s]
spiece.model: 100%|██████████████████████████| 792k/792k [00:00<00:00, 59.4MB/s]
special_tokens_map.json: 1.79kB [00:00, 5.39MB/s]
config.json: 100%|█████████████████████████████| 593/593 [00:00<00:00, 2.65MB/s]
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-07-08 23:11:06 INFO     Building CLIP-L (flux_utils.py:179)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/clip_l.safetensors (flux_utils.py:275)
                    INFO     Loaded CLIP-L:
                    INFO     Loading state dict from /kaggle/working/kohya_ss/t5xxl_fp16.safetensors (flux_utils.py:330)
2025-07-08 23:11:11 INFO     Loaded T5xxl:
2025-07-08 23:11:13 INFO     [Dataset 0] (train_util.py:2635)
                    INFO     caching Text Encoder outputs with caching strategy. (train_util.py:1298)
                    INFO     checking cache validity... (train_util.py:1309)
100%|████████████████████████████████████████| 10/10 [00:00<00:00, 70611.18it/s]
                    INFO     caching Text Encoder outputs... (train_util.py:1340)
100%|███████████████████████████████████████████| 10/10 [00:00<00:00, 18.82it/s]
2025-07-08 23:11:14 INFO     cache Text Encoder outputs for sample prompt: /kaggle/working/kohya_ss/training_data/model/sample/prompt.txt (flux_train.py:249)
                    INFO     Checking the state dict: Diffusers or BFL, dev or schnell (flux_utils.py:43)
                    INFO     Building Flux model dev from BFL checkpoint (flux_utils.py:101)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/flux1-dev.safetensors (flux_utils.py:118)
                    INFO     Loaded Flux:
FLUX: Gradient checkpointing enabled. CPU offload: False
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01} (train_util.py:4963)
                    WARNING  because max_grad_norm is set, clip_grad_norm is enabled; consider setting it to 0 to disable it (train_util.py:4991)
                    WARNING  the constant_with_warmup scheduler may be a better choice (train_util.py:4995)
enable full bf16 training.
running training
  num examples: 10
  num batches per epoch: 10
  num epochs: 200
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 2000
steps:   0%|          | 0/2000 [00:00<?, ?it/s]
                    INFO     [Dataset 0] (train_util.py:2613)
                    INFO     caching latents with caching strategy. (train_util.py:1115)
                    INFO     caching latents...
100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 7891.45it/s]
/usr/local/lib/python3.12/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default.
For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-07-08 23:13:29 INFO     Building CLIP-L (flux_utils.py:179)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/clip_l.safetensors (flux_utils.py:275)
2025-07-08 23:13:30 INFO     Loaded CLIP-L:
                    INFO     Loading state dict from /kaggle/working/kohya_ss/t5xxl_fp16.safetensors (flux_utils.py:330)
2025-07-08 23:13:35 INFO     Loaded T5xxl:
2025-07-08 23:13:37 INFO     [Dataset 0]
                    INFO     caching Text Encoder outputs with caching strategy. (train_util.py:1298)
                    INFO     checking cache validity... (train_util.py:1309)
100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 4487.33it/s]
                    INFO     no Text Encoder outputs to cache (train_util.py:1336)
                    INFO     cache Text Encoder outputs for sample prompt: /kaggle/working/kohya_ss/training_data/model/sample/prompt.txt (flux_train.py:249)
                    INFO     cache Text Encoder outputs for prompt: ohwx woman in a sophisticated winter ensemble: wool coat in burgundy, matching beret, cashmere turtleneck, plaid wool skirt below knees, leather gloves, classic ice skates, Italian wool scarf, city plaza setting with snow, twilight atmosphere with twinkling lights (flux_train.py:259)
                    INFO     cache Text Encoder outputs for prompt: (flux_train.py:259)
                    INFO     Checking the state dict: Diffusers or BFL, dev or schnell (flux_utils.py:43)
                    INFO     Building Flux model dev from BFL checkpoint (flux_utils.py:101)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/flux1-dev.safetensors (flux_utils.py:118)
                    INFO     Loaded Flux:
FLUX: Gradient checkpointing enabled. CPU offload: False
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01} (train_util.py:4963)
                    WARNING  because max_grad_norm is set, clip_grad_norm is enabled; consider setting it to 0 to disable it (train_util.py:4991)
                    WARNING  the constant_with_warmup scheduler may be a better choice (train_util.py:4995)
enable full bf16 training.
running training
  num examples: 10
  num batches per epoch: 10
  num epochs: 200
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 2000
steps:   0%|          | 0/2000 [00:00<?, ?it/s]
2025-07-08 23:25:29 INFO     [Dataset 0] (train_util.py:2613)
                    INFO     caching latents with caching strategy. (train_util.py:1115)
                    INFO     caching latents... (train_util.py:1164)
100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 7491.17it/s]
/usr/local/lib/python3.12/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default.
For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`.
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-07-08 23:25:30 INFO     Building CLIP-L (flux_utils.py:179)
                    INFO     Loading state dict from /kaggle/working/kohya_ss/clip_l.safetensors (flux_utils.py:275)
                    INFO     Loaded CLIP-L:
                    INFO     Loading state dict from /kaggle/working/kohya_ss/t5xxl_fp16.safetensors (flux_utils.py:330)
2025-07-08 23:25:35 INFO     Loaded T5xxl:
2025-07-08 23:25:37 INFO     [Dataset 0] (train_util.py:2635)
                    INFO     caching Text Encoder outputs with caching strategy. (train_util.py:1298)
                    INFO     checking cache validity...
]8;id=40518;file:///kohya_ss/kohya_ss/sd-scripts/library/train_util.py\train_util.py]8;;\:]8;id=883383;file:///kohya_ss/kohya_ss/sd-scripts/library/train_util.py#1309\1309]8;;\ 100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 4471.06it/s] INFO no Text Encoder outputs to cache ]8;id=566860;file:///kohya_ss/kohya_ss/sd-scripts/library/train_util.py\train_util.py]8;;\:]8;id=716700;file:///kohya_ss/kohya_ss/sd-scripts/library/train_util.py#1336\1336]8;;\ INFO cache Text Encoder outputs for ]8;id=547136;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py\flux_train.py]8;;\:]8;id=851054;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py#249\249]8;;\ sample prompt: /kaggle/working/kohya_ss/training _data/model/sample/prompt.txt INFO cache Text Encoder outputs for ]8;id=618451;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py\flux_train.py]8;;\:]8;id=865351;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py#259\259]8;;\ prompt: ohwx woman in a sophisticated winter ensemble: wool coat in burgundy, matching beret, cashmere turtleneck, plaid wool skirt below knees, leather gloves, classic ice skates, Italian wool scarf, city plaza setting with snow, twilight atmosphere with twinkling lights INFO cache Text Encoder outputs for ]8;id=472449;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py\flux_train.py]8;;\:]8;id=516586;file:///kohya_ss/kohya_ss/sd-scripts/flux_train.py#259\259]8;;\ prompt: INFO Checking the state dict: Diffusers ]8;id=831861;file:///kohya_ss/kohya_ss/sd-scripts/library/flux_utils.py\flux_utils.py]8;;\:]8;id=374727;file:///kohya_ss/kohya_ss/sd-scripts/library/flux_utils.py#43\43]8;;\ or BFL, dev or schnell INFO Building Flux model dev from BFL ]8;id=615592;file:///kohya_ss/kohya_ss/sd-scripts/library/flux_utils.py\flux_utils.py]8;;\:]8;id=660757;file:///kohya_ss/kohya_ss/sd-scripts/library/flux_utils.py#101\101]8;;\ checkpoint 2025-07-08 23:25:38 INFO Loading state dict from 
                                                                              flux_utils.py:118
                             /kaggle/working/kohya_ss/flux1-dev.safetensors
                    INFO     Loaded Flux:
FLUX: Gradient checkpointing enabled. CPU offload: False
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer |                        train_util.py:4963
                             {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}
                    WARNING  because max_grad_norm is set, clip_grad_norm is  train_util.py:4991
                             enabled; consider setting it to 0 to disable clipping
                    WARNING  the constant_with_warmup scheduler may be a      train_util.py:4995
                             better choice
enable full bf16 training.
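The log reports 11,901,408,320 trainable parameters with full bf16 training enabled. A rough back-of-the-envelope check of the weight memory alone (a sketch; the function name is illustrative, and real usage also includes Adafactor state, activations, and the cached CLIP-L/T5 components):

```python
# Rough weight-memory estimate for full-bf16 FLUX.1-dev fine-tuning.
# The parameter count is taken from the log above; everything else is
# a simplifying assumption, not a measurement.

BYTES_PER_BF16 = 2

def weight_memory_gib(num_params: int, bytes_per_param: int = BYTES_PER_BF16) -> float:
    """Memory needed just to hold the parameters, in GiB."""
    return num_params * bytes_per_param / 1024**3

params = 11_901_408_320  # trainable parameters reported by flux_train.py
print(f"bf16 weights: {weight_memory_gib(params):.1f} GiB")  # → bf16 weights: 22.2 GiB
```

At roughly 22 GiB for weights alone, the H100's 80 GB of VRAM leaves headroom for gradients and activations; Adafactor's factored second-moment statistics also need far less memory than Adam's two fp32 tensors per parameter would.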
running training
  num examples: 10
  num batches per epoch: 10
  num epochs: 200
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 2000
steps:   0%|          | 0/2000 [00:00<?, ?it/s]
2025-07-09 00:12:41 INFO     [Dataset 0]                                      train_util.py:2613
                    INFO     caching latents with caching strategy.           train_util.py:1115
                    INFO     caching latents...                               train_util.py:1164
100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 8700.07it/s]
/usr/local/lib/python3.12/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the T5 tokenizer. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`.
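The step count in the run summary above follows directly from the dataset: 10 examples at batch size 1 with no gradient accumulation give 10 batches per epoch, and 200 epochs give 2000 optimization steps. A minimal sketch of that arithmetic (the function name is illustrative, not part of sd-scripts):

```python
import math

def total_optimization_steps(num_examples: int, batch_size: int,
                             grad_accum_steps: int, num_epochs: int) -> int:
    """Optimizer steps for one run, mirroring the log's training summary."""
    batches_per_epoch = math.ceil(num_examples / batch_size)
    steps_per_epoch = math.ceil(batches_per_epoch / grad_accum_steps)
    return steps_per_epoch * num_epochs

print(total_optimization_steps(10, 1, 1, 200))  # → 2000
print(total_optimization_steps(15, 1, 1, 200))  # → 3000 (the later 15-image run)
```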
This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-07-09 00:12:42 INFO     Building CLIP-L                                    flux_utils.py:179
                    INFO     Loading state dict from                           flux_utils.py:275
                             /kaggle/working/kohya_ss/clip_l.safetensors
                    INFO     Loaded CLIP-L:
                    INFO     Loading state dict from                           flux_utils.py:330
                             /kaggle/working/kohya_ss/t5xxl_fp16.safetensors
2025-07-09 00:12:47 INFO     Loaded T5xxl:
2025-07-09 00:12:48 INFO     [Dataset 0]                                      train_util.py:2635
                    INFO     caching Text Encoder outputs with caching        train_util.py:1298
                             strategy.
                    INFO     checking cache validity...
                                                                             train_util.py:1309
100%|█████████████████████████████████████████| 10/10 [00:00<00:00, 5098.84it/s]
                    INFO     no Text Encoder outputs to cache                 train_util.py:1336
                    INFO     cache Text Encoder outputs for sample prompt:     flux_train.py:249
                             /kaggle/working/kohya_ss/training_data/model/sample/prompt.txt
                    INFO     cache Text Encoder outputs for prompt: ohwx man   flux_train.py:259
                             close up photoshoot
                    INFO     cache Text Encoder outputs for prompt:            flux_train.py:259
2025-07-09 00:12:49 INFO     Checking the state dict: Diffusers or BFL, dev     flux_utils.py:43
                             or schnell
                    INFO     Building Flux model dev from BFL checkpoint       flux_utils.py:101
                    INFO     Loading state dict from                           flux_utils.py:118
                             /kaggle/working/kohya_ss/flux1-dev.safetensors
                    INFO     Loaded Flux:
FLUX: Gradient checkpointing enabled.
CPU offload: False
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer |                        train_util.py:4963
                             {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}
                    WARNING  because max_grad_norm is set, clip_grad_norm is  train_util.py:4991
                             enabled; consider setting it to 0 to disable clipping
                    WARNING  the constant_with_warmup scheduler may be a      train_util.py:4995
                             better choice
enable full bf16 training.
running training
  num examples: 10
  num batches per epoch: 10
  num epochs: 200
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 2000
steps:   0%|          | 0/2000 [00:00<?, ?it/s]
                    INFO     [Dataset 0]                                      train_util.py:2613
2025-07-09 00:38:53 INFO     caching latents with caching strategy.           train_util.py:1115
                    INFO     caching latents...
                                                                             train_util.py:1164
100%|███████████████████████████████████████████| 15/15 [00:01<00:00, 14.72it/s]
/usr/local/lib/python3.12/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the T5 tokenizer. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-07-09 00:38:55 INFO     Building CLIP-L                                    flux_utils.py:179
                    INFO     Loading state dict from                           flux_utils.py:275
                             /kaggle/working/kohya_ss/clip_l.safetensors
                    INFO     Loaded CLIP-L:
                    INFO     Loading state dict from                           flux_utils.py:330
                             /kaggle/working/kohya_ss/t5xxl_fp16.safetensors
2025-07-09 00:39:00 INFO     Loaded T5xxl:
2025-07-09 00:39:02 INFO     [Dataset 0]
                                                                             train_util.py:2635
                    INFO     caching Text Encoder outputs with caching        train_util.py:1298
                             strategy.
                    INFO     checking cache validity...                       train_util.py:1309
100%|████████████████████████████████████████| 15/15 [00:00<00:00, 81390.12it/s]
                    INFO     caching Text Encoder outputs...                  train_util.py:1340
100%|███████████████████████████████████████████| 15/15 [00:00<00:00, 21.49it/s]
                    INFO     cache Text Encoder outputs for sample prompt:     flux_train.py:249
                             /kaggle/working/kohya_ss/training_data/model/sample/prompt.txt
                    INFO     cache Text Encoder outputs for prompt: ohwx man   flux_train.py:259
                             close up photoshoot
                    INFO     cache Text Encoder outputs for prompt:            flux_train.py:259
                    INFO     Checking the state dict: Diffusers or BFL, dev     flux_utils.py:43
                             or schnell
                    INFO     Building Flux model dev from BFL
                             checkpoint                                       flux_utils.py:101
                    INFO     Loading state dict from                          flux_utils.py:118
                             /kaggle/working/kohya_ss/flux1-dev.safetensors
2025-07-09 00:39:03 INFO     Loaded Flux:
FLUX: Gradient checkpointing enabled. CPU offload: False
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer |                        train_util.py:4963
                             {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}
                    WARNING  because max_grad_norm is set, clip_grad_norm is  train_util.py:4991
                             enabled; consider setting it to 0 to disable clipping
                    WARNING  the constant_with_warmup scheduler may be a      train_util.py:4995
                             better choice
enable full bf16 training.
running training
  num examples: 15
  num batches per epoch: 15
  num epochs: 200
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 3000
steps:   0%|          | 0/3000 [00:00<?, ?it/s]
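Before each run above, the trainer checks cache validity and caches text-encoder outputs per prompt, so CLIP-L and T5-XXL never have to run during the training loop itself. A toy sketch of the idea, keying cached embeddings by a hash of the prompt text (class, names, and the stand-in encoder are hypothetical, not the sd-scripts implementation):

```python
import hashlib

class PromptOutputCache:
    """Toy prompt-keyed cache: encode each unique prompt exactly once."""

    def __init__(self, encode_fn):
        self.encode_fn = encode_fn  # the expensive text-encoder call
        self.store = {}
        self.misses = 0

    def get(self, prompt: str):
        # Hash the prompt text to get a stable cache key.
        key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
        if key not in self.store:      # "checking cache validity..."
            self.misses += 1
            self.store[key] = self.encode_fn(prompt)
        return self.store[key]

# Stand-in "encoder": just the prompt length (a real one returns tensors).
cache = PromptOutputCache(lambda p: len(p))
cache.get("ohwx man close up photoshoot")
cache.get("ohwx man close up photoshoot")  # second call is served from cache
print(cache.misses)  # → 1
```

This is also why the second 10-image run logs "no Text Encoder outputs to cache": the outputs from the first run were still valid on disk.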