21:05:05-010821 WARNING Skipping requirements verification.
21:05:05-012820 INFO    headless: False
21:05:05-013793 INFO    Using shell=True when running external commands...
* Running on local URL:  http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
21:05:11-910261 INFO    Start training Dreambooth...
21:05:11-914214 INFO    Validating lr scheduler arguments...
21:05:11-915213 INFO    Validating optimizer arguments...
21:05:11-916214 INFO    Validating C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model existence and writability... SUCCESS
21:05:11-916214 INFO    Validating C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/flux1-dev.safetensors existence... SUCCESS
21:05:11-917213 INFO    Validating C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\img existence... SUCCESS
21:05:11-918213 INFO    Folder 1_ohwx man: 1 repeats found
21:05:11-919681 INFO    Folder 1_ohwx man: 115 images found
21:05:11-920681 INFO    Folder 1_ohwx man: 115 * 1 = 115 steps
21:05:11-920681 INFO    Regularization factor: 1
21:05:11-921682 INFO    Total steps: 115
21:05:11-921682 INFO    Train batch size: 1
21:05:11-922682 INFO    Gradient accumulation steps: 1
21:05:11-923062 INFO    Epoch: 20
21:05:11-923062 INFO    max_train_steps (115 / 1 / 1 * 20 * 1) = 2300
21:05:11-924105 INFO    lr_warmup_steps = 0
21:05:11-926104 INFO    Saving training config to C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model\Quality_111_20250302-210511.json...
21:05:11-928061 INFO    Executing command: C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no --dynamo_mode default --gpu_ids 0 --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 2 C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/kohya_ss/sd-scripts/flux_train.py --config_file C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model/config_dreambooth-20250302-210511.toml
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2025-03-02 21:05:17 INFO    Loading settings from C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model/config_dreambooth-20250302-210511.toml...    train_util.py:4625
2025-03-02 21:05:17 INFO    Using DreamBooth method.    flux_train.py:115
                    INFO    prepare images.    train_util.py:2053
                    INFO    get image size from name of cache files    train_util.py:1944
100%|████████████████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, …it/s]
                    INFO    …    flux_utils.py:152
2025-03-02 21:05:18 INFO    [Dataset 0]    train_util.py:2589
                    INFO    caching latents with caching strategy.    train_util.py:1097
                    INFO    caching latents...    train_util.py:1146
100%|██████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, 7358.65it/s]
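(Aside: the max_train_steps arithmetic kohya logs above is easy to sanity-check. A minimal Python sketch; the helper name and keyword names are illustrative, not sd-scripts internals:)

```python
# Reproduce kohya's logged step math:
#   steps = images * repeats / batch_size / grad_accum * epochs * reg_factor
def max_train_steps(images: int, repeats: int, batch_size: int,
                    grad_accum: int, epochs: int, reg_factor: int = 1) -> int:
    steps_per_epoch = (images * repeats) // (batch_size * grad_accum)
    return steps_per_epoch * epochs * reg_factor

# Values from the log: (115 / 1 / 1 * 20 * 1) = 2300
assert max_train_steps(115, repeats=1, batch_size=1,
                       grad_accum=1, epochs=20) == 2300
```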
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the T5 tokenizer. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
                    INFO    Building CLIP-L    flux_utils.py:179
                    INFO    Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/clip_l.safetensors    flux_utils.py:275
2025-03-02 21:05:19 INFO    Loaded CLIP-L:    flux_utils.py:278
                    INFO    Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/t5xxl_fp16.safetensors    flux_utils.py:330
2025-03-02 21:05:31 INFO    Loaded T5xxl:    flux_utils.py:333
2025-03-02 21:05:33 INFO    [Dataset 0]    train_util.py:2611
                    INFO    caching Text Encoder outputs with caching strategy.    train_util.py:1280
                    INFO    checking cache validity...    train_util.py:1291
100%|██████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, 3679.80it/s]
2025-03-02 21:05:34 INFO    no Text Encoder outputs to cache    train_util.py:1318
                    INFO    cache Text Encoder outputs for sample prompt: C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model\sample/prompt.txt    flux_train.py:249
                    INFO    Checking the state dict: Diffusers or BFL, dev or schnell    flux_utils.py:43
                    INFO    Building Flux model dev from BFL checkpoint    flux_utils.py:101
                    INFO    Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/flux1-dev.safetensors    flux_utils.py:118
                    INFO    Loaded Flux:    flux_utils.py:137
FLUX: Gradient checkpointing enabled. CPU offload: False
                    INFO    enable block swap: blocks_to_swap=8    flux_train.py:304
FLUX: Block swap enabled. Swapping 8 blocks, double blocks: 4, single blocks: 8.
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO    use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}    train_util.py:4937
                    WARNING because max_grad_norm is set, clip_grad_norm is enabled; consider setting it to 0 to disable it    train_util.py:4965
                    WARNING the constant_with_warmup scheduler may be a better choice    train_util.py:4969
enable full bf16 training.
running training
  num examples: 115
  num batches per epoch: 115
  num epochs: 20
  batch size per device: 1
  gradient accumulation steps: 1
  total optimization steps: 2300
steps:   0%|          | 0/2300 [00:00
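(For reference, the "use Adafactor optimizer" line above corresponds to transformers' Adafactor implementation, which is what sd-scripts uses here. A minimal, standalone sketch with the logged kwargs; the model and lr below are placeholders, not the actual FLUX training setup:)

```python
import torch.nn as nn
from transformers.optimization import Adafactor

model = nn.Linear(8, 8)  # placeholder stand-in for the FLUX transformer
optimizer = Adafactor(
    model.parameters(),
    lr=1e-5,                 # placeholder; with relative_step=False an explicit lr is required
    scale_parameter=False,   # kwargs exactly as logged by train_util.py:4937
    relative_step=False,
    warmup_init=False,
    weight_decay=0.01,
)
```

The two WARNINGs in the log suggest the matching tweaks for this optimizer configuration: pass max_grad_norm=0 so clip_grad_norm is skipped, and prefer constant_with_warmup as the lr scheduler.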