Starting the GUI... this might take some time...
12:17:19-892211 WARNING  Skipping requirements verification.
12:17:19-907836 INFO     headless: False
12:17:19-907836 INFO     Using shell=True when running external commands...
* Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
12:17:41-446816 INFO     Start training Dreambooth...
12:17:41-447825 INFO     Validating lr scheduler arguments...
12:17:41-448817 INFO     Validating optimizer arguments...
12:17:41-449816 INFO     Validating C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model existence and writability... SUCCESS
12:17:41-450533 INFO     Validating C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/flux1-dev.safetensors existence... SUCCESS
12:17:41-450533 INFO     Validating C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\img existence... SUCCESS
12:17:41-451580 INFO     Folder 1_my$1$1$1face man: 1 repeats found
12:17:41-452580 INFO     Folder 1_my$1$1$1face man: 115 images found
12:17:41-453580 INFO     Folder 1_my$1$1$1face man: 115 * 1 = 115 steps
12:17:41-454136 INFO     Regularization factor: 1
12:17:41-454136 INFO     Total steps: 115
12:17:41-455160 INFO     Train batch size: 2
12:17:41-456199 INFO     Gradient accumulation steps: 1
12:17:41-456199 INFO     Epoch: 40
12:17:41-457199 INFO     max_train_steps (115 / 2 / 1 * 40 * 1) = 2300
12:17:41-457199 INFO     lr_warmup_steps = 0
12:17:41-459200 INFO     Saving training config to C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model\my_20250303-121741.json...
12:17:41-460056 INFO     Executing command: C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\Scripts\accelerate.EXE launch --dynamo_backend no --dynamo_mode default --gpu_ids 0 --mixed_precision bf16 --num_processes 1 --num_machines 1 --num_cpu_threads_per_process 20 C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/kohya_ss/sd-scripts/flux_train.py --config_file C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model/config_dreambooth-20250303-121741.toml
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\diffusers\utils\outputs.py:63: FutureWarning: `torch.utils._pytree._register_pytree_node` is deprecated. Please use `torch.utils._pytree.register_pytree_node` instead.
  torch.utils._pytree._register_pytree_node(
2025-03-03 12:17:47 INFO     Loading settings from C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model/config_dreambooth-20250303-121741.toml...   train_util.py:4625
2025-03-03 12:17:47 INFO     Using DreamBooth method.                       flux_train.py:115
                    INFO     prepare images.                                train_util.py:2053
                    INFO     get image size from name of cache files        train_util.py:1944
100%|██████████| 115/115 [00:00<...]
                    INFO     ...                                            flux_utils.py:152
                    INFO     [Dataset 0]                                    train_util.py:2589
                    INFO     caching latents with caching strategy.         train_util.py:1097
                    INFO     caching latents...                             train_util.py:1146
100%|██████████| 115/115 [00:00<00:00, 3679.97it/s]
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be deprecated in transformers v4.45, and will then be set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-03-03 12:17:48 INFO     Building CLIP-L                                flux_utils.py:179
                    INFO     Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/clip_l.safetensors   flux_utils.py:275
                    INFO     Loaded CLIP-L: <All keys matched successfully>   flux_utils.py:278
                    INFO     Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/t5xxl_fp16.safetensors   flux_utils.py:330
2025-03-03 12:17:55 INFO     Loaded T5xxl: <All keys matched successfully>  flux_utils.py:333
2025-03-03 12:17:56 INFO     [Dataset 0]                                    train_util.py:2611
                    INFO     caching Text Encoder outputs with caching strategy.   train_util.py:1280
                    INFO     checking cache validity...                     train_util.py:1291
100%|██████████| 115/115 [00:00<00:00, 7361.35it/s]
                    INFO     no Text Encoder outputs to cache               train_util.py:1318
                    INFO     cache Text Encoder outputs for sample prompt: C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model\sample/prompt.txt   flux_train.py:249
2025-03-03 12:17:57 INFO     Checking the state dict: Diffusers or BFL, dev or schnell   flux_utils.py:43
                    INFO     Building Flux model dev from BFL checkpoint    flux_utils.py:101
                    INFO     Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/flux1-dev.safetensors   flux_utils.py:118
                    INFO     Loaded Flux: <All keys matched successfully>   flux_utils.py:137
FLUX: Gradient checkpointing enabled. CPU offload: False
                    INFO     enable block swap: blocks_to_swap=10           flux_train.py:304
FLUX: Block swap enabled. Swapping 10 blocks, double blocks: 5, single blocks: 10.
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
                    INFO     use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}   train_util.py:4937
                    WARNING  because max_grad_norm is set, clip_grad_norm is enabled; consider setting it to 0 to disable it   train_util.py:4965
                    WARNING  the constant_with_warmup scheduler may be a better choice   train_util.py:4969
enable full bf16 training.
running training
  num examples: 115
  num batches per epoch: 58
  num epochs: 40
  batch size per device: 2
  gradient accumulation steps: 1
  total optimization steps: 2300
steps:   0%|          | 0/2300 [00:00<?, ?it/s]
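
A few of the lines above are worth unpacking. The "Folder 1_my$1$1$1face man: 1 repeats found" entries come from the kohya-ss DreamBooth folder convention: each dataset directory is named "<repeats>_<caption>", and the numeric prefix sets how many times its images are repeated per epoch. A minimal sketch of that parse (illustrative, not kohya's actual code):

    from pathlib import Path

    def parse_dreambooth_folder(folder: Path) -> tuple[int, str]:
        # kohya-style dataset folders are named "<repeats>_<caption>",
        # e.g. "1_my$1$1$1face man" -> repeats=1, caption "my$1$1$1face man".
        prefix, _, caption = folder.name.partition("_")
        return int(prefix), caption

    repeats, caption = parse_dreambooth_folder(Path("1_my$1$1$1face man"))
    print(repeats, caption)  # 1 my$1$1$1face man -> 115 images * 1 repeat = 115 steps/epoch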
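
The "max_train_steps (115 / 2 / 1 * 40 * 1) = 2300" line is just the dataset size divided by batch size and gradient accumulation, multiplied by epochs and the regularization factor. Reproducing the arithmetic (variable names are mine, not the GUI's):

    import math

    num_images, repeats = 115, 1
    batch_size, grad_accum = 2, 1
    epochs, reg_factor = 40, 1

    # The GUI's formula: images / batch / accumulation * epochs * regularization.
    max_train_steps = int(num_images * repeats / batch_size / grad_accum * epochs * reg_factor)
    print(max_train_steps)  # 2300

    # The trainer rounds batches up, so an epoch has ceil(115 / 2) = 58 batches;
    # 58 * 40 = 2320, but training stops at max_train_steps = 2300, slightly
    # short of a full 40th epoch -- both numbers appear in the log above.
    print(math.ceil(num_images * repeats / batch_size))  # 58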
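
"caching latents" and "caching Text Encoder outputs" mean every image and caption is encoded once up front, so the VAE and the two text encoders don't have to run again on every training step. A toy sketch of the idea, assuming a diffusers-style AutoencoderKL; this is not kohya's actual cache format:

    import torch

    @torch.no_grad()
    def cache_latents(vae, images: list[torch.Tensor], cache_path: str) -> None:
        # Encode each image once; training then loads latents from disk
        # instead of re-running the VAE on every step.
        latents = [
            vae.encode(img.unsqueeze(0)).latent_dist.sample().squeeze(0)
            for img in images
        ]
        torch.save(latents, cache_path)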
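
The "Swapping 10 blocks" line is what makes an 11.9B-parameter full fine-tune fit in consumer VRAM: some transformer blocks are kept in CPU RAM and moved onto the GPU only when needed. A toy version of the mechanism (kohya's implementation additionally overlaps the transfers with compute, which this sketch does not):

    import torch

    def forward_with_block_swap(blocks, x: torch.Tensor, blocks_to_swap: int = 10,
                                device: str = "cuda") -> torch.Tensor:
        # The first (len(blocks) - blocks_to_swap) blocks stay resident on the
        # GPU; the rest live on the CPU and are shuttled in just before use.
        resident = len(blocks) - blocks_to_swap
        for i, block in enumerate(blocks):
            if i >= resident:
                block.to(device)      # bring the swapped block onto the GPU
            x = block(x)
            if i >= resident:
                block.to("cpu")       # evict it again to free VRAM
        return x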
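
The optimizer line maps onto the Adafactor implementation that ships with transformers; the dict in the log is its constructor kwargs. With scale_parameter=False and relative_step=False it behaves as a fixed-learning-rate optimizer, which is why an explicit lr (set in the TOML config, not shown in this excerpt) has to be supplied:

    import torch
    from transformers.optimization import Adafactor

    model = torch.nn.Linear(8, 8)  # stand-in for the FLUX transformer
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-5,                 # assumed value; the real lr is in the config file
        scale_parameter=False,   # as logged: no parameter-scale-based lr scaling
        relative_step=False,     # as logged: use the fixed lr above
        warmup_init=False,       # as logged
        weight_decay=0.01,       # as logged
    )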
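
The first WARNING is about gradient clipping: sd-scripts clips whenever max_grad_norm is non-zero (its default is 1.0), and the hint reflects that Adafactor already clips its own updates via clip_threshold, so the extra pass is usually unnecessary. In training-loop terms the flag gates a single call (illustrative loop, not kohya's code):

    import torch

    model = torch.nn.Linear(4, 4)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    max_grad_norm = 1.0  # set to 0 in the training config to skip clipping, per the warning

    loss = model(torch.randn(2, 4)).pow(2).mean()
    loss.backward()
    if max_grad_norm > 0:  # 0 disables this branch entirely
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    optimizer.zero_grad()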
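
The second WARNING pairs with the "lr_warmup_steps = 0" line earlier: this run uses no warmup, and sd-scripts suggests constant_with_warmup instead, i.e. a short learning-rate ramp before the constant phase. With the get_scheduler helper from diffusers that would look like this (the warmup length is an assumed example value, not from this run):

    import torch
    from diffusers.optimization import get_scheduler

    optimizer = torch.optim.AdamW(torch.nn.Linear(4, 4).parameters(), lr=1e-5)
    lr_scheduler = get_scheduler(
        "constant_with_warmup",
        optimizer=optimizer,
        num_warmup_steps=100,     # illustrative; this run logged lr_warmup_steps = 0
        num_training_steps=2300,  # max_train_steps from this run
    )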