INFO     Loading settings from D:/AI/Kohya_FLUX_DreamBooth_v10/train_imgs\model/config_dreambooth-20241105-143845.toml...   train_util.py:4435
INFO     D:/AI/Kohya_FLUX_DreamBooth_v10/train_imgs\model/config_dreambooth-20241105-143845   train_util.py:4454
2024-11-05 14:38:55 INFO     Using DreamBooth method.   flux_train.py:107
INFO     prepare images.   train_util.py:1956
INFO     get image size from name of cache files   train_util.py:1873
100%|████████████████████████████████████████| 253/253 [00:00   flux_utils.py:171
2024-11-05 14:38:57 INFO     [Dataset 0]   train_util.py:2480
INFO     caching latents with caching strategy.   train_util.py:1048
INFO     caching latents...   train_util.py:1093
100%|████████████████████████████████████████| 253/253 [00:39<00:00, 6.47it/s]
D:\AI\Kohya_FLUX_DreamBooth_v10\kohya_ss\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2024-11-05 14:39:37 INFO     Building CLIP   flux_utils.py:176
INFO     Loading state dict from D:/AI/Kohya_FLUX_DreamBooth_v10/clip_l.safetensors   flux_utils.py:269
INFO     Loaded CLIP:   flux_utils.py:272
INFO     Loading state dict from D:/AI/Kohya_FLUX_DreamBooth_v10/t5xxl_fp16.safetensors   flux_utils.py:317
INFO     Loaded T5xxl:   flux_utils.py:320
2024-11-05 14:39:46 INFO     [Dataset 0]   train_util.py:2502
INFO     caching Text Encoder outputs with caching strategy.   train_util.py:1227
INFO     checking cache validity...   train_util.py:1238
100%|████████████████████████████████████████| 253/253 [00:00<00:00, 84359.56it/s]
INFO     caching Text Encoder outputs...   train_util.py:1269
100%|████████████████████████████████████████| 37/37 [00:18<00:00, 1.99it/s]
2024-11-05 14:40:05 INFO     cache Text Encoder outputs for sample prompt: D:/AI/Kohya_FLUX_DreamBooth_v10/train_imgs\model\sample/prompt.txt   flux_train.py:240
INFO     Checking the state dict: Diffusers or BFL, dev or schnell   flux_utils.py:62
INFO     Building Flux model dev from BFL checkpoint   flux_utils.py:120
INFO     Loading state dict from D:/AI/Kohya_FLUX_DreamBooth_v10/flux1-dev.safetensors   flux_utils.py:137
INFO     Loaded Flux:   flux_utils.py:156
FLUX: Gradient checkpointing enabled. CPU offload: False
INFO     enable block swap: blocks_to_swap=10   flux_train.py:295
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
INFO     use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}   train_util.py:4748
WARNING  because max_grad_norm is set, clip_grad_norm is enabled; consider setting it to 0 to disable it   train_util.py:4776
WARNING  the constant_with_warmup scheduler may be a better choice   train_util.py:4780
enable full bf16 training.
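For context, the settings visible in this startup log (Adafactor with scale_parameter/relative_step/warmup_init disabled and weight_decay 0.01, full bf16, gradient checkpointing, blocks_to_swap=10, batch size 7) come from the TOML file loaded on the first line. The excerpt below is a hypothetical reconstruction of the relevant entries, not the actual config_dreambooth-20241105-143845.toml; key names follow the sd-scripts command-line options, and anything not visible in the log is marked as a guess.

    # hypothetical excerpt, reconstructed from the log above
    pretrained_model_name_or_path = "D:/AI/Kohya_FLUX_DreamBooth_v10/flux1-dev.safetensors"
    clip_l = "D:/AI/Kohya_FLUX_DreamBooth_v10/clip_l.safetensors"
    t5xxl = "D:/AI/Kohya_FLUX_DreamBooth_v10/t5xxl_fp16.safetensors"
    optimizer_type = "Adafactor"
    optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False", "weight_decay=0.01" ]
    max_grad_norm = 1.0            # exact value not shown; any nonzero value triggers the clip_grad_norm warning
    train_batch_size = 7
    gradient_accumulation_steps = 1
    max_train_steps = 3615         # matches the "total optimization steps" reported below
    full_bf16 = true
    gradient_checkpointing = true
    blocks_to_swap = 10            # keeps 10 transformer blocks swapped out to CPU to reduce VRAM
    # lr_scheduler is not shown in the log; the warning above suggests constant_with_warmup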
running training
  num examples: 253
  num batches per epoch: 46
  num epochs: 79
  batch size per device: 7
  gradient accumulation steps: 1
  total optimization steps: 3615
steps:   0%|          | 0/3615 [00:00
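As a sanity check on these numbers: with 46 batches per epoch and gradient accumulation of 1, each epoch contributes 46 optimization steps, so reaching 3615 total steps takes ceil(3615 / 46) = 79 epochs (78 epochs would only give 46 × 78 = 3588 steps). Note that 46 batches per epoch is more than ceil(253 / 7) = 37; this is expected if batches are formed per resolution bucket, since each bucket can end with a partial batch.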