<@205854764540362752> Hello Dr. Furkan Gözükara, I'm following your video instructions for Windows 11 (https://www.youtube.com/watch?v=FvpWy1x5etM) and I don't understand why it isn't working properly. I have an RTX 4090 and 64 GB of RAM, but training is very slow. I tried both the fine-tune configs and the training configs, same situation:

enable full bf16 training.
running training
num examples: 115
num batches per epoch: 29
num epochs: 40
batch size per device: 4
gradient accumulation steps = 1
total optimization steps: 1150
steps: 0%| | 0/1150 [00:00
flux_utils.py:152
INFO [Dataset 0]   train_util.py:2589
INFO caching latents with caching strategy.   train_util.py:1097
INFO caching latents...   train_util.py:1146
100%|█████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, 10454.85it/s]
C:\Users\PC\Desktop\AI\Projects\Kohya_FLUX_DreamBooth_v17\kohya_ss\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
You are using the default legacy behaviour of the . This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
2025-03-02 19:52:30 INFO Building CLIP-L   flux_utils.py:179
INFO Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/clip_l.safetensors   flux_utils.py:275
INFO Loaded CLIP-L:   flux_utils.py:278
INFO Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/t5xxl_fp16.safetensors   flux_utils.py:330
2025-03-02 19:52:53 INFO Loaded T5xxl:   flux_utils.py:333
2025-03-02 19:52:55 INFO [Dataset 0]   train_util.py:2611
INFO caching Text Encoder outputs with caching strategy.   train_util.py:1280
INFO checking cache validity...   train_util.py:1291
100%|██████████████████████████████████████████████████████████████████████████████| 115/115 [00:00<00:00, 6051.40it/s]
INFO no Text Encoder outputs to cache   train_util.py:1318
INFO cache Text Encoder outputs for sample prompt: C:/Users/PC/Desktop/AI/Projects/SwarmUI/Models/diffusion_models\model\sample/prompt.txt   flux_train.py:249
INFO Checking the state dict: Diffusers or BFL, dev or schnell   flux_utils.py:43
INFO Building Flux model dev from BFL checkpoint   flux_utils.py:101
INFO Loading state dict from C:/Users/PC/Desktop/AI/Projects/Kohya_FLUX_DreamBooth_v17/flux1-dev.safetensors   flux_utils.py:118
INFO Loaded Flux:   flux_utils.py:137
FLUX: Gradient checkpointing enabled. CPU offload: False
INFO enable block swap: blocks_to_swap=22   flux_train.py:304
FLUX: Block swap enabled. Swapping 22 blocks, double blocks: 11, single blocks: 22.
number of trainable parameters: 11901408320
prepare optimizer, data loader etc.
INFO use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False, 'weight_decay': 0.01}   train_util.py:4937
WARNING because max_grad_norm is set, clip_grad_norm is enabled.
consider setting it to 0 to disable it.   train_util.py:4965
WARNING the constant_with_warmup scheduler might be better.   train_util.py:4969
enable full bf16 training.
running training
num examples: 115
num batches per epoch: 115
num epochs: 20
batch size per device: 1
gradient accumulation steps = 1
total optimization steps: 2300
steps: 0%| | 0/2300 [00:00
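
To put a number on "very slow", here is a small back-of-the-envelope check for the second run's figures. The step count is taken from the log above; the seconds-per-step value is only a placeholder (the real one would come from the tqdm bar), and the estimate ignores everything outside the training loop:

```python
import math

# Figures from the second run in the log above
num_examples = 115
batch_size = 1
num_epochs = 20

# kohya-ss reports: total optimization steps = batches per epoch * epochs
batches_per_epoch = math.ceil(num_examples / batch_size)  # 115
total_steps = batches_per_epoch * num_epochs              # 2300, matches the log

# Placeholder value, NOT from the log -- read the real s/it off the tqdm bar
sec_per_step = 30.0

print(f"total steps      : {total_steps}")
print(f"estimated runtime: {total_steps * sec_per_step / 3600:.1f} hours")
```

If I read the log correctly, blocks_to_swap=22 means a large part of the model is shuttled between the GPU and system RAM every step, so some extra seconds per step are expected compared with keeping everything in VRAM, but I don't know how much of the slowdown that explains.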