01:50:49-905395 INFO Version: v22.4.0
01:50:49-909395 INFO nVidia toolkit detected
01:50:52-613868 INFO Torch 2.0.1+cu118
01:50:52-650992 INFO Torch backend: nVidia CUDA 11.8 cuDNN 8700
01:50:52-652993 INFO Torch detected GPU: NVIDIA GeForce RTX 3070 Ti VRAM 8192 Arch (8, 6) Cores 48
01:50:52-653994 INFO Verifying modules installation status from requirements_windows_torch2.txt...
01:50:52-655993 INFO Verifying modules installation status from requirements.txt...
01:50:55-322562 INFO headless: False
01:50:55-326560 INFO Load CSS...
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
01:51:05-312417 INFO Loading config...
01:51:56-565873 INFO Start training Dreambooth...
01:51:56-567378 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\img
01:51:56-568383 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\reg
01:51:56-569383 INFO Folder 20_sks fluidart : steps 400
01:51:56-570383 INFO Regularisation images are used... Will double the number of steps required...
01:51:56-570383 INFO max_train_steps (400 / 1 / 1 * 10 * 2) = 8000
01:51:56-571383 INFO stop_text_encoder_training = 0
01:51:56-572384 INFO lr_warmup_steps = 0
01:51:56-574736 INFO Saving training config to C:/Users/isman/Pictures/fluid art\model\fluidart_20231231-015156.json...
01:51:56-575741 INFO accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train.py" --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048 --pretrained_model_name_or_path="C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors" --train_data_dir="C:/Users/isman/Pictures/fluid art\img" --reg_data_dir="C:/Users/isman/Pictures/fluid art\reg" --resolution="1024,1024" --output_dir="C:/Users/isman/Pictures/fluid art\model" --logging_dir="C:/Users/isman/Pictures/fluid art\log" --save_model_as=safetensors --output_name="fluidart" --lr_scheduler_num_cycles="10" --max_data_loader_n_workers="0" --learning_rate_te1="1e-05" --learning_rate_te2="1e-05" --learning_rate="0.0004" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="8000" --save_every_n_epochs="1" --mixed_precision="bf16" --save_precision="bf16" --cache_latents --cache_latents_to_disk --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0
prepare tokenizers
Using DreamBooth method.
prepare images.
found directory C:\Users\isman\Pictures\fluid art\img\20_sks fluidart contains 20 image files
No caption file found for 20 images. Training will continue without captions for these images. If class token exists, it will be used.
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpeg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).webp
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (10).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (11).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (12).jpg... and 15 more
found directory C:\Users\isman\Pictures\fluid art\reg\1_fluidart contains 0 image files
ignore subset with image_dir='C:\Users\isman\Pictures\fluid art\reg\1_fluidart': no images found
400 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (1024, 1024)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 2048
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "C:\Users\isman\Pictures\fluid art\img\20_sks fluidart"
    image_count: 20
    num_repeats: 20
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0
    is_reg: False
    class_tokens: sks fluidart
    caption_extension: .caption

[Dataset 0]
loading image sizes.
100%|██████████| 20/20 [00:00<00:00, 583.06it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats):
bucket 0: resolution (1024, 1024), count: 400
mean ar error (without repeats): 0.0
prepare accelerator
loading model for process 0/1
load StableDiffusion checkpoint: C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors
building U-Net
loading U-Net from checkpoint
U-Net:
building text encoders
loading text encoders from checkpoint
text encoder 1:
text encoder 2:
building VAE
loading VAE from checkpoint
VAE:
Disable Diffusers' xformers
Enable xformers for U-Net
A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
[Dataset 0]
caching latents.
checking cache validity...
100%|██████████| 20/20 [00:00<00:00, 85.47it/s]
caching latents...
0it [00:00, ?it/s]
train unet: True, text_encoder1: False, text_encoder2: False
number of models: 1
number of trainable parameters: 2567463684
prepare optimizer, data loader etc.
use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False}
because max_grad_norm is set, clip_grad_norm is enabled.
consider setting it to 0 to disable it
the scheduler constant_with_warmup may be a better choice
running training
  num examples: 400
  num batches per epoch: 400
  num epochs: 20
  batch size per device: 1
  gradient accumulation steps = 1
  total optimization steps: 8000
steps:   0%| | 0/8000 [00:00<?, ?it/s]
Traceback (most recent call last):
  ...
    train(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train.py", line 563, in train
    noise_pred = unet(noisy_latents, timesteps, text_embedding, vector_embedding)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 636, in forward
    return model_forward(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 624, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1099, in forward
    h = call_module(module, h, emb, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1090, in call_module
    x = layer(x, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 745, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 668, in forward
    output = self.forward_body(hidden_states, context, timestep)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 643, in forward_body
    hidden_states = self.attn1(norm_hidden_states) + hidden_states
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 438, in forward
    return self.forward_memory_efficient_xformers(hidden_states, context, mask)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 492, in forward_memory_efficient_xformers
    k_in = self.to_k(context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 14.24 GiB already allocated; 0 bytes free; 14.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
steps:   0%| | 0/8000 [00:04<?, ?it/s]
Traceback (most recent call last):
  ...
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
    args.func(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
    simple_launcher(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\isman\\Documents\\KOHYA\\kohya_ss\\venv\\Scripts\\python.exe', './sdxl_train.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=2048', '--pretrained_model_name_or_path=C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors', '--train_data_dir=C:/Users/isman/Pictures/fluid art\\img', '--reg_data_dir=C:/Users/isman/Pictures/fluid art\\reg', '--resolution=1024,1024', '--output_dir=C:/Users/isman/Pictures/fluid art\\model', '--logging_dir=C:/Users/isman/Pictures/fluid art\\log', '--save_model_as=safetensors', '--output_name=fluidart', '--lr_scheduler_num_cycles=10', '--max_data_loader_n_workers=0', '--learning_rate_te1=1e-05', '--learning_rate_te2=1e-05', '--learning_rate=0.0004', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=8000', '--save_every_n_epochs=1', '--mixed_precision=bf16', '--save_precision=bf16', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Adafactor', '--optimizer_args', 'scale_parameter=False', 'relative_step=False', 'warmup_init=False', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale', '--noise_offset=0.0']' returned non-zero exit status 1.
01:53:36-648100 INFO Save...
01:53:42-277605 INFO Start training Dreambooth...
01:53:42-280114 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\img
01:53:42-281118 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\reg
01:53:42-283116 INFO Folder 20_sks fluidart : steps 400
01:53:42-284118 INFO Regularisation images are used... Will double the number of steps required...
01:53:42-285114 INFO max_train_steps (400 / 1 / 1 * 10 * 2) = 8000
01:53:42-286117 INFO stop_text_encoder_training = 0
01:53:42-287114 INFO lr_warmup_steps = 0
01:53:42-288117 INFO Saving training config to C:/Users/isman/Pictures/fluid art\model\fluidart_20231231-015342.json...
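An aside on the OOM above: the error message's own suggestion (max_split_size_mb) can be tried by exporting PYTORCH_CUDA_ALLOC_CONF before the trainer starts. A minimal sketch, assuming kohya_ss is launched through gui.bat in the repo root and using an illustrative value of 128; on an 8 GB card this only mitigates fragmentation and will not rescue a full SDXL fine-tune by itself.

import os
import subprocess

# Hypothetical wrapper: set the allocator option suggested by the OOM message,
# then launch the kohya_ss GUI so every child training process inherits it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value
subprocess.run(["cmd", "/c", "gui.bat"],
               cwd=r"C:\Users\isman\Documents\KOHYA\kohya_ss", check=True)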
01:53:42-290119 INFO accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train.py" --enable_bucket --min_bucket_reso=256 --max_bucket_reso=2048 --pretrained_model_name_or_path="C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors" --train_data_dir="C:/Users/isman/Pictures/fluid art\img" --reg_data_dir="C:/Users/isman/Pictures/fluid art\reg" --resolution="1024,1024" --output_dir="C:/Users/isman/Pictures/fluid art\model" --logging_dir="C:/Users/isman/Pictures/fluid art\log" --save_model_as=safetensors --output_name="fluidart" --lr_scheduler_num_cycles="10" --max_data_loader_n_workers="0" --learning_rate_te1="1e-05" --learning_rate_te2="1e-05" --learning_rate="0.0004" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="8000" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="bf16" --cache_latents --cache_latents_to_disk --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0
prepare tokenizers
Using DreamBooth method.
prepare images.
found directory C:\Users\isman\Pictures\fluid art\img\20_sks fluidart contains 20 image files
No caption file found for 20 images. Training will continue without captions for these images. If class token exists, it will be used.
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpeg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).webp
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (10).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (11).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (12).jpg... and 15 more
found directory C:\Users\isman\Pictures\fluid art\reg\1_fluidart contains 0 image files
ignore subset with image_dir='C:\Users\isman\Pictures\fluid art\reg\1_fluidart': no images found
400 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (1024, 1024)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 2048
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "C:\Users\isman\Pictures\fluid art\img\20_sks fluidart"
    image_count: 20
    num_repeats: 20
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0
    is_reg: False
    class_tokens: sks fluidart
    caption_extension: .caption

[Dataset 0]
loading image sizes.
100%|██████████| 20/20 [00:00<00:00, 376.15it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats):
bucket 0: resolution (1024, 1024), count: 400
mean ar error (without repeats): 0.0
prepare accelerator
loading model for process 0/1
load StableDiffusion checkpoint: C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors
building U-Net
loading U-Net from checkpoint
U-Net:
building text encoders
loading text encoders from checkpoint
text encoder 1:
text encoder 2:
building VAE
loading VAE from checkpoint
VAE:
Disable Diffusers' xformers
Enable xformers for U-Net
A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
[Dataset 0]
caching latents.
checking cache validity...
100%|██████████| 20/20 [00:00<00:00, 408.16it/s]
caching latents...
0it [00:00, ?it/s]
train unet: True, text_encoder1: False, text_encoder2: False
number of models: 1
number of trainable parameters: 2567463684
prepare optimizer, data loader etc.
use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False}
because max_grad_norm is set, clip_grad_norm is enabled. consider setting it to 0 to disable it
the scheduler constant_with_warmup may be a better choice
running training
  num examples: 400
  num batches per epoch: 400
  num epochs: 20
  batch size per device: 1
  gradient accumulation steps = 1
  total optimization steps: 8000
steps:   0%| | 0/8000 [00:00<?, ?it/s]
Traceback (most recent call last):
  ...
    train(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train.py", line 563, in train
    noise_pred = unet(noisy_latents, timesteps, text_embedding, vector_embedding)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 636, in forward
    return model_forward(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 624, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1099, in forward
    h = call_module(module, h, emb, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1090, in call_module
    x = layer(x, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 745, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
"C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 668, in forward output = self.forward_body(hidden_states, context, timestep) File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 643, in forward_body hidden_states = self.attn1(norm_hidden_states) + hidden_states File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 438, in forward return self.forward_memory_efficient_xformers(hidden_states, context, mask) File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 492, in forward_memory_efficient_xformers k_in = self.to_k(context) File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 14.24 GiB already allocated; 0 bytes free; 14.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF steps: 0%| | 0/8000 [00:03 File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main args.func(args) File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command simple_launcher(args) File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd) subprocess.CalledProcessError: Command '['C:\\Users\\isman\\Documents\\KOHYA\\kohya_ss\\venv\\Scripts\\python.exe', './sdxl_train.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=2048', '--pretrained_model_name_or_path=C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors', '--train_data_dir=C:/Users/isman/Pictures/fluid art\\img', '--reg_data_dir=C:/Users/isman/Pictures/fluid art\\reg', '--resolution=1024,1024', '--output_dir=C:/Users/isman/Pictures/fluid art\\model', '--logging_dir=C:/Users/isman/Pictures/fluid art\\log', '--save_model_as=safetensors', '--output_name=fluidart', '--lr_scheduler_num_cycles=10', '--max_data_loader_n_workers=0', '--learning_rate_te1=1e-05', '--learning_rate_te2=1e-05', '--learning_rate=0.0004', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=8000', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=bf16', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Adafactor', '--optimizer_args', 'scale_parameter=False', 'relative_step=False', 'warmup_init=False', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale', '--noise_offset=0.0']' returned non-zero exit status 1. 01:55:04-523326 INFO Save... 01:55:13-670792 INFO Loading config... 
01:55:16-694446 INFO Loading config...
01:55:30-686389 INFO Save...
01:55:34-785863 INFO Loading config...
01:55:49-623970 INFO Loading config...
01:55:51-911156 INFO Loading config...
01:56:53-695008 INFO Save...
01:56:57-885942 INFO Loading config...
01:57:04-612445 INFO Start training LoRA Standard ...
01:57:04-613450 INFO Checking for duplicate image filenames in training data directory...
Warning: Same filename 'fluidart (1)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpg
Warning: Same filename 'fluidart (1)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).webp
Warning: Same filename 'fluidart (2)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (2).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (2).jpg
Warning: Same filename 'fluidart (4)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (4).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (4).jpg
01:57:04-616450 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\img
01:57:04-617450 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\reg
01:57:04-620449 INFO Folder 20_sks fluidart: 20 images found
01:57:04-620449 INFO Folder 20_sks fluidart: 400 steps
01:57:04-621450 WARNING Regularisation images are used... Will double the number of steps required...
01:57:04-623450 INFO Total steps: 400
01:57:04-624450 INFO Train batch size: 1
01:57:04-624450 INFO Gradient accumulation steps: 1
01:57:04-625450 INFO Epoch: 10
01:57:04-626450 INFO Regularisation factor: 2
01:57:04-626450 INFO max_train_steps (400 / 1 / 1 * 10 * 2) = 8000
01:57:04-627554 INFO stop_text_encoder_training = 0
01:57:04-628450 INFO lr_warmup_steps = 0
01:57:04-628450 INFO Saving training config to C:/Users/isman/Pictures/fluid art\model\fluidart_20231231-015704.json...
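Regarding the "Same filename ... with different image extension" warnings above: the dataset mixes fluidart (1).jpeg, .jpg and .webp under one stem, which can also collide with the .npz latent-cache files written next to the images. A quick one-off helper of mine (not part of kohya_ss; the folder path is taken from the log) to list the offending pairs before renaming or deleting one of each:

from collections import defaultdict
from pathlib import Path

# Group the training images by stem and report any stem that appears with
# more than one extension, e.g. 'fluidart (1).jpeg' vs 'fluidart (1).jpg'.
img_dir = Path(r"C:/Users/isman/Pictures/fluid art/img/20_sks fluidart")
by_stem = defaultdict(list)
for path in img_dir.iterdir():
    if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}:
        by_stem[path.stem].append(path.name)

for stem, names in sorted(by_stem.items()):
    if len(names) > 1:
        print(f"duplicate stem {stem!r}: {names}")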
01:57:04-630450 INFO accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" --enable_bucket --min_bucket_reso=256 --max_bucket_reso=1536 --pretrained_model_name_or_path="C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors" --train_data_dir="C:/Users/isman/Pictures/fluid art\img" --reg_data_dir="C:/Users/isman/Pictures/fluid art\reg" --resolution="1024,1024" --output_dir="C:/Users/isman/Pictures/fluid art\model" --logging_dir="C:/Users/isman/Pictures/fluid art\log" --network_alpha="1" --save_model_as=safetensors --network_module=networks.lora --network_dim=8 --output_name="fluidart" --lr_scheduler_num_cycles="10" --cache_text_encoder_outputs --no_half_vae --learning_rate="0.0004" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="8000" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --cache_latents --cache_latents_to_disk --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_grad_norm="1" --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0
prepare tokenizers
Using DreamBooth method.
prepare images.
found directory C:\Users\isman\Pictures\fluid art\img\20_sks fluidart contains 20 image files
No caption file found for 20 images. Training will continue without captions for these images. If class token exists, it will be used.
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpeg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).webp
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (10).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (11).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (12).jpg... and 15 more
found directory C:\Users\isman\Pictures\fluid art\reg\1_fluidart contains 0 image files
ignore subset with image_dir='C:\Users\isman\Pictures\fluid art\reg\1_fluidart': no images found
400 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (1024, 1024)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 1536
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "C:\Users\isman\Pictures\fluid art\img\20_sks fluidart"
    image_count: 20
    num_repeats: 20
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0
    is_reg: False
    class_tokens: sks fluidart
    caption_extension: .caption

[Dataset 0]
loading image sizes.
100%|██████████| 20/20 [00:00<00:00, 382.05it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats):
bucket 0: resolution (1024, 1024), count: 400
mean ar error (without repeats): 0.0
Traceback (most recent call last):
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train_network.py", line 189, in <module>
    trainer.train(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\train_network.py", line 222, in train
    self.assert_extra_args(args, train_dataset_group)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train_network.py", line 32, in assert_extra_args
    assert (
AssertionError: network for Text Encoder cannot be trained with caching Text Encoder outputs
Traceback (most recent call last):
  File "C:\Users\isman\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\isman\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
    args.func(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
    simple_launcher(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\isman\\Documents\\KOHYA\\kohya_ss\\venv\\Scripts\\python.exe', './sdxl_train_network.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=1536', '--pretrained_model_name_or_path=C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors', '--train_data_dir=C:/Users/isman/Pictures/fluid art\\img', '--reg_data_dir=C:/Users/isman/Pictures/fluid art\\reg', '--resolution=1024,1024', '--output_dir=C:/Users/isman/Pictures/fluid art\\model', '--logging_dir=C:/Users/isman/Pictures/fluid art\\log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--network_dim=8', '--output_name=fluidart', '--lr_scheduler_num_cycles=10', '--cache_text_encoder_outputs', '--no_half_vae', '--learning_rate=0.0004', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=8000', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Adafactor', '--optimizer_args', 'scale_parameter=False', 'relative_step=False', 'warmup_init=False', '--max_grad_norm=1', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale', '--noise_offset=0.0']' returned non-zero exit status 1.
01:57:40-608010 INFO Save...
01:57:48-566431 INFO Loading config...
01:57:54-768855 INFO Start training LoRA Standard ...
01:57:54-769855 INFO Checking for duplicate image filenames in training data directory...
Warning: Same filename 'fluidart (1)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpg
Warning: Same filename 'fluidart (1)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (1).webp
Warning: Same filename 'fluidart (2)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (2).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (2).jpg
Warning: Same filename 'fluidart (4)' with different image extension found. This will cause training issues. Rename one of the files.
  Existing file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (4).jpeg
  Current file: C:/Users/isman/Pictures/fluid art\img\20_sks fluidart\fluidart (4).jpg
01:57:54-771855 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\img
01:57:54-772976 INFO Valid image folder names found in: C:/Users/isman/Pictures/fluid art\reg
01:57:54-774856 INFO Folder 20_sks fluidart: 20 images found
01:57:54-774856 INFO Folder 20_sks fluidart: 400 steps
01:57:54-775856 WARNING Regularisation images are used... Will double the number of steps required...
01:57:54-775856 INFO Total steps: 400
01:57:54-777360 INFO Train batch size: 1
01:57:54-778365 INFO Gradient accumulation steps: 1
01:57:54-778365 INFO Epoch: 10
01:57:54-779365 INFO Regularisation factor: 2
01:57:54-780365 INFO max_train_steps (400 / 1 / 1 * 10 * 2) = 8000
01:57:54-780365 INFO stop_text_encoder_training = 0
01:57:54-781365 INFO lr_warmup_steps = 0
01:57:54-782365 INFO Saving training config to C:/Users/isman/Pictures/fluid art\model\fluidart_20231231-015754.json...
01:57:54-783365 INFO accelerate launch --num_cpu_threads_per_process=2 "./sdxl_train_network.py" --enable_bucket --min_bucket_reso=256 --max_bucket_reso=1536 --pretrained_model_name_or_path="C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors" --train_data_dir="C:/Users/isman/Pictures/fluid art\img" --reg_data_dir="C:/Users/isman/Pictures/fluid art\reg" --resolution="1024,1024" --output_dir="C:/Users/isman/Pictures/fluid art\model" --logging_dir="C:/Users/isman/Pictures/fluid art\log" --network_alpha="1" --save_model_as=safetensors --network_module=networks.lora --network_dim=8 --output_name="fluidart" --lr_scheduler_num_cycles="10" --no_half_vae --learning_rate="0.0004" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="8000" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --cache_latents --cache_latents_to_disk --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False --max_grad_norm="1" --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0
prepare tokenizers
Using DreamBooth method.
prepare images.
found directory C:\Users\isman\Pictures\fluid art\img\20_sks fluidart contains 20 image files
No caption file found for 20 images. Training will continue without captions for these images.
If class token exists, it will be used.
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpeg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (1).webp
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (10).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (11).jpg
C:\Users\isman\Pictures\fluid art\img\20_sks fluidart\fluidart (12).jpg... and 15 more
found directory C:\Users\isman\Pictures\fluid art\reg\1_fluidart contains 0 image files
ignore subset with image_dir='C:\Users\isman\Pictures\fluid art\reg\1_fluidart': no images found
400 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (1024, 1024)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 1536
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "C:\Users\isman\Pictures\fluid art\img\20_sks fluidart"
    image_count: 20
    num_repeats: 20
    shuffle_caption: False
    keep_tokens: 0
    keep_tokens_separator:
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    caption_prefix: None
    caption_suffix: None
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1, token_warmup_step: 0
    is_reg: False
    class_tokens: sks fluidart
    caption_extension: .caption

[Dataset 0]
loading image sizes.
100%|██████████| 20/20 [00:00<00:00, 526.31it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats):
bucket 0: resolution (1024, 1024), count: 400
mean ar error (without repeats): 0.0
preparing accelerator
loading model for process 0/1
load StableDiffusion checkpoint: C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors
building U-Net
loading U-Net from checkpoint
U-Net:
building text encoders
loading text encoders from checkpoint
text encoder 1:
text encoder 2:
building VAE
loading VAE from checkpoint
VAE:
Enable xformers for U-Net
A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
import network module: networks.lora
[Dataset 0]
caching latents.
checking cache validity...
100%|██████████| 20/20 [00:00<00:00, 500.00it/s]
caching latents...
0it [00:00, ?it/s]
create LoRA network. base dim (rank): 8, alpha: 1.0
neuron dropout: p=None, rank dropout: p=None, module dropout: p=None
create LoRA for Text Encoder 1:
create LoRA for Text Encoder 2:
create LoRA for Text Encoder: 264 modules.
create LoRA for U-Net: 722 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
use Adafactor optimizer | {'scale_parameter': False, 'relative_step': False, 'warmup_init': False}
because max_grad_norm is set, clip_grad_norm is enabled.
consider setting it to 0 to disable it
the scheduler constant_with_warmup may be a better choice
running training
  num train images * repeats: 400
  num reg images: 0
  num batches per epoch: 400
  num epochs: 20
  batch size per device: 1
  gradient accumulation steps = 1
  total optimization steps: 8000
steps:   0%| | 0/8000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train_network.py", line 189, in <module>
    trainer.train(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\train_network.py", line 781, in train
    noise_pred = self.call_unet(
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\sdxl_train_network.py", line 169, in call_unet
    noise_pred = unet(noisy_latents, timesteps, text_embedding, vector_embedding)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 636, in forward
    return model_forward(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\utils\operations.py", line 624, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1106, in forward
    h = call_module(module, h, emb, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 1090, in call_module
    x = layer(x, context)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 745, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 668, in forward
    output = self.forward_body(hidden_states, context, timestep)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 650, in forward_body
    hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 594, in forward
    hidden_states = module(hidden_states)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\library\sdxl_original_unet.py", line 572, in forward
    hidden_states, gate = self.proj(hidden_states).chunk(2, dim=-1)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\networks\lora.py", line 87, in forward
    org_forwarded = self.org_forward(x)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 14.31 GiB already allocated; 0 bytes free; 14.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
steps:   0%| | 0/8000 [00:09<?, ?it/s]
Traceback (most recent call last):
  ...
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 47, in main
    args.func(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 986, in launch_command
    simple_launcher(args)
  File "C:\Users\isman\Documents\KOHYA\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\isman\\Documents\\KOHYA\\kohya_ss\\venv\\Scripts\\python.exe', './sdxl_train_network.py', '--enable_bucket', '--min_bucket_reso=256', '--max_bucket_reso=1536', '--pretrained_model_name_or_path=C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors', '--train_data_dir=C:/Users/isman/Pictures/fluid art\\img', '--reg_data_dir=C:/Users/isman/Pictures/fluid art\\reg', '--resolution=1024,1024', '--output_dir=C:/Users/isman/Pictures/fluid art\\model', '--logging_dir=C:/Users/isman/Pictures/fluid art\\log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--network_dim=8', '--output_name=fluidart', '--lr_scheduler_num_cycles=10', '--no_half_vae', '--learning_rate=0.0004', '--lr_scheduler=constant', '--train_batch_size=1', '--max_train_steps=8000', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--cache_latents_to_disk', '--optimizer_type=Adafactor', '--optimizer_args', 'scale_parameter=False', 'relative_step=False', 'warmup_init=False', '--max_grad_norm=1', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale', '--noise_offset=0.0']' returned non-zero exit status 1.
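Summing up the three failure modes in this log: (1) both sdxl_train.py runs OOM because a 2.57B-parameter full fine-tune cannot fit in 8 GiB; (2) the first sdxl_train_network.py run aborts on the assertion because --cache_text_encoder_outputs is combined with text-encoder LoRA training, which the script forbids; (3) the last run drops that flag but still OOMs, with gradient checkpointing off and both text-encoder LoRAs enabled. A sketch of a lower-VRAM variant of the same LoRA command follows; --gradient_checkpointing, --network_train_unet_only and --cache_text_encoder_outputs are existing sd-scripts options, but this particular combination is my suggestion rather than something taken from the log, and it doubles the step count for a regularisation folder that is actually empty, so verify against ./sdxl_train_network.py --help and your own settings before relying on it.

import subprocess

# Same LoRA run as above, with memory-oriented changes:
#   --gradient_checkpointing      trades compute for a large activation saving
#   --network_train_unet_only     skips the two text-encoder LoRAs entirely
#   --cache_text_encoder_outputs  legal again once the text encoders are frozen
# The remaining flags are carried over from the command in the log; the empty
# reg_data_dir and the bucket-reso limits (ignored with bucket_no_upscale) are dropped.
cmd = [
    "accelerate", "launch", "--num_cpu_threads_per_process=2", "./sdxl_train_network.py",
    "--pretrained_model_name_or_path=C:/Users/isman/Documents/ComfyUI_windows_portable/ComfyUI/models/checkpoints/zavychromaxl_v30.safetensors",
    "--train_data_dir=C:/Users/isman/Pictures/fluid art/img",
    "--output_dir=C:/Users/isman/Pictures/fluid art/model",
    "--logging_dir=C:/Users/isman/Pictures/fluid art/log",
    "--resolution=1024,1024", "--enable_bucket", "--bucket_no_upscale", "--bucket_reso_steps=64",
    "--network_module=networks.lora", "--network_dim=8", "--network_alpha=1",
    "--network_train_unet_only",
    "--cache_latents", "--cache_latents_to_disk", "--cache_text_encoder_outputs",
    "--gradient_checkpointing", "--xformers", "--no_half_vae",
    "--optimizer_type=Adafactor",
    "--optimizer_args", "scale_parameter=False", "relative_step=False", "warmup_init=False",
    "--learning_rate=0.0004", "--lr_scheduler=constant",
    "--train_batch_size=1", "--max_train_steps=8000", "--save_every_n_epochs=1",
    "--mixed_precision=fp16", "--save_precision=fp16",
    "--save_model_as=safetensors", "--output_name=fluidart",
    "--max_data_loader_n_workers=0",
]
subprocess.run(cmd, cwd=r"C:\Users\isman\Documents\KOHYA\kohya_ss", check=True)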