[Dataset 0]
loading image sizes.
100%|██████████| 328/328 [00:00<00:00, 2074.11it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats)
bucket 0: resolution (512, 512), count: 640
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net:
loading vae:
loading text encoder:
Replace CrossAttention.forward to use xformers
[Dataset 0]
caching latents.
  0%|          | 0/328 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\kohya_ss\train_db.py", in <module>
    train(args)
  File "C:\kohya_ss\train_db.py", line 120, in train
    train_dataset_group.cache_latents(vae, args.vae_batch_size, args.cache_latents_to_disk, accelerator.is_main_process)
  File "C:\kohya_ss\library\train_util.py", line 1391, in cache_latents
    dataset.cache_latents(vae, vae_batch_size, cache_to_disk, is_main_process)
  File "C:\kohya_ss\library\train_util.py", line 805, in cache_latents
    latents = vae.encode(img_tensors).latent_dist.sample().to("cpu")
  File "C:\kohya_ss\venv\lib\site-packages\diffusers\models\vae.py", line 566, in encode
    h = self.encoder(x)
  File "C:\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\venv\lib\site-packages\diffusers\models\vae.py", line 130, in forward
    sample = self.conv_in(sample)
  File "C:\kohya_ss\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\kohya_ss\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'

Traceback (most recent call last):
  File "C:\Users\Kashyap\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Kashyap\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
  File "C:\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
    args.func(args)
  File "C:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "C:\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\\kohya_ss\\venv\\Scripts\\python.exe', 'train_db.py', '--enable_bucket', '--pretrained_model_name_or_path=C:/SD/stable-diffusion-webui/models/Stable-diffusion/realisticVisionV20_v20.safetensors', '--train_data_dir=C:/SD/training/Kohya video/img', '--reg_data_dir=C:/SD/training/Kohya video/reg', '--resolution=512,512', '--output_dir=C:/SD/training/Kohya video/model', '--logging_dir=C:/SD/training/Kohya video/log', '--save_model_as=safetensors', '--output_name=test1', '--max_data_loader_n_workers=0', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=192', '--train_batch_size=1', '--max_train_steps=1920', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--seed=1234', '--cache_latents', '--optimizer_type=AdamW8bit', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
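The failing call is a convolution inside the VAE's encoder running in float16 ("Half") on the CPU. A likely cause with `--mixed_precision=fp16` is that the VAE never reached a CUDA device (for example, a CPU-only PyTorch build in the venv), and the 1.12-era PyTorch shown in the traceback has no float16 convolution kernel for its CPU backend. A minimal sketch (hypothetical, not kohya_ss code) of the dtype issue and the float32 workaround:

```python
import torch

conv = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)

# The crash in the log: a float16 convolution dispatched to the CPU backend.
# On older PyTorch builds (e.g. the 1.12 paths in the traceback) this raises:
#   RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'
# (newer PyTorch releases may run it without error).
try:
    conv.half()(x.half())
    print("half conv ran (this PyTorch build supports fp16 conv on CPU)")
except RuntimeError as e:
    print(f"half conv failed: {e}")

# The float32 path works on CPU, which is why keeping the model in float32
# (or installing a CUDA build so it runs on the GPU) avoids the crash.
conv.float()
y = conv(x.float())
print(tuple(y.shape))  # (1, 8, 32, 32)
```

Remedies commonly suggested for this error are reinstalling a CUDA-enabled PyTorch build in the venv so latent caching runs on the GPU, or rerunning with `--mixed_precision=no` so the caching pass stays in float32; which one applies depends on whether a GPU is actually available.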