- --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --cache_latents --optimizer_type="AdamW" --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale
- 2023-04-01 06:00:22.256394: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
- 2023-04-01 06:00:23.292624: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
- 2023-04-01 06:00:23.292749: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
- 2023-04-01 06:00:23.292774: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
- 2023-04-01 06:00:26.507114: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
- 2023-04-01 06:00:27.530150: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
- 2023-04-01 06:00:27.530286: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
- 2023-04-01 06:00:27.530319: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
- /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/lib/python3.10/site-packages/xformers/_C.so)
- WARNING:root:WARNING: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/lib/python3.10/site-packages/xformers/_C.so)
- Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
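The GLIBC warnings above are what ultimately cause the crash further down: the prebuilt xformers extension (`_C.so`) was compiled against GLIBC_2.32, which this container's libc does not provide, so the native attention ops never get registered. A quick way to confirm the mismatch from Python (a minimal sketch, assuming a glibc-based Linux image):

```python
import ctypes

# Ask libc which glibc version the container actually ships; anything below
# 2.32 explains why the prebuilt xformers _C.so above refuses to load.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print("glibc:", libc.gnu_get_libc_version().decode())
```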
- prepare tokenizer
- Use DreamBooth method.
- prepare images.
- found directory out/img/40_bottle icon contains 39 image files
- 1560 train images with repeating.
- 0 reg images.
- no regularization images found
- [Dataset 0]
- batch_size: 1
- resolution: (512, 512)
- enable_bucket: True
- min_bucket_reso: 256
- max_bucket_reso: 1024
- bucket_reso_steps: 64
- bucket_no_upscale: True
- [Subset 0 of Dataset 0]
- image_dir: "out/img/40_bottle icon"
- image_count: 39
- num_repeats: 40
- shuffle_caption: False
- keep_tokens: 0
- caption_dropout_rate: 0.0
- caption_dropout_every_n_epoches: 0
- caption_tag_dropout_rate: 0.0
- color_aug: False
- flip_aug: False
- face_crop_aug_range: None
- random_crop: False
- token_warmup_min: 1,
- token_warmup_step: 0,
- is_reg: False
- class_tokens: bottle icon
- caption_extension: .caption
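The `num_repeats: 40` and `class_tokens: bottle icon` values above come straight from the training folder name `40_bottle icon`: in the DreamBooth-style layout the leading number is the repeat count and the rest becomes the class tokens. A tiny illustrative parser of that convention (the function name is mine, not kohya_ss's actual code):

```python
def parse_dreambooth_dir(name: str) -> tuple[int, str]:
    # "40_bottle icon" -> (40, "bottle icon"): repeats before the first "_",
    # class tokens after it.
    repeats, _, class_tokens = name.partition("_")
    return int(repeats), class_tokens

print(parse_dreambooth_dir("40_bottle icon"))  # (40, 'bottle icon')
```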
- [Dataset 0]
- loading image sizes.
- 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:00<00:00, 3065.49it/s]
- make buckets
- min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
- number of images per bucket (including repeats)
- bucket 0: resolution (512, 512), count: 1560
- mean ar error (without repeats): 0.0
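All 39 images landed in a single 512x512 bucket with zero aspect-ratio error. Roughly, with `bucket_no_upscale` each image keeps its own aspect ratio and is only scaled down until its area fits the training resolution, then snapped to a multiple of `bucket_reso_steps` (64). A rough sketch of that idea (an approximation for illustration, not kohya_ss's exact bucketing code):

```python
import math

def bucket_reso(w: int, h: int, max_area: int = 512 * 512, steps: int = 64) -> tuple[int, int]:
    # Never upscale: only shrink until the area fits, then round down to a
    # multiple of the bucket step.
    scale = min(1.0, math.sqrt(max_area / (w * h)))
    return (int(w * scale) // steps * steps, int(h * scale) // steps * steps)

print(bucket_reso(1024, 1024))  # (512, 512)
print(bucket_reso(1920, 1080))  # (640, 384)
```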
- prepare accelerator
- Using accelerator 0.15.0 or above.
- load StableDiffusion checkpoint
- loading u-net: <All keys matched successfully>
- loading vae: <All keys matched successfully>
- loading text encoder: <All keys matched successfully>
- Replace CrossAttention.forward to use xformers
- [Dataset 0]
- caching latents.
- 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:05<00:00, 6.58it/s]
- import network module: networks.lora
- create LoRA network. base dim (rank): 8, alpha: 1.0
- create LoRA for Text Encoder: 72 modules.
- create LoRA for U-Net: 192 modules.
- enable LoRA for text encoder
- enable LoRA for U-Net
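For reference, `base dim (rank): 8, alpha: 1.0` means every adapted weight gets a rank-8 update scaled by alpha/rank = 1/8. A minimal sketch of the LoRA idea with those numbers (illustrative only, not the `networks.lora` implementation):

```python
import torch

rank, alpha = 8, 1.0
W = torch.randn(320, 320)           # a frozen attention projection, for example
A = torch.randn(rank, 320) * 0.01   # "lora_down", trainable, small random init
B = torch.zeros(320, rank)          # "lora_up", trainable, zero-initialized

# Effective weight used at inference: frozen W plus the scaled low-rank update.
W_effective = W + (alpha / rank) * (B @ A)
print(W_effective.shape)  # torch.Size([320, 320])
```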
- prepare optimizer, data loader etc.
- use AdamW optimizer | {}
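`use AdamW optimizer | {}` means plain AdamW with an empty dict of extra optimizer arguments; the command's `--text_encoder_lr=5e-5` and `--unet_lr=0.0001` end up as two parameter groups with different learning rates. A minimal sketch of that setup (the parameter lists below are placeholders, not the real LoRA modules):

```python
import torch

# Placeholders standing in for the LoRA parameters created above.
text_encoder_lora_params = [torch.nn.Parameter(torch.zeros(8, 768))]
unet_lora_params = [torch.nn.Parameter(torch.zeros(8, 320))]

optimizer = torch.optim.AdamW([
    {"params": text_encoder_lora_params, "lr": 5e-5},  # --text_encoder_lr
    {"params": unet_lora_params, "lr": 1e-4},          # --unet_lr
])
print(optimizer.param_groups[0]["lr"], optimizer.param_groups[1]["lr"])
```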
- running training
- num train images * repeats: 1560
- num reg images: 0
- num batches per epoch: 1560
- num epochs: 1
- batch size per device: 1
- gradient accumulation steps = 1
- total optimization steps: 1560
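The run summary follows directly from the dataset and command settings; a quick check of the arithmetic (gradient accumulation is 1, as shown above):

```python
images, repeats = 39, 40
train_images = images * repeats               # 1560 images including repeats
batch_size, epochs = 1, 1
steps_per_epoch = train_images // batch_size  # 1560 batches per epoch
total_steps = steps_per_epoch * epochs        # 1560, matches --max_train_steps
warmup = 156                                  # --lr_warmup_steps
print(train_images, steps_per_epoch, total_steps, warmup / total_steps)  # ... 0.1
```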
- steps: 0%| | 0/1560 [00:00<?, ?it/s]epoch 1/1
- Traceback (most recent call last):
- File "/kohya_ss/train_network.py", line 711, in <module>
- train(args)
- File "/kohya_ss/train_network.py", line 546, in train
- noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
- File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/site-packages/accelerate/utils/operations.py", line 490, in __call__
- return convert_to_fp32(self.model_forward(*args, **kwargs))
- File "/usr/local/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
- return func(*args, **kwargs)
- File "/usr/local/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 381, in forward
- sample, res_samples = downsample_block(
- File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 612, in forward
- hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
- File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention.py", line 216, in forward
- hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
- File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention.py", line 484, in forward
- hidden_states = self.attn1(norm_hidden_states) + hidden_states
- File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
- return forward_call(*input, **kwargs)
- File "/kohya_ss/library/train_util.py", line 1767, in forward_xformers
- out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None) # picks the optimal implementation automatically
- File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 865, in memory_efficient_attention
- return op.apply(query, key, value, attn_bias, p).reshape(output_shape)
- File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 319, in forward
- out, lse = cls.FORWARD_OPERATOR(
- File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 46, in no_such_operator
- raise RuntimeError(
- RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
- steps: 0%| | 0/1560 [00:00<?, ?it/s]
- Traceback (most recent call last):
- File "/usr/local/bin/accelerate", line 8, in <module>
- sys.exit(main())
- File "/usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
- args.func(args)
- File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1104, in launch_command
- simple_launcher(args)
- File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 567, in simple_launcher
- raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
- subprocess.CalledProcessError: Command '['/usr/local/bin/python', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=/v1-5-pruned.ckpt', '--train_data_dir=out/img', '--resolution=512,512', '--output_dir=out/model', '--logging_dir=out/log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-5', '--unet_lr=0.0001', '--network_dim=8', '--output_name=bottle', '--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=156', '--train_batch_size=1', '--max_train_steps=1560', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--optimizer_type=AdamW', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
- root@eea5853c3cdd:/kohya_ss#
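Before relaunching the full 1560-step job, it is worth confirming the fix with a standalone call to the operator that failed in the traceback. A minimal sketch (assumes a CUDA GPU and an xformers build that matches the installed torch; if the build is still broken this raises the same `No such operator xformers::efficient_attention_forward_cutlass` error):

```python
import torch
import xformers.ops

# Same call kohya's train_util.forward_xformers makes, on dummy fp16 tensors
# shaped like Stable Diffusion attention inputs (batch*heads, tokens, head_dim).
q = torch.randn(2, 64, 40, device="cuda", dtype=torch.float16)
k = torch.randn(2, 64, 40, device="cuda", dtype=torch.float16)
v = torch.randn(2, 64, 40, device="cuda", dtype=torch.float16)

out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
print("xformers memory-efficient attention OK:", out.shape)
```

If this still fails, installing an xformers wheel built for the container's Python, torch, and glibc combination, or rebuilding it from source as the error message itself suggests, is the usual way out.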