--save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --cache_latents --optimizer_type="AdamW" --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale
2023-04-01 06:00:22.256394: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-04-01 06:00:23.292624: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-04-01 06:00:23.292749: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-04-01 06:00:23.292774: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-04-01 06:00:26.507114: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-04-01 06:00:27.530150: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2023-04-01 06:00:27.530286: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2023-04-01 06:00:27.530319: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/lib/python3.10/site-packages/xformers/_C.so)
WARNING:root:WARNING: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /usr/local/lib/python3.10/site-packages/xformers/_C.so)
Need to compile C++ extensions to get sparse attention support. Please run python setup.py build develop
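
The two GLIBC lines above are the root cause of the crash at the end of this log: the prebuilt xformers extension (_C.so) was linked against GLIBC_2.32, which this container's libc does not provide, so none of xformers' compiled ops get registered. A quick way to confirm the mismatch (a minimal sketch; the .so path is taken from the warning above):

    # glibc version this system actually provides (the wheel wants 2.32)
    ldd --version | head -n1
    # libraries and versioned symbols the xformers extension cannot resolve
    ldd /usr/local/lib/python3.10/site-packages/xformers/_C.so | grep 'not found'
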
prepare tokenizer
Use DreamBooth method.
prepare images.
found directory out/img/40_bottle icon contains 39 image files
1560 train images with repeating.
0 reg images.
no regularization images
[Dataset 0]
  batch_size: 1
  resolution: (512, 512)
  enable_bucket: True
  min_bucket_reso: 256
  max_bucket_reso: 1024
  bucket_reso_steps: 64
  bucket_no_upscale: True

  [Subset 0 of Dataset 0]
    image_dir: "out/img/40_bottle icon"
    image_count: 39
    num_repeats: 40
    shuffle_caption: False
    keep_tokens: 0
    caption_dropout_rate: 0.0
    caption_dropout_every_n_epoches: 0
    caption_tag_dropout_rate: 0.0
    color_aug: False
    flip_aug: False
    face_crop_aug_range: None
    random_crop: False
    token_warmup_min: 1
    token_warmup_step: 0
    is_reg: False
    class_tokens: bottle icon
    caption_extension: .caption

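
Note that num_repeats: 40 and class_tokens: bottle icon above are both parsed from the image directory name: kohya_ss's DreamBooth-style loader expects folders named <repeats>_<class prompt>, so a listing of the training directory (reconstructed from the log) looks like:

    ls out/img
    # 40_bottle icon    <- 39 files x 40 repeats = 1560 training examples
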
[Dataset 0]
loading image sizes.
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:00<00:00, 3065.49it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically
number of images per bucket (including repeats)
bucket 0: resolution (512, 512), count: 1560
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use xformers
[Dataset 0]
caching latents.
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 39/39 [00:05<00:00,  6.58it/s]
import network module: networks.lora
create LoRA network. base dim (rank): 8, alpha: 1.0
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
use AdamW optimizer | {}
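
The "base dim (rank): 8, alpha: 1.0" line reflects --network_dim=8 and --network_alpha=1 from the launch command; in kohya's LoRA implementation the learned update is scaled by alpha/dim, i.e. 1/8 here, which is worth keeping in mind when comparing learning rates against runs where alpha equals the rank.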
running training
  num train images * repeats: 1560
  num reg images: 0
  num batches per epoch: 1560
  num epochs: 1
  batch size per device: 1
  gradient accumulation steps = 1
  total optimization steps: 1560
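
For reference, the step count follows directly from the dataset block above: 39 images x 40 repeats = 1560 examples per epoch, and at batch size 1 for a single epoch that is 1560 optimization steps (--lr_warmup_steps=156 in the launch command is 10% of that).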
steps:   0%|                                                                                                                                                              | 0/1560 [00:00<?, ?it/s]
epoch 1/1
Traceback (most recent call last):
  File "/kohya_ss/train_network.py", line 711, in <module>
    train(args)
  File "/kohya_ss/train_network.py", line 546, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/utils/operations.py", line 490, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/usr/local/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 381, in forward
    sample, res_samples = downsample_block(
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 612, in forward
    hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention.py", line 216, in forward
    hidden_states = block(hidden_states, context=encoder_hidden_states, timestep=timestep)
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/models/attention.py", line 484, in forward
    hidden_states = self.attn1(norm_hidden_states) + hidden_states
  File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/kohya_ss/library/train_util.py", line 1767, in forward_xformers
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)  # picks the best available implementation
  File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 865, in memory_efficient_attention
    return op.apply(query, key, value, attn_bias, p).reshape(output_shape)
  File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 319, in forward
    out, lse = cls.FORWARD_OPERATOR(
  File "/usr/local/lib/python3.10/site-packages/xformers/ops.py", line 46, in no_such_operator
    raise RuntimeError(
RuntimeError: No such operator xformers::efficient_attention_forward_cutlass - did you forget to build xformers with `python setup.py develop`?
steps:   0%|                                                                                                                                                              | 0/1560 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=/v1-5-pruned.ckpt', '--train_data_dir=out/img', '--resolution=512,512', '--output_dir=out/model', '--logging_dir=out/log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-5', '--unet_lr=0.0001', '--network_dim=8', '--output_name=bottle', '--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=156', '--train_batch_size=1', '--max_train_steps=1560', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--optimizer_type=AdamW', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
root@eea5853c3cdd:/kohya_ss#
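
The RuntimeError confirms what the GLIBC warnings at the top predicted: with _C.so unloadable, xformers registers a stub that raises "No such operator" for every compiled op, so efficient_attention_forward_cutlass is unavailable and training aborts on the first step. Two ways out (a sketch; exact package choices are assumptions, not taken from this log):

    # Option 1: train without xformers attention: remove --xformers from the
    # launch flags above (kohya's train_network.py also accepts --mem_eff_attn
    # as a slower memory-efficient fallback).

    # Option 2: replace the wheel with a build that matches this system:
    pip uninstall -y xformers
    pip install --no-cache-dir xformers   # pick a build matching the installed torch/CUDA
    # or compile from source so _C.so links against the local glibc:
    pip install -v --no-build-isolation git+https://github.com/facebookresearch/xformers.git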