kopyl

Untitled

Mar 28th, 2023
ork_weights=100img_1-5.safetensors --prompt="man on the moon"
2023-03-28 07:01:52.416152: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-28 07:01:52.539778: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-03-28 07:01:52.945766: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-28 07:01:52.945835: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-03-28 07:01:52.945843: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
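(Note: the TF-TRT warnings above are incidental; they only matter if TensorRT inference is wanted and do not stop this run. A minimal diagnostic sketch, assuming only the standard ctypes module and the library names printed in the warning, to confirm whether the loader can actually find them:

import ctypes

# Try to dlopen the libraries the warning complains about.
# An OSError here confirms they are missing from LD_LIBRARY_PATH.
for name in ("libnvinfer.so.7", "libnvinfer_plugin.so.7"):
    try:
        ctypes.CDLL(name)
        print(f"{name}: found")
    except OSError as exc:
        print(f"{name}: not loadable ({exc})")
)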
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
loading text encoder: <All keys matched successfully>
Replace CrossAttention.forward to use NAI style Hypernetwork and FlashAttention
loading tokenizer
prepare tokenizer
import network module: networks.lora
load network weights from: 100img_1-5.safetensors
create LoRA network from weights
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
weights are loaded: <All keys matched successfully>
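(Note: the module counts above, 72 for the text encoder and 192 for the U-Net, are derived from the keys stored in the LoRA file. A minimal sketch that reproduces such a count from the safetensors file, assuming the common kohya-ss key layout with lora_te_/lora_unet_ prefixes and .lora_down/.lora_up/.alpha suffixes, which is an assumption and not shown in the log:

from safetensors.torch import load_file

# Load only the state dict of the LoRA file named in the log.
sd = load_file("100img_1-5.safetensors")

# Collapse per-tensor keys down to one entry per module
# (suffix names assumed from the usual kohya-ss LoRA layout).
modules = {k.rsplit(".", 2)[0] for k in sd}
te = sum(1 for m in modules if m.startswith("lora_te_"))
unet = sum(1 for m in modules if m.startswith("lora_unet_"))
print(f"text encoder modules: {te}, U-Net modules: {unet}")
)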
/workspace/sd-scripts/gen_img_diffusers.py:466: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.12.1",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": true,
  "steps_offset": 0,
  "trained_betas": null
}
 is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
/workspace/sd-scripts/gen_img_diffusers.py:479: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.12.1",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": true,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": true,
  "steps_offset": 1,
  "trained_betas": null
}
 has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
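(Note: both messages are only deprecation warnings from diffusers and do not stop generation; they ask for `steps_offset=1` and an explicit `clip_sample=False` in the scheduler config. A minimal sketch, assuming you construct the DDIMScheduler yourself with diffusers, with the two warned-about fields set as requested and the remaining values copied from the config printed above:

from diffusers import DDIMScheduler

# Same values as the printed config, with the two warned-about
# fields set the way the deprecation messages ask.
scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    clip_sample=False,     # warning 2: should be explicitly False
    set_alpha_to_one=True,
    steps_offset=1,        # warning 1: should be 1 instead of 0
    prediction_type="epsilon",
)
)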
pipeline is ready.
iteration 1/1
prompt 1/1: man on the moon
  0%|                                                                                                                                 | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/workspace/sd-scripts/gen_img_diffusers.py", line 3070, in <module>
    main(args)
  File "/workspace/sd-scripts/gen_img_diffusers.py", line 2855, in main
    prev_image = process_batch(batch_data, highres_fix)[0]
  File "/workspace/sd-scripts/gen_img_diffusers.py", line 2619, in process_batch
    images = pipe(
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/workspace/sd-scripts/gen_img_diffusers.py", line 972, in __call__
    noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py", line 481, in forward
    sample, res_samples = downsample_block(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_blocks.py", line 789, in forward
    hidden_states = attn(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py", line 291, in forward
    attn_output = self.attn1(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: replace_unet_cross_attn_to_memory_efficient.<locals>.forward_flash_attn() got an unexpected keyword argument 'encoder_hidden_states'
root@4e6ee48bf930:/workspace#
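(Note: the crash is a keyword-name mismatch. The installed diffusers (0.12.1) calls the cross-attention module with `encoder_hidden_states=...`, while the FlashAttention forward that gen_img_diffusers.py monkey-patches in via replace_unet_cross_attn_to_memory_efficient was written for an older API that passed the conditioning as `context=...`. A minimal compatibility sketch; `forward_flash_attn_impl` and `make_compatible` are hypothetical placeholder names, not the actual sd-scripts code:

# Hypothetical sketch: wrap an existing patched forward so it accepts
# both the old keyword name (`context`) and the new one
# (`encoder_hidden_states`) that newer diffusers versions pass.
def make_compatible(forward_flash_attn_impl):
    # forward_flash_attn_impl(self, x, context, mask) is assumed to be
    # the existing patched forward that only knows the old names.
    def forward(self, hidden_states, context=None, encoder_hidden_states=None,
                mask=None, attention_mask=None, **kwargs):
        # Map the new diffusers keyword names onto the old ones.
        if encoder_hidden_states is not None:
            context = encoder_hidden_states
        if attention_mask is not None:
            mask = attention_mask
        return forward_flash_attn_impl(self, hidden_states, context, mask)

    return forward

Adding `encoder_hidden_states` handling to the patched forward this way avoids the TypeError; updating sd-scripts to a version whose attention patch matches the installed diffusers release is the cleaner route.
)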