- ork_weights=100img_1-5.safetensors --prompt="man on the moon"
- 2023-03-28 07:01:52.416152: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
- To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
- 2023-03-28 07:01:52.539778: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
- 2023-03-28 07:01:52.945766: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
- 2023-03-28 07:01:52.945835: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/nvidia/lib:/usr/local/nvidia/lib64
- 2023-03-28 07:01:52.945843: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
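The three `libnvinfer` / TF-TRT lines above are warnings, not errors: they only matter if TensorRT inference is actually needed, and they do not cause the failure later in this log. If TensorRT is installed somewhere outside the paths listed in the message, one possible workaround is extending `LD_LIBRARY_PATH` before launching (the directory below is an assumption — substitute wherever `libnvinfer.so.7` lives on your system):

```shell
# Hypothetical TensorRT install location; adjust to your actual path.
# Only needed if you want TF-TRT; otherwise these warnings can be ignored.
export LD_LIBRARY_PATH=/usr/local/tensorrt/lib:$LD_LIBRARY_PATH
```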
- load StableDiffusion checkpoint
- loading u-net: <All keys matched successfully>
- loading vae: <All keys matched successfully>
- loading text encoder: <All keys matched successfully>
- Replace CrossAttention.forward to use NAI style Hypernetwork and FlashAttention
- loading tokenizer
- prepare tokenizer
- import network module: networks.lora
- load network weights from: 100img_1-5.safetensors
- create LoRA network from weights
- create LoRA for Text Encoder: 72 modules.
- create LoRA for U-Net: 192 modules.
- enable LoRA for text encoder
- enable LoRA for U-Net
- weights are loaded: <All keys matched successfully>
- /workspace/sd-scripts/gen_img_diffusers.py:466: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
- "_class_name": "DDIMScheduler",
- "_diffusers_version": "0.12.1",
- "beta_end": 0.012,
- "beta_schedule": "scaled_linear",
- "beta_start": 0.00085,
- "clip_sample": true,
- "num_train_timesteps": 1000,
- "prediction_type": "epsilon",
- "set_alpha_to_one": true,
- "steps_offset": 0,
- "trained_betas": null
- }
- is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
- /workspace/sd-scripts/gen_img_diffusers.py:479: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
- "_class_name": "DDIMScheduler",
- "_diffusers_version": "0.12.1",
- "beta_end": 0.012,
- "beta_schedule": "scaled_linear",
- "beta_start": 0.00085,
- "clip_sample": true,
- "num_train_timesteps": 1000,
- "prediction_type": "epsilon",
- "set_alpha_to_one": true,
- "steps_offset": 1,
- "trained_betas": null
- }
- has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
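Both `FutureWarning`s above come from the same file and tell you exactly what to change: the checkpoint's `scheduler/scheduler_config.json` should set `steps_offset` to 1 and `clip_sample` to false. A sketch of the corrected config, built purely from the values the warnings print (everything else left as the log shows it):

```json
{
  "_class_name": "DDIMScheduler",
  "_diffusers_version": "0.12.1",
  "beta_end": 0.012,
  "beta_schedule": "scaled_linear",
  "beta_start": 0.00085,
  "clip_sample": false,
  "num_train_timesteps": 1000,
  "prediction_type": "epsilon",
  "set_alpha_to_one": true,
  "steps_offset": 1,
  "trained_betas": null
}
```

With both fields updated, neither `deprecate(...)` call should fire. These are deprecation notices only; they are unrelated to the crash at the end of the log.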
- pipeline is ready.
- iteration 1/1
- prompt 1/1: man on the moon
- 0%| | 0/50 [00:00<?, ?it/s]
- Traceback (most recent call last):
- File "/workspace/sd-scripts/gen_img_diffusers.py", line 3070, in <module>
- main(args)
- File "/workspace/sd-scripts/gen_img_diffusers.py", line 2855, in main
- prev_image = process_batch(batch_data, highres_fix)[0]
- File "/workspace/sd-scripts/gen_img_diffusers.py", line 2619, in process_batch
- images = pipe(
- File "/usr/local/lib/python3.10/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
- return func(*args, **kwargs)
- File "/workspace/sd-scripts/gen_img_diffusers.py", line 972, in __call__
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
- File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_condition.py", line 481, in forward
- sample, res_samples = downsample_block(
- File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/dist-packages/diffusers/models/unet_2d_blocks.py", line 789, in forward
- hidden_states = attn(
- File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/dist-packages/diffusers/models/transformer_2d.py", line 265, in forward
- hidden_states = block(
- File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
- return forward_call(*input, **kwargs)
- File "/usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py", line 291, in forward
- attn_output = self.attn1(
- File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
- return forward_call(*input, **kwargs)
- TypeError: replace_unet_cross_attn_to_memory_efficient.<locals>.forward_flash_attn() got an unexpected keyword argument 'encoder_hidden_states'
- root@4e6ee48bf930:/workspace#
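The actual failure is the last line of the traceback: the monkey-patched FlashAttention `forward` (installed by the "Replace CrossAttention.forward" step earlier in the log) does not accept the `encoder_hidden_states` keyword that this diffusers version's attention blocks now pass — a classic symptom of an argument rename between diffusers releases. Pinning diffusers to the version sd-scripts expects (here `0.12.1` per the scheduler config) or disabling the FlashAttention replacement are the straightforward fixes. As a purely illustrative sketch (names and signatures below are assumptions, not sd-scripts code), the mismatch could also be bridged with a small keyword-translating wrapper around the old-style forward:

```python
# Hypothetical adapter: wrap a forward() that expects old-style keywords
# (`context=`, `mask=`) so it can be called with the newer diffusers
# keywords (`encoder_hidden_states=`, `attention_mask=`).
def adapt_attention_kwargs(old_forward):
    def forward(hidden_states, encoder_hidden_states=None,
                attention_mask=None, **kwargs):
        # Translate new keyword names to the names the patched
        # implementation was written against; drop unknown extras.
        return old_forward(hidden_states,
                           context=encoder_hidden_states,
                           mask=attention_mask)
    return forward
```

Whether this is appropriate depends on what the patched `forward_flash_attn` actually accepts; matching the diffusers version sd-scripts was developed against is the safer route.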