  1. [2024-09-25 15:07:08,094] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
  2. df: /root/.triton/autotune: No such file or directory
  3. [WARNING] On Ampere and higher architectures please use CUDA 11+
  4. [WARNING] On Ampere and higher architectures please use CUDA 11+
  5. [WARNING] On Ampere and higher architectures please use CUDA 11+
  6. [WARNING] On Ampere and higher architectures please use CUDA 11+
  7. [WARNING] On Ampere and higher architectures please use CUDA 11+
  8. [WARNING] On Ampere and higher architectures please use CUDA 11+
  9. W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779]
  10. W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] *****************************************
  11. W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
  12. W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] *****************************************
  13. [2024-09-25 15:07:11,803] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
  14. [2024-09-25 15:07:12,017] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
  15. [WARNING] On Ampere and higher architectures please use CUDA 11+
  16. [WARNING] On Ampere and higher architectures please use CUDA 11+
  17. [WARNING] On Ampere and higher architectures please use CUDA 11+
  18. [WARNING] On Ampere and higher architectures please use CUDA 11+
  19. [WARNING] On Ampere and higher architectures please use CUDA 11+
  20. [WARNING] On Ampere and higher architectures please use CUDA 11+
  21. [WARNING] On Ampere and higher architectures please use CUDA 11+
  22. [WARNING] On Ampere and higher architectures please use CUDA 11+
  23. [WARNING] On Ampere and higher architectures please use CUDA 11+
  24. [WARNING] On Ampere and higher architectures please use CUDA 11+
  25. [WARNING] On Ampere and higher architectures please use CUDA 11+
  26. [WARNING] On Ampere and higher architectures please use CUDA 11+
  27. [2024-09-25 15:07:12,682] [INFO] [comm.py:652:init_distributed] cdb=None
  28. [W925 15:07:12.581853686 Utils.hpp:164] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
  29. [W925 15:07:12.581877026 Utils.hpp:135] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function operator())
  30. [2024-09-25 15:07:12,905] [INFO] [comm.py:652:init_distributed] cdb=None
  31. [2024-09-25 15:07:12,905] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
  32. [W925 15:07:12.805362643 Utils.hpp:164] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
  33. [W925 15:07:12.805398363 Utils.hpp:135] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function operator())
  34. 09/25/2024 15:07:13 - INFO - __main__ - Distributed environment: DEEPSPEED Backend: nccl
  35. Num processes: 2
  36. Process index: 0
  37. Local process index: 0
  38. Device: cuda:0
  39.  
  40. Mixed precision type: bf16
  41. ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'gradient_clipping': 'auto', 'steps_per_print': inf, 'bf16': {'enabled': True}, 'fp16': {'enabled': False}}
  42.  
  43. You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
  44. 09/25/2024 15:07:13 - INFO - __main__ - Distributed environment: DEEPSPEED Backend: nccl
  45. Num processes: 2
  46. Process index: 1
  47. Local process index: 1
  48. Device: cuda:1
  49.  
  50. Mixed precision type: bf16
  51. ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'gradient_clipping': 'auto', 'steps_per_print': inf, 'bf16': {'enabled': True}, 'fp16': {'enabled': False}}
  52.  
  53. You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
  54. You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
  55. Downloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 14614.30it/s]
  56. Downloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 13911.46it/s]
  57. Loading checkpoint shards: 100%|██████████████████| 2/2 [00:02<00:00, 1.34s/it]
  58. Loading checkpoint shards: 100%|██████████████████| 2/2 [00:02<00:00, 1.35s/it]
  59. Fetching 3 files: 100%|█████████████████████████| 3/3 [00:00<00:00, 7021.71it/s]
  60. {'axes_dims_rope'} was not found in config. Values will be initialized to default values.
  61. Fetching 3 files: 100%|████████████████████████| 3/3 [00:00<00:00, 12958.71it/s]
  62. Using decoupled weight decay
  63. Using decoupled weight decay
  64. [2024-09-25 15:07:28,735] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
  65. [2024-09-25 15:07:29,079] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.15.1, git-hash=unknown, git-branch=unknown
  66. [2024-09-25 15:07:29,079] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
  67. x2-h100:38416:38416 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
  68. x2-h100:38416:38416 [0] NCCL INFO Bootstrap : Using eth0:10.0.0.16<0>
  69. x2-h100:38416:38416 [0] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
  70. x2-h100:38416:38416 [0] NCCL INFO cudaDriverVersion 12020
  71. NCCL version 2.20.5+cuda12.4
  72. x2-h100:38417:38417 [1] NCCL INFO cudaDriverVersion 12020
  73. x2-h100:38417:38417 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
  74. x2-h100:38417:38417 [1] NCCL INFO Bootstrap : Using eth0:10.0.0.16<0>
  75. x2-h100:38417:38417 [1] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
  76. x2-h100:38416:38882 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
  77. x2-h100:38416:38882 [0] NCCL INFO Failed to open libibverbs.so[.1]
  78. x2-h100:38416:38882 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
  79. x2-h100:38416:38882 [0] NCCL INFO NET/Socket : Using [0]eth0:10.0.0.16<0>
  80. x2-h100:38416:38882 [0] NCCL INFO Using non-device net plugin version 0
  81. x2-h100:38416:38882 [0] NCCL INFO Using network Socket
  82. x2-h100:38417:38883 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
  83. x2-h100:38417:38883 [1] NCCL INFO Failed to open libibverbs.so[.1]
  84. x2-h100:38417:38883 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
  85. x2-h100:38417:38883 [1] NCCL INFO NET/Socket : Using [0]eth0:10.0.0.16<0>
  86. x2-h100:38417:38883 [1] NCCL INFO Using non-device net plugin version 0
  87. x2-h100:38417:38883 [1] NCCL INFO Using network Socket
  88. x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0xa404fa77c007f07f - Init START
  89. x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0xa404fa77c007f07f - Init START
  90. x2-h100:38417:38883 [1] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
  91. x2-h100:38417:38883 [1] NCCL INFO Setting affinity for GPU 1 to ffff,ffffff00,00000000
  92. x2-h100:38416:38882 [0] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
  93. x2-h100:38416:38882 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffffffff
  94. x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
  95. x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
  96. x2-h100:38416:38882 [0] NCCL INFO Channel 00/04 : 0 1
  97. x2-h100:38416:38882 [0] NCCL INFO Channel 01/04 : 0 1
  98. x2-h100:38416:38882 [0] NCCL INFO Channel 02/04 : 0 1
  99. x2-h100:38416:38882 [0] NCCL INFO Channel 03/04 : 0 1
  100. x2-h100:38416:38882 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
  101. x2-h100:38417:38883 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
  102. x2-h100:38416:38882 [0] NCCL INFO P2P Chunksize set to 131072
  103. x2-h100:38417:38883 [1] NCCL INFO P2P Chunksize set to 131072
  104. x2-h100:38417:38883 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
  105. x2-h100:38417:38883 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
  106. x2-h100:38417:38883 [1] NCCL INFO Channel 02 : 1[1] -> 0[0] via SHM/direct/direct
  107. x2-h100:38417:38883 [1] NCCL INFO Channel 03 : 1[1] -> 0[0] via SHM/direct/direct
  108. x2-h100:38416:38882 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
  109. x2-h100:38416:38882 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
  110. x2-h100:38416:38882 [0] NCCL INFO Channel 02 : 0[0] -> 1[1] via SHM/direct/direct
  111. x2-h100:38416:38882 [0] NCCL INFO Channel 03 : 0[0] -> 1[1] via SHM/direct/direct
  112. x2-h100:38416:38882 [0] NCCL INFO Connected all rings
  113. x2-h100:38416:38882 [0] NCCL INFO Connected all trees
  114. x2-h100:38417:38883 [1] NCCL INFO Connected all rings
  115. x2-h100:38417:38883 [1] NCCL INFO Connected all trees
  116. x2-h100:38417:38883 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
  117. x2-h100:38416:38882 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
  118. x2-h100:38417:38883 [1] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
  119. x2-h100:38416:38882 [0] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
  120. x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0xa404fa77c007f07f - Init COMPLETE
  121. x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0xa404fa77c007f07f - Init COMPLETE
  122. [2024-09-25 15:07:42,852] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
  123. [2024-09-25 15:07:42,854] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
  124. [2024-09-25 15:07:42,854] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
  125. [2024-09-25 15:07:42,955] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = Prodigy
  126. [2024-09-25 15:07:42,955] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Prodigy type=<class 'prodigyopt.prodigy.Prodigy'>
  127. [2024-09-25 15:07:42,955] [WARNING] [engine.py:1232:_do_optimizer_sanity_check] **** You are using ZeRO with an untested optimizer, proceed with caution *****
  128. [2024-09-25 15:07:42,955] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 2 optimizer
  129. [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:148:__init__] Reduce bucket size 500000000
  130. [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:149:__init__] Allgather bucket size 500000000
  131. [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:150:__init__] CPU Offload: False
  132. [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:151:__init__] Round robin gradient partitioning: False
  133. [2024-09-25 15:07:49,842] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
  134. [2024-09-25 15:07:49,843] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 55.61 GB CA 55.63 GB Max_CA 56 GB
  135. [2024-09-25 15:07:49,843] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.74 GB, percent = 4.6%
  136. [2024-09-25 15:07:49,988] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
  137. [2024-09-25 15:07:49,989] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 66.7 GB CA 77.8 GB Max_CA 78 GB
  138. [2024-09-25 15:07:49,989] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.69 GB, percent = 4.6%
  139. [2024-09-25 15:07:49,989] [INFO] [stage_1_and_2.py:543:__init__] optimizer state initialized
  140. [2024-09-25 15:07:50,101] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
  141. [2024-09-25 15:07:50,102] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 44.53 GB CA 77.8 GB Max_CA 78 GB
  142. [2024-09-25 15:07:50,102] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.75 GB, percent = 4.6%
  143. [2024-09-25 15:07:50,108] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer
  144. [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
  145. [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
  146. [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[1.0], mom=[(0.9, 0.999)]
  147. [2024-09-25 15:07:50,111] [INFO] [config.py:999:print] DeepSpeedEngine configuration:
  148. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] activation_checkpointing_config {
  149. "partition_activations": false,
  150. "contiguous_memory_optimization": false,
  151. "cpu_checkpointing": false,
  152. "number_checkpoints": null,
  153. "synchronize_checkpoint_boundary": false,
  154. "profile": false
  155. }
  156. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
  157. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] amp_enabled .................. False
  158. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] amp_params ................... False
  159. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] autotuning_config ............ {
  160. "enabled": false,
  161. "start_step": null,
  162. "end_step": null,
  163. "metric_path": null,
  164. "arg_mappings": null,
  165. "metric": "throughput",
  166. "model_info": null,
  167. "results_dir": "autotuning_results",
  168. "exps_dir": "autotuning_exps",
  169. "overwrite": true,
  170. "fast": true,
  171. "start_profile_step": 3,
  172. "end_profile_step": 5,
  173. "tuner_type": "gridsearch",
  174. "tuner_early_stopping": 5,
  175. "tuner_num_trials": 50,
  176. "model_info_path": null,
  177. "mp_size": 1,
  178. "max_train_batch_size": null,
  179. "min_train_batch_size": 1,
  180. "max_train_micro_batch_size_per_gpu": 1.024000e+03,
  181. "min_train_micro_batch_size_per_gpu": 1,
  182. "num_tuning_micro_batch_sizes": 3
  183. }
  184. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] bfloat16_enabled ............. True
  185. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] bfloat16_immediate_grad_update False
  186. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_parallel_write_pipeline False
  187. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_tag_validation_enabled True
  188. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_tag_validation_fail False
  189. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f4b65602df0>
  190. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] communication_data_type ...... None
  191. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
  192. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] curriculum_enabled_legacy .... False
  193. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] curriculum_params_legacy ..... False
  194. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
  195. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] data_efficiency_enabled ...... False
  196. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dataloader_drop_last ......... False
  197. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] disable_allgather ............ False
  198. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dump_state ................... False
  199. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dynamic_loss_scale_args ...... None
  200. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_enabled ........... False
  201. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_gas_boundary_resolution 1
  202. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_layer_name ........ bert.encoder.layer
  203. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_layer_num ......... 0
  204. [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_max_iter .......... 100
  205. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_stability ......... 1e-06
  206. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_tol ............... 0.01
  207. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_verbose ........... False
  208. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] elasticity_enabled ........... False
  209. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] flops_profiler_config ........ {
  210. "enabled": false,
  211. "recompute_fwd_factor": 0.0,
  212. "profile_step": 1,
  213. "module_depth": -1,
  214. "top_modules": 1,
  215. "detailed": true,
  216. "output_file": null
  217. }
  218. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_auto_cast ............... None
  219. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_enabled ................. False
  220. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_master_weights_and_gradients False
  221. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] global_rank .................. 0
  222. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] grad_accum_dtype ............. None
  223. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_accumulation_steps .. 1
  224. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_clipping ............ 1.0
  225. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_predivide_factor .... 1.0
  226. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] graph_harvesting ............. False
  227. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
  228. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] initial_dynamic_scale ........ 1
  229. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] load_universal_checkpoint .... False
  230. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] loss_scale ................... 1.0
  231. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] memory_breakdown ............. False
  232. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] mics_hierarchial_params_gather False
  233. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] mics_shard_size .............. -1
  234. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
  235. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] nebula_config ................ {
  236. "enabled": false,
  237. "persistent_storage_path": null,
  238. "persistent_time_interval": 100,
  239. "num_of_version_in_retention": 2,
  240. "enable_nebula_load": true,
  241. "load_path": null
  242. }
  243. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_legacy_fusion ...... False
  244. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_name ............... None
  245. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_params ............. None
  246. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
  247. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pld_enabled .................. False
  248. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pld_params ................... False
  249. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] prescale_gradients ........... False
  250. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] scheduler_name ............... None
  251. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] scheduler_params ............. None
  252. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] seq_parallel_communication_data_type torch.float32
  253. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] sparse_attention ............. None
  254. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] sparse_gradients_enabled ..... False
  255. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] steps_per_print .............. inf
  256. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] timers_config ................ enabled=True synchronized=True
  257. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] train_batch_size ............. 2
  258. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] train_micro_batch_size_per_gpu 1
  259. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] use_data_before_expert_parallel_ False
  260. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] use_node_local_storage ....... False
  261. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] wall_clock_breakdown ......... False
  262. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] weight_quantization_config ... None
  263. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] world_size ................... 2
  264. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_allow_untested_optimizer True
  265. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
  266. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_enabled ................. True
  267. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_force_ds_cpu_optimizer .. True
  268. [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_optimization_stage ...... 2
  269. [2024-09-25 15:07:50,112] [INFO] [config.py:989:print_user_config] json = {
  270. "train_batch_size": 2,
  271. "train_micro_batch_size_per_gpu": 1,
  272. "gradient_accumulation_steps": 1,
  273. "zero_optimization": {
  274. "stage": 2,
  275. "offload_optimizer": {
  276. "device": "none",
  277. "nvme_path": null
  278. },
  279. "offload_param": {
  280. "device": "none",
  281. "nvme_path": null
  282. },
  283. "stage3_gather_16bit_weights_on_model_save": false
  284. },
  285. "gradient_clipping": 1.0,
  286. "steps_per_print": inf,
  287. "bf16": {
  288. "enabled": true
  289. },
  290. "fp16": {
  291. "enabled": false
  292. },
  293. "zero_allow_untested_optimizer": true
  294. }
  295. 09/25/2024 15:07:50 - INFO - __main__ - ***** Running training *****
  296. 09/25/2024 15:07:50 - INFO - __main__ - Num examples = 10
  297. 09/25/2024 15:07:50 - INFO - __main__ - Num batches each epoch = 5
  298. 09/25/2024 15:07:50 - INFO - __main__ - Num Epochs = 1
  299. 09/25/2024 15:07:50 - INFO - __main__ - Instantaneous batch size per device = 1
  300. 09/25/2024 15:07:50 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
  301. 09/25/2024 15:07:50 - INFO - __main__ - Gradient Accumulation steps = 4
  302. 09/25/2024 15:07:50 - INFO - __main__ - Total optimization steps = 2
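(Note on the totals printed just above, derived only from values in this log: total train batch size = 1 per-device batch x 2 processes x 4 gradient-accumulation steps = 8, and total optimization steps = ceil(5 batches per epoch / 4 accumulation steps) x 1 epoch = 2. The DeepSpeed engine config printed earlier reports gradient_accumulation_steps = 1 and train_batch_size = 2, which does not match the accumulation of 4 reported here; that mismatch is visible in the log itself and is separate from the crash below.)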
  303. Steps: 0%| | 0/2 [00:00<?, ?it/s]x2-h100:38416:39020 [0] NCCL INFO Using non-device net plugin version 0
  304. x2-h100:38416:39020 [0] NCCL INFO Using network Socket
  305. x2-h100:38417:39021 [1] NCCL INFO Using non-device net plugin version 0
  306. x2-h100:38417:39021 [1] NCCL INFO Using network Socket
  307. x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0x3cd46c61e3b20fdd - Init START
  308. x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0x3cd46c61e3b20fdd - Init START
  309. x2-h100:38417:39021 [1] NCCL INFO Setting affinity for GPU 1 to ffff,ffffff00,00000000
  310. x2-h100:38416:39020 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffffffff
  311. x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
  312. x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
  313. x2-h100:38417:39021 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
  314. x2-h100:38416:39020 [0] NCCL INFO Channel 00/04 : 0 1
  315. x2-h100:38417:39021 [1] NCCL INFO P2P Chunksize set to 131072
  316. x2-h100:38416:39020 [0] NCCL INFO Channel 01/04 : 0 1
  317. x2-h100:38416:39020 [0] NCCL INFO Channel 02/04 : 0 1
  318. x2-h100:38416:39020 [0] NCCL INFO Channel 03/04 : 0 1
  319. x2-h100:38416:39020 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
  320. x2-h100:38416:39020 [0] NCCL INFO P2P Chunksize set to 131072
  321. x2-h100:38417:39021 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
  322. x2-h100:38416:39020 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
  323. x2-h100:38417:39021 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
  324. x2-h100:38416:39020 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
  325. x2-h100:38417:39021 [1] NCCL INFO Channel 02 : 1[1] -> 0[0] via SHM/direct/direct
  326. x2-h100:38416:39020 [0] NCCL INFO Channel 02 : 0[0] -> 1[1] via SHM/direct/direct
  327. x2-h100:38417:39021 [1] NCCL INFO Channel 03 : 1[1] -> 0[0] via SHM/direct/direct
  328. x2-h100:38416:39020 [0] NCCL INFO Channel 03 : 0[0] -> 1[1] via SHM/direct/direct
  329. x2-h100:38416:39020 [0] NCCL INFO Connected all rings
  330. x2-h100:38417:39021 [1] NCCL INFO Connected all rings
  331. x2-h100:38416:39020 [0] NCCL INFO Connected all trees
  332. x2-h100:38417:39021 [1] NCCL INFO Connected all trees
  333. x2-h100:38417:39021 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
  334. x2-h100:38417:39021 [1] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
  335. x2-h100:38416:39020 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
  336. x2-h100:38416:39020 [0] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
  337. x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0x3cd46c61e3b20fdd - Init COMPLETE
  338. x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0x3cd46c61e3b20fdd - Init COMPLETE
  339. [rank1]: Traceback (most recent call last):
  340. [rank1]: File "examples/dreambooth/train_dreambooth_flux.py", line 1795, in <module>
  341. [rank1]: main(args)
  342. [rank1]: File "examples/dreambooth/train_dreambooth_flux.py", line 1585, in main
  343. [rank1]: if transformer.config.guidance_embeds:
  344. [rank1]: AttributeError: 'dict' object has no attribute 'guidance_embeds'
  345. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  346. To disable this warning, you can either:
  347. - Avoid using `tokenizers` before the fork if possible
  348. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  349. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  350. To disable this warning, you can either:
  351. - Avoid using `tokenizers` before the fork if possible
  352. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  353. [rank0]: Traceback (most recent call last):
  354. [rank0]: File "examples/dreambooth/train_dreambooth_flux.py", line 1795, in <module>
  355. [rank0]: main(args)
  356. [rank0]: File "examples/dreambooth/train_dreambooth_flux.py", line 1585, in main
  357. [rank0]: if transformer.config.guidance_embeds:
  358. [rank0]: AttributeError: 'dict' object has no attribute 'guidance_embeds'
  359. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  360. To disable this warning, you can either:
  361. - Avoid using `tokenizers` before the fork if possible
  362. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  363. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  364. To disable this warning, you can either:
  365. - Avoid using `tokenizers` before the fork if possible
  366. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  367. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  368. To disable this warning, you can either:
  369. - Avoid using `tokenizers` before the fork if possible
  370. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  371. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  372. To disable this warning, you can either:
  373. - Avoid using `tokenizers` before the fork if possible
  374. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  375. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  376. To disable this warning, you can either:
  377. - Avoid using `tokenizers` before the fork if possible
  378. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
  379. huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
  380. To disable this warning, you can either:
  381. - Avoid using `tokenizers` before the fork if possible
  382. - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
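(The repeated huggingface/tokenizers fork warnings above are noisy but not fatal; the library simply disables its own parallelism after the fork. A minimal way to silence them, assuming you can edit the top of the training script, is to set the environment variable the warning mentions before any tokenizer is used:

    import os
    os.environ["TOKENIZERS_PARALLELISM"] = "false"  # set before tokenizers are created or the process forks

Exporting TOKENIZERS_PARALLELISM=false in the shell that runs accelerate launch works as well.)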
  383. Steps: 0%| | 0/2 [00:01<?, ?it/s]
  384. W0925 15:07:51.992582 140577774266176 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 38416 closing signal SIGTERM
  385. E0925 15:07:52.257055 140577774266176 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 1 (pid: 38417) of binary: /usr/bin/python
  386. Traceback (most recent call last):
  387. File "/usr/local/bin/accelerate", line 8, in <module>
  388. sys.exit(main())
  389. File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
  390. args.func(args)
  391. File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 1159, in launch_command
  392. deepspeed_launcher(args)
  393. File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 852, in deepspeed_launcher
  394. distrib_run.run(args)
  395. File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 892, in run
  396. elastic_launch(
  397. File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 133, in __call__
  398. return launch_agent(self._config, self._entrypoint, list(args))
  399. File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
  400. raise ChildFailedError(
  401. torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
  402. ============================================================
  403. examples/dreambooth/train_dreambooth_flux.py FAILED
  404. ------------------------------------------------------------
  405. Failures:
  406. <NO_OTHER_FAILURES>
  407. ------------------------------------------------------------
  408. Root Cause (first observed failure):
  409. [0]:
  410. time : 2024-09-25_15:07:51
  411. host : x2-h100.internal.cloudapp.net
  412. rank : 1 (local_rank: 1)
  413. exitcode : 1 (pid: 38417)
  414. error_file: <N/A>
  415. traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
  416. ============================================================
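(Hedged diagnosis of the failure above, not a confirmed fix: the traceback shows that transformer.config is a plain dict at examples/dreambooth/train_dreambooth_flux.py line 1585. One plausible explanation is that the Flux transformer has already been wrapped by accelerator.prepare into a DeepSpeed engine, whose own config attribute (the DeepSpeed JSON dict printed earlier in this log) shadows the diffusers model config, so .guidance_embeds is looked up on the wrong object. A minimal sketch of a workaround, assuming accelerator and transformer are the objects used in the script:

    # Unwrap the DeepSpeed/DDP wrapper before reading model-level config flags.
    base_transformer = accelerator.unwrap_model(transformer)
    if base_transformer.config.guidance_embeds:
        ...  # same branch as in the original script

Reading the flag into a local variable before calling accelerator.prepare on the transformer would avoid the problem as well.)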