- [2024-09-25 15:07:08,094] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
- df: /root/.triton/autotune: No such file or directory
- [WARNING] On Ampere and higher architectures please use CUDA 11+
(this warning is printed 6 times in a row at this point in the log)
- W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779]
- W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] *****************************************
- W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
- W0925 15:07:09.523465 140577774266176 torch/distributed/run.py:779] *****************************************
- [2024-09-25 15:07:11,803] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
- [2024-09-25 15:07:12,017] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
- [WARNING] On Ampere and higher architectures please use CUDA 11+
(this warning is printed 12 times in a row at this point in the log)
- [2024-09-25 15:07:12,682] [INFO] [comm.py:652:init_distributed] cdb=None
- [W925 15:07:12.581853686 Utils.hpp:164] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
- [W925 15:07:12.581877026 Utils.hpp:135] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function operator())
- [2024-09-25 15:07:12,905] [INFO] [comm.py:652:init_distributed] cdb=None
- [2024-09-25 15:07:12,905] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
- [W925 15:07:12.805362643 Utils.hpp:164] Warning: Environment variable NCCL_BLOCKING_WAIT is deprecated; use TORCH_NCCL_BLOCKING_WAIT instead (function operator())
- [W925 15:07:12.805398363 Utils.hpp:135] Warning: Environment variable NCCL_ASYNC_ERROR_HANDLING is deprecated; use TORCH_NCCL_ASYNC_ERROR_HANDLING instead (function operator())
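The two [W ... Utils.hpp] warnings above are deprecation notices only: recent PyTorch releases renamed the NCCL watchdog variables. If these variables are being set by hand, a minimal sketch of the rename looks like the following (the fallback value "1" is purely illustrative, not taken from this run):

    import os

    # Move any legacy NCCL watchdog settings over to the names the warnings suggest.
    os.environ["TORCH_NCCL_BLOCKING_WAIT"] = os.environ.pop("NCCL_BLOCKING_WAIT", "1")
    os.environ["TORCH_NCCL_ASYNC_ERROR_HANDLING"] = os.environ.pop("NCCL_ASYNC_ERROR_HANDLING", "1")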
- 09/25/2024 15:07:13 - INFO - __main__ - Distributed environment: DEEPSPEED Backend: nccl
- Num processes: 2
- Process index: 0
- Local process index: 0
- Device: cuda:0
- Mixed precision type: bf16
- ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'gradient_clipping': 'auto', 'steps_per_print': inf, 'bf16': {'enabled': True}, 'fp16': {'enabled': False}}
- You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
- 09/25/2024 15:07:13 - INFO - __main__ - Distributed environment: DEEPSPEED Backend: nccl
- Num processes: 2
- Process index: 1
- Local process index: 1
- Device: cuda:1
- Mixed precision type: bf16
- ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'gradient_clipping': 'auto', 'steps_per_print': inf, 'bf16': {'enabled': True}, 'fp16': {'enabled': False}}
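The ds_config echoed by both ranks above (ZeRO stage 2, no optimizer or parameter offload, bf16 on, fp16 off, "auto" batch sizes) is the configuration accelerate assembles from its DeepSpeed plugin. A minimal sketch of an equivalent programmatic setup, assuming the standard accelerate API rather than the script's actual launch path:

    from accelerate import Accelerator
    from accelerate.utils import DeepSpeedPlugin

    # Mirrors the ds_config printed above; the batch sizes stay "auto" and are
    # resolved later from the training arguments.
    deepspeed_plugin = DeepSpeedPlugin(
        zero_stage=2,
        gradient_accumulation_steps=1,
        offload_optimizer_device="none",
        offload_param_device="none",
        zero3_save_16bit_model=False,
    )
    accelerator = Accelerator(mixed_precision="bf16", deepspeed_plugin=deepspeed_plugin)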
- You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
- You are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
- Downloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 14614.30it/s]
- Downloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 13911.46it/s]
- Loading checkpoint shards: 100%|██████████████████| 2/2 [00:02<00:00, 1.34s/it]
- Loading checkpoint shards: 100%|██████████████████| 2/2 [00:02<00:00, 1.35s/it]
- Fetching 3 files: 100%|█████████████████████████| 3/3 [00:00<00:00, 7021.71it/s]
- {'axes_dims_rope'} was not found in config. Values will be initialized to default values.
- Fetching 3 files: 100%|████████████████████████| 3/3 [00:00<00:00, 12958.71it/s]
- Using decoupled weight decay
- Using decoupled weight decay
- [2024-09-25 15:07:28,735] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
- [2024-09-25 15:07:29,079] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.15.1, git-hash=unknown, git-branch=unknown
- [2024-09-25 15:07:29,079] [INFO] [config.py:733:__init__] Config mesh_device None world_size = 2
- x2-h100:38416:38416 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
- x2-h100:38416:38416 [0] NCCL INFO Bootstrap : Using eth0:10.0.0.16<0>
- x2-h100:38416:38416 [0] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
- x2-h100:38416:38416 [0] NCCL INFO cudaDriverVersion 12020
- NCCL version 2.20.5+cuda12.4
- x2-h100:38417:38417 [1] NCCL INFO cudaDriverVersion 12020
- x2-h100:38417:38417 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
- x2-h100:38417:38417 [1] NCCL INFO Bootstrap : Using eth0:10.0.0.16<0>
- x2-h100:38417:38417 [1] NCCL INFO NET/Plugin : dlerror=libnccl-net.so: cannot open shared object file: No such file or directory No plugin found (libnccl-net.so), using internal implementation
- x2-h100:38416:38882 [0] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
- x2-h100:38416:38882 [0] NCCL INFO Failed to open libibverbs.so[.1]
- x2-h100:38416:38882 [0] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
- x2-h100:38416:38882 [0] NCCL INFO NET/Socket : Using [0]eth0:10.0.0.16<0>
- x2-h100:38416:38882 [0] NCCL INFO Using non-device net plugin version 0
- x2-h100:38416:38882 [0] NCCL INFO Using network Socket
- x2-h100:38417:38883 [1] NCCL INFO NCCL_IB_DISABLE set by environment to 0.
- x2-h100:38417:38883 [1] NCCL INFO Failed to open libibverbs.so[.1]
- x2-h100:38417:38883 [1] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^docker0,lo
- x2-h100:38417:38883 [1] NCCL INFO NET/Socket : Using [0]eth0:10.0.0.16<0>
- x2-h100:38417:38883 [1] NCCL INFO Using non-device net plugin version 0
- x2-h100:38417:38883 [1] NCCL INFO Using network Socket
- x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0xa404fa77c007f07f - Init START
- x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0xa404fa77c007f07f - Init START
- x2-h100:38417:38883 [1] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
- x2-h100:38417:38883 [1] NCCL INFO Setting affinity for GPU 1 to ffff,ffffff00,00000000
- x2-h100:38416:38882 [0] NCCL INFO NCCL_P2P_LEVEL set by environment to LOC
- x2-h100:38416:38882 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffffffff
- x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
- x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
- x2-h100:38416:38882 [0] NCCL INFO Channel 00/04 : 0 1
- x2-h100:38416:38882 [0] NCCL INFO Channel 01/04 : 0 1
- x2-h100:38416:38882 [0] NCCL INFO Channel 02/04 : 0 1
- x2-h100:38416:38882 [0] NCCL INFO Channel 03/04 : 0 1
- x2-h100:38416:38882 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
- x2-h100:38417:38883 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
- x2-h100:38416:38882 [0] NCCL INFO P2P Chunksize set to 131072
- x2-h100:38417:38883 [1] NCCL INFO P2P Chunksize set to 131072
- x2-h100:38417:38883 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38417:38883 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38417:38883 [1] NCCL INFO Channel 02 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38417:38883 [1] NCCL INFO Channel 03 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38416:38882 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38416:38882 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38416:38882 [0] NCCL INFO Channel 02 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38416:38882 [0] NCCL INFO Channel 03 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38416:38882 [0] NCCL INFO Connected all rings
- x2-h100:38416:38882 [0] NCCL INFO Connected all trees
- x2-h100:38417:38883 [1] NCCL INFO Connected all rings
- x2-h100:38417:38883 [1] NCCL INFO Connected all trees
- x2-h100:38417:38883 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
- x2-h100:38416:38882 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
- x2-h100:38417:38883 [1] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
- x2-h100:38416:38882 [0] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
- x2-h100:38416:38882 [0] NCCL INFO comm 0x2864b6c0 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0xa404fa77c007f07f - Init COMPLETE
- x2-h100:38417:38883 [1] NCCL INFO comm 0x27819340 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0xa404fa77c007f07f - Init COMPLETE
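The NCCL initialization above is shaped by a few variables the log reports as "set by environment". A sketch reproducing them (values copied from those lines; NCCL_P2P_LEVEL=LOC disables GPU peer-to-peer, which is consistent with the channels above being routed via SHM/direct):

    import os

    os.environ["NCCL_SOCKET_IFNAME"] = "^docker0,lo"  # exclude docker0 and loopback for bootstrap
    os.environ["NCCL_IB_DISABLE"] = "0"               # InfiniBand allowed, but libibverbs.so is not found
    os.environ["NCCL_P2P_LEVEL"] = "LOC"              # never use GPU P2P; intra-node traffic falls back to SHM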
- [2024-09-25 15:07:42,852] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
- [2024-09-25 15:07:42,854] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
- [2024-09-25 15:07:42,854] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer
- [2024-09-25 15:07:42,955] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = Prodigy
- [2024-09-25 15:07:42,955] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=Prodigy type=<class 'prodigyopt.prodigy.Prodigy'>
- [2024-09-25 15:07:42,955] [WARNING] [engine.py:1232:_do_optimizer_sanity_check] **** You are using ZeRO with an untested optimizer, proceed with caution *****
- [2024-09-25 15:07:42,955] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 2 optimizer
- [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:148:__init__] Reduce bucket size 500000000
- [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:149:__init__] Allgather bucket size 500000000
- [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:150:__init__] CPU Offload: False
- [2024-09-25 15:07:42,955] [INFO] [stage_1_and_2.py:151:__init__] Round robin gradient partitioning: False
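Prodigy is not one of the optimizers DeepSpeed has validated with ZeRO, hence the "untested optimizer" warning above; the run proceeds only because zero_allow_untested_optimizer is true in the user config dumped further below. A minimal sketch of how the client-side optimizer was presumably constructed, consistent with the logged lr=1.0, betas (0.9, 0.999) and the earlier "Using decoupled weight decay" messages (the weight_decay value and the `transformer` handle are illustrative, not taken from this run):

    from prodigyopt import Prodigy

    optimizer = Prodigy(
        transformer.parameters(),  # stand-in for the parameters actually being optimized
        lr=1.0,                    # Prodigy is meant to run at lr=1.0 and adapt its own step size
        betas=(0.9, 0.999),        # matches mom=[(0.9, 0.999)] reported by DeepSpeed below
        weight_decay=1e-2,         # illustrative value
        decouple=True,             # decoupled (AdamW-style) weight decay, as logged
    )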
- [2024-09-25 15:07:49,842] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states
- [2024-09-25 15:07:49,843] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 55.61 GB CA 55.63 GB Max_CA 56 GB
- [2024-09-25 15:07:49,843] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.74 GB, percent = 4.6%
- [2024-09-25 15:07:49,988] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states
- [2024-09-25 15:07:49,989] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 66.7 GB CA 77.8 GB Max_CA 78 GB
- [2024-09-25 15:07:49,989] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.69 GB, percent = 4.6%
- [2024-09-25 15:07:49,989] [INFO] [stage_1_and_2.py:543:__init__] optimizer state initialized
- [2024-09-25 15:07:50,101] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer
- [2024-09-25 15:07:50,102] [INFO] [utils.py:782:see_memory_usage] MA 44.53 GB Max_MA 44.53 GB CA 77.8 GB Max_CA 78 GB
- [2024-09-25 15:07:50,102] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 28.75 GB, percent = 4.6%
- [2024-09-25 15:07:50,108] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer
- [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None
- [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = None
- [2024-09-25 15:07:50,109] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[1.0], mom=[(0.9, 0.999)]
- [2024-09-25 15:07:50,111] [INFO] [config.py:999:print] DeepSpeedEngine configuration:
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] activation_checkpointing_config {
- "partition_activations": false,
- "contiguous_memory_optimization": false,
- "cpu_checkpointing": false,
- "number_checkpoints": null,
- "synchronize_checkpoint_boundary": false,
- "profile": false
- }
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False}
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] amp_enabled .................. False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] amp_params ................... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] autotuning_config ............ {
- "enabled": false,
- "start_step": null,
- "end_step": null,
- "metric_path": null,
- "arg_mappings": null,
- "metric": "throughput",
- "model_info": null,
- "results_dir": "autotuning_results",
- "exps_dir": "autotuning_exps",
- "overwrite": true,
- "fast": true,
- "start_profile_step": 3,
- "end_profile_step": 5,
- "tuner_type": "gridsearch",
- "tuner_early_stopping": 5,
- "tuner_num_trials": 50,
- "model_info_path": null,
- "mp_size": 1,
- "max_train_batch_size": null,
- "min_train_batch_size": 1,
- "max_train_micro_batch_size_per_gpu": 1.024000e+03,
- "min_train_micro_batch_size_per_gpu": 1,
- "num_tuning_micro_batch_sizes": 3
- }
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] bfloat16_enabled ............. True
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] bfloat16_immediate_grad_update False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_parallel_write_pipeline False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_tag_validation_enabled True
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] checkpoint_tag_validation_fail False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f4b65602df0>
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] communication_data_type ...... None
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] curriculum_enabled_legacy .... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] curriculum_params_legacy ..... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] data_efficiency_enabled ...... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dataloader_drop_last ......... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] disable_allgather ............ False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dump_state ................... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] dynamic_loss_scale_args ...... None
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_enabled ........... False
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_gas_boundary_resolution 1
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_layer_name ........ bert.encoder.layer
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_layer_num ......... 0
- [2024-09-25 15:07:50,111] [INFO] [config.py:1003:print] eigenvalue_max_iter .......... 100
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_stability ......... 1e-06
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_tol ............... 0.01
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] eigenvalue_verbose ........... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] elasticity_enabled ........... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] flops_profiler_config ........ {
- "enabled": false,
- "recompute_fwd_factor": 0.0,
- "profile_step": 1,
- "module_depth": -1,
- "top_modules": 1,
- "detailed": true,
- "output_file": null
- }
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_auto_cast ............... None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_enabled ................. False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] fp16_master_weights_and_gradients False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] global_rank .................. 0
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] grad_accum_dtype ............. None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_accumulation_steps .. 1
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_clipping ............ 1.0
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] gradient_predivide_factor .... 1.0
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] graph_harvesting ............. False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] initial_dynamic_scale ........ 1
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] load_universal_checkpoint .... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] loss_scale ................... 1.0
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] memory_breakdown ............. False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] mics_hierarchial_params_gather False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] mics_shard_size .............. -1
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName')
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] nebula_config ................ {
- "enabled": false,
- "persistent_storage_path": null,
- "persistent_time_interval": 100,
- "num_of_version_in_retention": 2,
- "enable_nebula_load": true,
- "load_path": null
- }
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_legacy_fusion ...... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_name ............... None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] optimizer_params ............. None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True}
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pld_enabled .................. False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] pld_params ................... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] prescale_gradients ........... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] scheduler_name ............... None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] scheduler_params ............. None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] seq_parallel_communication_data_type torch.float32
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] sparse_attention ............. None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] sparse_gradients_enabled ..... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] steps_per_print .............. inf
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] timers_config ................ enabled=True synchronized=True
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] train_batch_size ............. 2
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] train_micro_batch_size_per_gpu 1
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] use_data_before_expert_parallel_ False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] use_node_local_storage ....... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] wall_clock_breakdown ......... False
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] weight_quantization_config ... None
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] world_size ................... 2
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_allow_untested_optimizer True
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_config .................. stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_enabled ................. True
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_force_ds_cpu_optimizer .. True
- [2024-09-25 15:07:50,112] [INFO] [config.py:1003:print] zero_optimization_stage ...... 2
- [2024-09-25 15:07:50,112] [INFO] [config.py:989:print_user_config] json = {
- "train_batch_size": 2,
- "train_micro_batch_size_per_gpu": 1,
- "gradient_accumulation_steps": 1,
- "zero_optimization": {
- "stage": 2,
- "offload_optimizer": {
- "device": "none",
- "nvme_path": null
- },
- "offload_param": {
- "device": "none",
- "nvme_path": null
- },
- "stage3_gather_16bit_weights_on_model_save": false
- },
- "gradient_clipping": 1.0,
- "steps_per_print": inf,
- "bf16": {
- "enabled": true
- },
- "fp16": {
- "enabled": false
- },
- "zero_allow_untested_optimizer": true
- }
- 09/25/2024 15:07:50 - INFO - __main__ - ***** Running training *****
- 09/25/2024 15:07:50 - INFO - __main__ - Num examples = 10
- 09/25/2024 15:07:50 - INFO - __main__ - Num batches each epoch = 5
- 09/25/2024 15:07:50 - INFO - __main__ - Num Epochs = 1
- 09/25/2024 15:07:50 - INFO - __main__ - Instantaneous batch size per device = 1
- 09/25/2024 15:07:50 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8
- 09/25/2024 15:07:50 - INFO - __main__ - Gradient Accumulation steps = 4
- 09/25/2024 15:07:50 - INFO - __main__ - Total optimization steps = 2
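For reference, the reported total follows from
    total train batch size = micro-batch per device × num processes × gradient accumulation steps = 1 × 2 × 4 = 8.
Note that the DeepSpeed engine printout above reports train_batch_size = 2 with gradient_accumulation_steps = 1, so the factor-of-4 accumulation shown here is not reflected in the engine's own configuration; the log does not reconcile the two figures.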
- Steps:   0%|          | 0/2 [00:00<?, ?it/s]
- x2-h100:38416:39020 [0] NCCL INFO Using non-device net plugin version 0
- x2-h100:38416:39020 [0] NCCL INFO Using network Socket
- x2-h100:38417:39021 [1] NCCL INFO Using non-device net plugin version 0
- x2-h100:38417:39021 [1] NCCL INFO Using network Socket
- x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0x3cd46c61e3b20fdd - Init START
- x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0x3cd46c61e3b20fdd - Init START
- x2-h100:38417:39021 [1] NCCL INFO Setting affinity for GPU 1 to ffff,ffffff00,00000000
- x2-h100:38416:39020 [0] NCCL INFO Setting affinity for GPU 0 to ff,ffffffff
- x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nRanks 2 nNodes 1 localRanks 2 localRank 1 MNNVL 0
- x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nRanks 2 nNodes 1 localRanks 2 localRank 0 MNNVL 0
- x2-h100:38417:39021 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
- x2-h100:38416:39020 [0] NCCL INFO Channel 00/04 : 0 1
- x2-h100:38417:39021 [1] NCCL INFO P2P Chunksize set to 131072
- x2-h100:38416:39020 [0] NCCL INFO Channel 01/04 : 0 1
- x2-h100:38416:39020 [0] NCCL INFO Channel 02/04 : 0 1
- x2-h100:38416:39020 [0] NCCL INFO Channel 03/04 : 0 1
- x2-h100:38416:39020 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
- x2-h100:38416:39020 [0] NCCL INFO P2P Chunksize set to 131072
- x2-h100:38417:39021 [1] NCCL INFO Channel 00 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38416:39020 [0] NCCL INFO Channel 00 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38417:39021 [1] NCCL INFO Channel 01 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38416:39020 [0] NCCL INFO Channel 01 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38417:39021 [1] NCCL INFO Channel 02 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38416:39020 [0] NCCL INFO Channel 02 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38417:39021 [1] NCCL INFO Channel 03 : 1[1] -> 0[0] via SHM/direct/direct
- x2-h100:38416:39020 [0] NCCL INFO Channel 03 : 0[0] -> 1[1] via SHM/direct/direct
- x2-h100:38416:39020 [0] NCCL INFO Connected all rings
- x2-h100:38417:39021 [1] NCCL INFO Connected all rings
- x2-h100:38416:39020 [0] NCCL INFO Connected all trees
- x2-h100:38417:39021 [1] NCCL INFO Connected all trees
- x2-h100:38417:39021 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
- x2-h100:38417:39021 [1] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
- x2-h100:38416:39020 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
- x2-h100:38416:39020 [0] NCCL INFO 4 coll channels, 0 collnet channels, 0 nvls channels, 4 p2p channels, 2 p2p channels per peer
- x2-h100:38416:39020 [0] NCCL INFO comm 0x2839af40 rank 0 nranks 2 cudaDev 0 nvmlDev 0 busId 100000 commId 0x3cd46c61e3b20fdd - Init COMPLETE
- x2-h100:38417:39021 [1] NCCL INFO comm 0x27531350 rank 1 nranks 2 cudaDev 1 nvmlDev 1 busId 200000 commId 0x3cd46c61e3b20fdd - Init COMPLETE
- [rank1]: Traceback (most recent call last):
- [rank1]: File "examples/dreambooth/train_dreambooth_flux.py", line 1795, in <module>
- [rank1]: main(args)
- [rank1]: File "examples/dreambooth/train_dreambooth_flux.py", line 1585, in main
- [rank1]: if transformer.config.guidance_embeds:
- [rank1]: AttributeError: 'dict' object has no attribute 'guidance_embeds'
- huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
- To disable this warning, you can either:
- - Avoid using `tokenizers` before the fork if possible
- - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
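The tokenizers warning is benign here and states its own remedy; a one-line sketch, to be set before any tokenizer is used or worker processes are forked:

    import os
    os.environ["TOKENIZERS_PARALLELISM"] = "false"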
- [rank0]: Traceback (most recent call last):
- [rank0]: File "examples/dreambooth/train_dreambooth_flux.py", line 1795, in <module>
- [rank0]: main(args)
- [rank0]: File "examples/dreambooth/train_dreambooth_flux.py", line 1585, in main
- [rank0]: if transformer.config.guidance_embeds:
- [rank0]: AttributeError: 'dict' object has no attribute 'guidance_embeds'
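This AttributeError, raised identically on both ranks, is what actually kills the run: after accelerator.prepare the transformer is wrapped in a DeepSpeedEngine, whose own `config` attribute holds the DeepSpeed config dict, so `transformer.config.guidance_embeds` at line 1585 hits that dict instead of the diffusers model config. A hedged sketch of a local workaround (assuming accelerate's unwrap_model helper; not an official patch to the script):

    # At the failing check in train_dreambooth_flux.py, read the config from the
    # unwrapped model so the DeepSpeed wrapper's dict-valued `config` is bypassed.
    unwrapped = accelerator.unwrap_model(transformer)
    if unwrapped.config.guidance_embeds:
        ...  # original guidance-conditioning branch, unchanged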
(the huggingface/tokenizers fork warning above is printed 8 times in total around the two tracebacks)
- Steps: 0%| | 0/2 [00:01<?, ?it/s]
- W0925 15:07:51.992582 140577774266176 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 38416 closing signal SIGTERM
- E0925 15:07:52.257055 140577774266176 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 1 (pid: 38417) of binary: /usr/bin/python
- Traceback (most recent call last):
- File "/usr/local/bin/accelerate", line 8, in <module>
- sys.exit(main())
- File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
- args.func(args)
- File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 1159, in launch_command
- deepspeed_launcher(args)
- File "/usr/local/lib/python3.8/dist-packages/accelerate/commands/launch.py", line 852, in deepspeed_launcher
- distrib_run.run(args)
- File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 892, in run
- elastic_launch(
- File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 133, in __call__
- return launch_agent(self._config, self._entrypoint, list(args))
- File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
- raise ChildFailedError(
- torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
- ============================================================
- examples/dreambooth/train_dreambooth_flux.py FAILED
- ------------------------------------------------------------
- Failures:
- <NO_OTHER_FAILURES>
- ------------------------------------------------------------
- Root Cause (first observed failure):
- [0]:
- time : 2024-09-25_15:07:51
- host : x2-h100.internal.cloudapp.net
- rank : 1 (local_rank: 1)
- exitcode : 1 (pid: 38417)
- error_file: <N/A>
- traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
- ============================================================