ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable MASTER_ADDR expected, but not set

I am not able to initialize the process group in PyTorch for a BERT model. I tried to initialize it with the following code:

import torch
import datetime

torch.distributed.init_process_group(
    backend='nccl',
    init_method='env://',
    timeout=datetime.timedelta(0, 1800),
    world_size=0,
    rank=0,
    store=None,
    group_name=''
)

and then tried to call the get_world_size() function:

num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()

full code:

train_examples = None
num_train_optimization_steps = None
if do_train:
    train_examples = processor.get_train_examples(data_dir)
    num_train_optimization_steps = int(
        len(train_examples) / train_batch_size / gradient_accumulation_steps) * num_train_epochs
    if local_rank != -1:
        import datetime
        torch.distributed.init_process_group(
            backend='nccl',
            init_method='env://',
            timeout=datetime.timedelta(0, 1800),
            world_size=0,
            rank=0,
            store=None,
            group_name=''
        )
        num_train_optimization_steps = num_train_optimization_steps // torch.distributed.get_world_size()
        print(num_train_optimization_steps)
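The error message itself points at the cause: with init_method='env://', the rendezvous reads MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE from the environment, and none of them are set when the script is started with a plain python command. (Also note that world_size=0 is not a meaningful group size; leaving world_size and rank at their defaults lets init_process_group read them from the environment.) A minimal sketch of setting the variables by hand for a single-process debugging run; the address, port, and the choice of the CPU-friendly gloo backend are assumptions for illustration:

```python
import os

# env:// rendezvous reads these four variables; a distributed launcher
# normally sets them. For a single-process run, set them yourself
# (the address/port values here are illustrative assumptions):
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("WORLD_SIZE", "1")
os.environ.setdefault("RANK", "0")

# With the variables in place, the env:// rendezvous can resolve, e.g.:
# import torch
# torch.distributed.init_process_group(backend="gloo", init_method="env://")
# print(torch.distributed.get_world_size())  # 1
```

In a real multi-GPU run you would not set these by hand; the launcher (see the solutions below) exports them for every worker process.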


Solution 1:[1]

I solved the problem by referring to https://github.com/NVIDIA/apex/issues/99. Specifically, run the script through the distributed launcher, which sets MASTER_ADDR and the other rendezvous environment variables for you:

python -m torch.distributed.launch xxx.py

Solution 2:[2]

Just an update: instead of running

$ python -m torch.distributed.launch --use_env train_script.py

you now only need to run

$ torchrun train_script.py

as indicated here.
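For completeness, a sketch of what the two launchers look like for a multi-GPU run; the script name and GPU count are placeholders:

```shell
# Older style: the launch helper starts one process per GPU and exports
# MASTER_ADDR, MASTER_PORT, RANK, LOCAL_RANK and WORLD_SIZE for each.
python -m torch.distributed.launch --nproc_per_node=4 train_script.py

# Current style: torchrun does the same with a shorter command and
# always passes worker info via environment variables (as if --use_env
# were given).
torchrun --nproc_per_node=4 train_script.py
```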

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution 1:
Solution 2: mah_mpc