Determine main process in distributed training #11707
Unanswered
PgLoLo asked this question in DDP / multi-GPU / multi-node
Replies: 1 comment · 4 replies

PgLoLo:
Hello! During distributed training in DDP mode, how can I determine whether the currently executing code is running in the main process rather than in one of the spawned processes? I don't want to perform my custom initialization more than once.
@PgLoLo You can use the trainer's `is_global_zero` property:

```python
from pytorch_lightning import LightningModule

class YourLightningModule(LightningModule):
    def do_something(self):
        if self.trainer.is_global_zero:
            ...
```

https://pytorch-lightning.readthedocs.io/en/1.5.9/common/trainer.html#is-global-zero
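For reference, here is a minimal sketch of how that guard might fit into a hook such as `setup`, which runs on every process under DDP; the `run_custom_initialization` helper is a hypothetical stand-in for your own code:

```python
from pytorch_lightning import LightningModule


class YourLightningModule(LightningModule):
    def setup(self, stage=None):
        # `setup` is called on every process in DDP; the guard ensures the
        # one-time initialization runs only on the main (global rank 0) process.
        if self.trainer.is_global_zero:
            self.run_custom_initialization()  # hypothetical helper

    def run_custom_initialization(self):
        # One-time work, e.g. creating directories or preparing assets.
        ...
```

Lightning also provides a `rank_zero_only` decorator (`pytorch_lightning.utilities.rank_zero_only`) that applies the same guard to a free-standing function.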