I noticed that the PyTorch DDP tutorial (https://pytorch.org/tutorials/intermediate/ddp_tutorial.html) uses the "Save and Load Checkpoints" pattern to synchronize the models across different processes.
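
To be concrete, I mean roughly the following pattern (a simplified sketch of the tutorial's checkpoint example; the function name and the process-group setup are omitted or paraphrased, so the details may differ from the actual tutorial code):

```python
import tempfile
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def checkpoint_sync(rank: int, world_size: int):
    # Assumes dist.init_process_group(...) has already been called.
    model = nn.Linear(10, 10).to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    ckpt_path = tempfile.gettempdir() + "/model.checkpoint"
    if rank == 0:
        # Only rank 0 writes the checkpoint to disk.
        torch.save(ddp_model.state_dict(), ckpt_path)

    # Barrier ensures rank 0 has finished saving before other ranks load.
    dist.barrier()

    # Each rank loads the same checkpoint, remapping tensors to its own GPU.
    map_location = {"cuda:0": f"cuda:{rank}"}
    ddp_model.load_state_dict(torch.load(ckpt_path, map_location=map_location))
```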

So I would like to know whether there are any implicit synchronization mechanisms in your distributed_tutorial code.