This implementation is based on PyTorch 1.5.0 and Python 3.8.
It enables running simulated distributed optimization with a master node and an arbitrary number of workers, built on top of the PyTorch SGD optimizer with gradient compression. Communication can be compressed on both the worker and master side, and error feedback is supported; a rough sketch of this mechanism is given below.
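As a rough illustration of gradient compression with error feedback, the sketch below keeps a local error memory of whatever the compressor drops and re-injects it into the next round. The top-k compressor and the `topk_compress` / `worker_step` names are illustrative assumptions for this sketch, not this repository's actual API.

```python
import torch

def topk_compress(tensor, k_ratio=0.01):
    """Illustrative compressor: keep only the k largest-magnitude entries."""
    flat = tensor.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(tensor)

def worker_step(grad, error_memory, k_ratio=0.01):
    """Compress the error-corrected gradient and update the local error memory."""
    corrected = grad + error_memory       # re-inject the residual from previous rounds
    message = topk_compress(corrected, k_ratio)
    new_memory = corrected - message      # store what was not transmitted
    return message, new_memory
```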
This is a fork of https://github.com/SamuelHorvath/Compressed_SGD_PyTorch
To install the requirements:

```bash
pip install -r requirements.txt
```

To run our code, see the example notebook.
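For orientation, the snippet below simulates a single compressed update on a toy model, reusing the hypothetical `worker_step` helper sketched above; it only illustrates the overall workflow (compress on the worker, apply on the master) and is not the notebook's actual interface.

```python
# Illustrative only: one simulated round with a single "worker" on a toy model.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(32, 10), torch.randn(32, 1)

# One error-memory buffer per parameter (per worker in a real run).
error_mem = [torch.zeros_like(p) for p in model.parameters()]

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

with torch.no_grad():
    for p, mem in zip(model.parameters(), error_mem):
        msg, new_mem = worker_step(p.grad, mem, k_ratio=0.5)
        mem.copy_(new_mem)
        p.grad.copy_(msg)  # in a real run the master would average the workers' messages
opt.step()
opt.zero_grad()
```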
If you are interested in theoretical results, you may check the keynote files in the theory folder.