Releases · MLOPTPSU/FedTorch
No TensorFlow Federated
- In this release, the dependency on TensorFlow Federated for accessing its datasets is dropped 🎉. The datasets are now downloaded directly from the provided URLs and processed locally.
- Also, the EMNIST dataset with its full 62 classes is now supported across all modules and available for training.
Initial Public Release
This is the first public release of FedTorch. This repository implements various distributed and federated learning schemes for training machine learning models. It uses the PyTorch Distributed API to run on multiple nodes and/or processes in parallel. Some of the main algorithms in this package are:
- Distributed Sync-SGD
- Distributed Local SGD
- Local SGD with Adaptive Synchronization (LUPA-SGD)
- Adaptive Personalized Federated Learning (APFL)
- Distributionally Robust Federated Learning (DRFA)
- Federated Learning with Gradient Tracking and Compression (FedGATE and FedCOMGATE)
- FedAvg
- SCAFFOLD
- Qsparse Local SGD
- AFL
- FedProx
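Most of the algorithms above share a common server-side step: a weighted average of client model parameters, as in FedAvg. The following is a minimal pure-Python sketch of that step for illustration only; the function name and data layout are hypothetical and do not reflect FedTorch's actual API.

```python
# Illustrative federated averaging (FedAvg) aggregation step.
# Each client trains locally; the server then averages client parameters,
# weighting each client by its number of local samples.
# Hypothetical sketch -- not FedTorch's implementation.

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client parameter vectors.

    client_weights: list of parameter vectors (lists of floats), one per client.
    client_sizes: number of local samples per client (used as averaging weights).
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_weights[i] += (n / total) * w[i]
    return global_weights

# Two clients; the second holds three times as much data,
# so its parameters dominate the average.
w_global = fedavg_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Methods such as SCAFFOLD, FedGATE, and FedProx modify the local update or add correction/compression terms, but still reduce to an aggregation of this general shape on the server.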
Different models available for training:
- Convex:
  - Least Squares Linear Regression
  - Logistic Regression
  - Robust Regression
  - Robust Logistic Regression
- Non-Convex:
  - MLP
  - CNN
  - RNN
  - ResNet
  - DenseNet
  - WideResNet
  - Robust MLP
Datasets included in this package, with iid and non-iid distribution mechanisms:
- MNIST
- CIFAR10 and CIFAR100
- Fashion MNIST
- EMNIST and FEMNIST
- STL10
- Epsilon
- Year Prediction MSD
- RCV1
- Higgs
- Synthetic Dataset
- Adult
- Shakespeare Dataset
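To illustrate what a non-iid distribution mechanism can look like, here is a sketch of the common label-sorted "shard" partition: samples are sorted by label and each client receives a few contiguous shards, so each client sees only a handful of classes. This is a generic illustration with hypothetical names; FedTorch's exact partitioning scheme may differ.

```python
# Sketch of a label-sorted shard partition for non-iid federated splits.
# Hypothetical illustration -- not FedTorch's actual partitioning code.
import random

def partition_noniid(labels, num_clients, shards_per_client=2, seed=0):
    """Assign sample indices to clients via label-sorted shards.

    Sorting indices by label and giving each client a few contiguous
    shards concentrates each client's data on a small set of classes.
    """
    idx = sorted(range(len(labels)), key=lambda i: labels[i])
    num_shards = num_clients * shards_per_client
    shard_size = len(idx) // num_shards
    shards = [idx[s * shard_size:(s + 1) * shard_size]
              for s in range(num_shards)]
    rng = random.Random(seed)
    rng.shuffle(shards)  # randomize which shards each client receives
    return [sum(shards[c * shards_per_client:(c + 1) * shards_per_client], [])
            for c in range(num_clients)]

# 100 samples over 10 classes, split among 5 clients:
# each client ends up with samples from at most 2 classes.
labels = [i % 10 for i in range(100)]
parts = partition_noniid(labels, num_clients=5)
```

An iid split, by contrast, would simply shuffle all indices uniformly and deal them out round-robin, so every client sees every class in roughly equal proportion.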