
Commit ba77afb

add README files (#71)

* fix spelling SAG readme
* add example readmes
* add link to MONAI nvflare tutorial
* update root readme

1 parent cbc0f8f commit ba77afb

File tree: 7 files changed (+265 -0 lines changed)

examples/README.md

+33

# NVIDIA FLARE Examples

[NVIDIA FLARE](https://nvidia.github.io/NVFlare) provides several examples to help you get started using federated learning for your own applications.

The provided examples cover different aspects of [NVIDIA FLARE](https://nvidia.github.io/NVFlare), such as using the provided [Controllers](https://nvidia.github.io/NVFlare/programming_guide/controllers.html) for "scatter and gather" or "cyclic weight transfer" workflows and example [Executors](https://nvidia.github.io/NVFlare/apidocs/nvflare.apis.html?#module-nvflare.apis.executor) to implement your own training and validation pipelines. Some examples use the provided "task data" and "task result" [Filters](https://nvidia.github.io/NVFlare/apidocs/nvflare.apis.html?#module-nvflare.apis.filter) for homomorphic encryption and decryption or differential privacy. Furthermore, we show how to use different components for FL algorithms such as [FedAvg](https://arxiv.org/abs/1602.05629), [FedProx](https://arxiv.org/abs/1812.06127), and [FedOpt](https://arxiv.org/abs/2003.00295). We also provide domain-specific examples for deep learning and medical image analysis.

> **_NOTE:_** To run the examples, please follow the instructions for [Installation](https://nvidia.github.io/NVFlare/installation.html) and any additional steps specified in the example readmes.
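
For orientation, here is a minimal sketch of the client-side Executor interface that the hello-world examples implement (illustrative only; the actual trainer classes in each example app differ):

```python
from nvflare.apis.executor import Executor
from nvflare.apis.fl_constant import ReturnCode
from nvflare.apis.fl_context import FLContext
from nvflare.apis.shareable import Shareable, make_reply
from nvflare.apis.signal import Signal


class MyTrainer(Executor):
    """Runs on each client; a server-side Controller decides which tasks it receives."""

    def execute(self, task_name: str, shareable: Shareable,
                fl_ctx: FLContext, abort_signal: Signal) -> Shareable:
        if task_name == "train":
            # Run local training on this client's data here and return the
            # updated weights inside the reply Shareable.
            return make_reply(ReturnCode.OK)
        return make_reply(ReturnCode.TASK_UNKNOWN)
```
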
## 1. Hello World Examples

### 1.1 Workflows
* [Hello Scatter and Gather](./hello-numpy-sag/README.md)
    * Example using the "[ScatterAndGather](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html?#module-nvflare.app_common.workflows.scatter_and_gather)" controller workflow.
* [Hello Cross-Site Validation](./hello-numpy-cross-val/README.md)
    * Example using the [CrossSiteModelEval](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html#nvflare.app_common.workflows.cross_site_model_eval.CrossSiteModelEval) controller workflow.
* [Hello Cyclic Weight Transfer](./hello-cyclic/README.md)
    * Example using the [CyclicController](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html?#module-nvflare.app_common.workflows.cyclic_ctl) controller workflow to implement [Cyclic Weight Transfer](https://pubmed.ncbi.nlm.nih.gov/29617797/).

### 1.2 Deep Learning
* [Hello PyTorch](./hello-pt/README.md)
    * Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train an image classifier using [FedAvg](https://arxiv.org/abs/1602.05629) and [PyTorch](https://pytorch.org/) as the deep learning training framework.
* [Hello TensorFlow](./hello-tf2/README.md)
    * Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train an image classifier using [FedAvg](https://arxiv.org/abs/1602.05629) and [TensorFlow](https://tensorflow.org/) as the deep learning training framework.

## 2. FL Algorithms
* [Federated Learning with CIFAR-10](./cifar10/README.md)
    * Includes examples of using [FedAvg](https://arxiv.org/abs/1602.05629), [FedProx](https://arxiv.org/abs/1812.06127), [FedOpt](https://arxiv.org/abs/2003.00295), and [homomorphic encryption](https://developer.nvidia.com/blog/federated-learning-with-homomorphic-encryption/).

## 3. Medical Image Analysis
* [Hello MONAI](./hello-monai/README.md)
    * Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train a medical image analysis model using [FedAvg](https://arxiv.org/abs/1602.05629) and [MONAI](https://monai.io/).
* [Federated Learning with Differential Privacy for BraTS18 segmentation](./brats18/README.md)
    * Illustrates the use of differential privacy for training brain tumor segmentation models using federated learning.
* [Federated Learning for Prostate Segmentation from Multi-source Data](./prostate/README.md)
    * Example of training a multi-institutional prostate segmentation model using [FedAvg](https://arxiv.org/abs/1602.05629) and [FedProx](https://arxiv.org/abs/1812.06127).

examples/hello-cyclic/README.md

+41

# Hello Cyclic Weight Transfer

["Cyclic Weight Transfer"](https://pubmed.ncbi.nlm.nih.gov/29617797/) (CWT) is an alternative to the scatter-and-gather approach used in [FedAvg](https://arxiv.org/abs/1602.05629). CWT uses the [CyclicController](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html?#module-nvflare.app_common.workflows.cyclic_ctl) to pass the model weights from one site to the next for repeated fine-tuning.
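
To make the relay order concrete, here is a small, self-contained illustration of the idea (plain NumPy, not NVFlare code; `local_finetune` is a hypothetical stand-in for a site's local training step):

```python
import numpy as np


def local_finetune(weights: np.ndarray, site_data: np.ndarray) -> np.ndarray:
    # Hypothetical local update: nudge the weights toward the site's data mean.
    return weights + 0.1 * (site_data.mean() - weights)


# Each "site" holds its own private data; only the weights travel between sites.
sites = {name: np.random.randn(200) + i for i, name in enumerate(["site-1", "site-2", "site-3"])}

weights = np.zeros(4)
for _ in range(3):                    # outer rounds
    for name, data in sites.items():  # the weights visit the sites one after another
        weights = local_finetune(weights, data)
print(weights)
```
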
> **_NOTE:_** This example uses the [MNIST](http://yann.lecun.com/exdb/mnist/) handwritten digits dataset and will load its data within the trainer code.

### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.
Install additional requirements:

```
pip3 install tensorflow
```

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-cyclic
deploy_app hello-cyclic all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).

examples/hello-monai/README.md

+40

# Hello MONAI

Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train a medical image analysis model using federated averaging ([FedAvg](https://arxiv.org/abs/1602.05629)) and [MONAI](https://monai.io/), the "Medical Open Network for Artificial Intelligence", as the deep learning training framework.

See this [Tutorial](https://github.com/Project-MONAI/tutorials/tree/master/federated_learning/nvflare/nvflare_spleen_example) for an example of how to use this trainer for 3D spleen segmentation in computed tomography.
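
For reference, a 3D segmentation network of the kind used in the spleen tutorial can be instantiated with a few lines of MONAI (a sketch only; the configuration in the example app and tutorial may differ):

```python
import torch
from monai.networks.nets import UNet

# Sketch of a 3D U-Net such as the spleen tutorial trains (configuration is illustrative).
model = UNet(
    spatial_dims=3,
    in_channels=1,          # single CT channel
    out_channels=2,         # background + spleen
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
print(model(torch.randn(1, 1, 96, 96, 96)).shape)  # torch.Size([1, 2, 96, 96, 96])
```
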
### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.
Install additional requirements:

```
pip3 install monai
```

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-monai
deploy_app hello-monai all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).

examples/hello-numpy-cross-val/README.md

+35

# Hello Numpy Cross-Site Validation

The cross-site model evaluation workflow uses the data from clients to run evaluation with the models of other clients. Data is not shared. Rather, the collection of models is distributed to each client site to run local validation. The server collects the results of local validation to construct an all-to-all matrix of model performance vs. client dataset. It uses the [CrossSiteModelEval](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html#nvflare.app_common.workflows.cross_site_model_eval.CrossSiteModelEval) controller workflow.

> **_NOTE:_** This example uses a Numpy-based trainer and will generate its data within the code.
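
The shape of that all-to-all result can be pictured with a small stand-in computation (illustrative only; `evaluate` is a hypothetical scoring function, and the controller's actual output format is documented in the API reference):

```python
import numpy as np


def evaluate(model: np.ndarray, data: np.ndarray) -> float:
    # Hypothetical score: negative distance between the model and the data mean.
    return float(-abs(model.mean() - data.mean()))


models = {"site-1": np.array([1.0]), "site-2": np.array([2.0]), "server": np.array([1.5])}
datasets = {"site-1": np.array([1.1, 0.9]), "site-2": np.array([2.2, 1.8])}

# Every model is scored on every client's local dataset.
matrix = {m: {d: evaluate(mw, dv) for d, dv in datasets.items()} for m, mw in models.items()}
for m, row in matrix.items():
    print(m, row)
```
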
### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-numpy-cross-val
deploy_app hello-numpy-cross-val all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).

examples/hello-numpy-sag/README.md

+36

# Hello Numpy Scatter and Gather

"[Scatter and Gather](https://nvidia.github.io/NVFlare/apidocs/nvflare.app_common.workflows.html?#module-nvflare.app_common.workflows.scatter_and_gather)" is the standard workflow to implement Federated Averaging ([FedAvg](https://arxiv.org/abs/1602.05629)).
This workflow follows the hub and spoke model for communicating the global model to each client for local training (i.e., "scattering") and aggregating the results to perform the global model update (i.e., "gathering").

> **_NOTE:_** This example uses a Numpy-based trainer and will generate its data within the code.
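
The "gather" step boils down to a weighted average of the client updates. A minimal NumPy sketch (illustrative only, not the ScatterAndGather controller's internals; it assumes sample-count weighting):

```python
import numpy as np

# (model weights, number of local training samples) reported by each client
client_updates = {
    "site-1": (np.array([1.0, 2.0, 3.0]), 600),
    "site-2": (np.array([2.0, 0.0, 1.0]), 400),
}

total = sum(n for _, n in client_updates.values())
global_weights = sum(w * (n / total) for w, n in client_updates.values())
print(global_weights)  # [1.4 1.2 2.2]
```
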
### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-numpy-sag
deploy_app hello-numpy-sag all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).

examples/hello-pt/README.md

+40

# Hello PyTorch

Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train an image classifier using federated averaging ([FedAvg](https://arxiv.org/abs/1602.05629)) and [PyTorch](https://pytorch.org/) as the deep learning training framework.

> **_NOTE:_** This example uses the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset and will load its data within the trainer code.
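
For context, a small CIFAR-10 classifier of the kind a trainer like this might define looks roughly as follows (a sketch only; the network in the example app may differ):

```python
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Tiny CIFAR-10 classifier (illustrative, not the example's actual network)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 10)

    def forward(self, x):
        return self.classifier(torch.flatten(self.features(x), 1))


print(SmallCNN()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 10])
```
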
### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.
Install additional requirements:

```
pip3 install torch
```

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-pt
deploy_app hello-pt all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).

examples/hello-tf2/README.md

+40

# Hello TensorFlow

Example of using [NVIDIA FLARE](https://nvidia.github.io/NVFlare) to train an image classifier using federated averaging ([FedAvg](https://arxiv.org/abs/1602.05629)) and [TensorFlow](https://tensorflow.org/) as the deep learning training framework.

> **_NOTE:_** This example uses the [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset and will load its data within the trainer code.
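
For context, a small CIFAR-10 classifier in Keras might look roughly like this (a sketch only; the model in the example app may differ):

```python
import tensorflow as tf

# Tiny CIFAR-10 classifier (illustrative, not the example's actual network).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.summary()
```
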
### 1. Install NVIDIA FLARE

Follow the [Installation](https://nvidia.github.io/NVFlare/installation.html) instructions.
Install additional requirements:

```
pip3 install tensorflow
```

### 2. Set up your FL workspace

Follow the [Quickstart](https://nvidia.github.io/NVFlare/quickstart.html) instructions to set up your POC ("proof of concept") workspace.

### 3. Run the experiment

Log into the Admin client by entering `admin` for both the username and password.
Then, use these Admin commands to run the experiment:

```
set_run_number 1
upload_app hello-tf2
deploy_app hello-tf2 all
start_app all
```

### 4. Shut down the server/clients

To shut down the clients and server, run the following Admin commands:

```
shutdown client
shutdown server
```

> **_NOTE:_** For more information about the Admin client, see [here](https://nvidia.github.io/NVFlare/user_guide/admin_commands.html).
