
Cherry pick docs update to 2.3 branch (#1669)
* Add markdown link check workflow [skip ci] (#1660)

* Add markdown link check workflow

* Fix links

* Fix links

* Check modified files only

* cherry pick docs update to 2.3 branch

---------

Co-authored-by: Yuan-Ting Hsieh (謝沅廷) <[email protected]>
Co-authored-by: Chester Chen <[email protected]>
3 people authored Apr 7, 2023
1 parent d99c0d6 commit 9976471
Showing 7 changed files with 165 additions and 64 deletions.
15 changes: 7 additions & 8 deletions docs/example_applications_algorithms.rst
@@ -38,7 +38,6 @@ The following tutorials and quickstart guides walk you through some of these examples
* `Intro to the FL Simulator <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/flare_simulator.ipynb>`_ - Shows how to use the :ref:`fl_simulator` to run a local simulation of an NVFLARE deployment to test and debug an application without provisioning a real FL project.
* `Hello FLARE API <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/flare_api.ipynb>`_ - Goes through the different commands of the :ref:`flare_api` to show the syntax and usage of each.
* `NVFLARE in POC Mode <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/setup_poc.ipynb>`_ - Shows how to use :ref:`POC mode <poc_command>` to test the features of a full FLARE deployment on a single machine.
* `Provision and Start NVFLARE <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/provision.ipynb>`_ - Shows how to provision and start a secure FL system.

3. **FL algorithms**

@@ -165,44 +164,44 @@ Federated Learning Algorithms
=============================

Federated Averaging
^^^^^^^^^^^^^^^^^^^
-------------------
In NVIDIA FLARE, FedAvg is implemented through the :ref:`scatter_and_gather_workflow`. In the federated averaging workflow,
a set of initial weights is distributed to client workers who perform local training. After local training, clients
return their local weights as Shareables, which are aggregated (averaged). This new set of global average weights is
redistributed to clients, and the process repeats for the specified number of rounds.
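
A minimal sketch of the server-side averaging step, assuming each client returns its weights as a dict of
arrays keyed by layer name (plain Python for illustration only, not the NVFLARE source; ``federated_average``
is a hypothetical helper):

.. code-block:: python

    def federated_average(client_weights):
        """Average the weights returned by all clients, layer by layer.

        client_weights: list of dicts mapping layer name -> array, one dict per client.
        """
        num_clients = len(client_weights)
        return {layer: sum(w[layer] for w in client_weights) / num_clients
                for layer in client_weights[0]}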

FedProx
^^^^^^^
-------
`FedProx <https://arxiv.org/abs/1812.06127>`_ implements a :class:`Loss function <nvflare.app_common.pt.pt_fedproxloss.PTFedProxLoss>`
to penalize a client's local weights based on deviation from the global model. An example configuration can be found in
cifar10_fedprox of the `CIFAR-10 example <https://github.com/NVIDIA/NVFlare/tree/main/examples/cifar10>`_.
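
A minimal sketch of the proximal penalty this loss adds, assuming PyTorch models and a coefficient ``mu``
(illustration only, not the ``PTFedProxLoss`` source; ``fedprox_penalty`` is a hypothetical helper):

.. code-block:: python

    import torch

    def fedprox_penalty(local_model, global_model, mu=0.01):
        """(mu / 2) * squared L2 distance between local and global parameters."""
        penalty = 0.0
        for w_local, w_global in zip(local_model.parameters(), global_model.parameters()):
            penalty = penalty + torch.sum((w_local - w_global.detach()) ** 2)
        return 0.5 * mu * penalty

    # During local training: loss = task_loss + fedprox_penalty(model, global_model, mu)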

FedOpt
^^^^^^
------
`FedOpt <https://arxiv.org/abs/2003.00295>`_ implements a :class:`ShareableGenerator <nvflare.app_common.pt.pt_fedopt.PTFedOptModelShareableGenerator>`
that can use a specified Optimizer and Learning Rate Scheduler when updating the global model. An example configuration
can be found in cifar10_fedopt of the `CIFAR-10 example <https://github.com/NVIDIA/NVFlare/tree/main/examples/cifar10>`_.
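
A minimal sketch of a FedOpt-style server update, assuming the averaged client weights arrive as a dict of
tensors keyed by parameter name (illustration only, not the ``PTFedOptModelShareableGenerator`` source;
``fedopt_update`` is a hypothetical helper):

.. code-block:: python

    import torch

    def fedopt_update(global_model, averaged_client_weights, optimizer, scheduler=None):
        """Apply the averaged client update to the global model through a torch optimizer."""
        optimizer.zero_grad()
        for name, param in global_model.named_parameters():
            # Pseudo-gradient: the direction from the current global weights toward the client average.
            param.grad = param.data - averaged_client_weights[name]
        optimizer.step()
        if scheduler is not None:
            scheduler.step()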

SCAFFOLD
^^^^^^^^
--------
`SCAFFOLD <https://arxiv.org/abs/1910.06378>`_ uses a slightly modified version of the CIFAR-10 Learner implementation,
namely the `CIFAR10ScaffoldLearner`, which adds a correction term during local training following the `implementation <https://github.com/Xtra-Computing/NIID-Bench>`_
as described in `Li et al. <https://arxiv.org/abs/2102.02079>`_.
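
A minimal sketch of that correction term, assuming server and client control variates are kept as dicts of
tensors keyed by parameter name (illustration only, not the ``CIFAR10ScaffoldLearner`` source;
``apply_scaffold_correction`` is a hypothetical helper):

.. code-block:: python

    def apply_scaffold_correction(model, c_global, c_local):
        """Adjust each local gradient by (c_global - c_local) before the optimizer step."""
        for name, param in model.named_parameters():
            if param.grad is not None:
                param.grad = param.grad + c_global[name] - c_local[name]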

Ditto
^^^^^
-----
`Ditto <https://arxiv.org/abs/2012.04221>`_ uses a slightly modified version of the prostate Learner implementation,
namely the `ProstateDittoLearner`, which decouples the local personalized model from the global model via an additional
model training step and a controllable prox term. See the `prostate segmentation example <https://github.com/NVIDIA/NVFlare/tree/main/examples/prostate>`_
for an example using Ditto in addition to FedProx, FedAvg, and centralized training.
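
A minimal sketch of the personalized objective, assuming PyTorch models and a controllable prox coefficient
``lam`` (illustration only, not the ``ProstateDittoLearner`` source; the prox pull mirrors the FedProx
penalty shown above, and ``ditto_personal_loss`` is a hypothetical helper):

.. code-block:: python

    import torch

    def ditto_personal_loss(task_loss, personal_model, global_model, lam=0.1):
        """Task loss plus a prox pull of the personalized model toward the global weights."""
        prox = 0.0
        for w_p, w_g in zip(personal_model.parameters(), global_model.parameters()):
            prox = prox + torch.sum((w_p - w_g.detach()) ** 2)
        return task_loss + 0.5 * lam * prox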

Federated XGBoost
^^^^^^^^^^^^^^^^^
-----------------

* `Federated XGBoost (GitHub) <https://github.com/NVIDIA/NVFlare/tree/main/examples/xgboost>`_ - Includes examples of histogram-based and tree-based algorithms. Tree-based algorithms also include bagging and cyclic approaches.

Federated Analytics
^^^^^^^^^^^^^^^^^^^
-------------------

* `Federated Statistics for medical imaging (Github) <https://github.com/NVIDIA/NVFlare/tree/main/examples/federated_statistics/image_stats/README.md>`_ - Example of gathering local image histograms to compute the global dataset histogram (a minimal aggregation sketch follows below).
* `Federated Statistics for tabular data with DataFrame (Github) <https://github.com/NVIDIA/NVFlare/tree/main/examples/federated_statistics/df_stats/README.md>`_ - Example of gathering a local statistics summary from a Pandas DataFrame to compute the global dataset statistics.
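
A minimal sketch of the histogram aggregation mentioned above, assuming every client reports bin counts over
the same shared bin edges (plain Python for illustration only, not the NVFLARE statistics API;
``aggregate_histograms`` is a hypothetical helper):

.. code-block:: python

    def aggregate_histograms(local_histograms):
        """local_histograms: list of equal-length lists of bin counts, one per client."""
        num_bins = len(local_histograms[0])
        return [sum(hist[b] for hist in local_histograms) for b in range(num_bins)]
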
1 change: 0 additions & 1 deletion docs/examples/tutorial_notebooks.rst
@@ -7,4 +7,3 @@ Tutorial Notebooks
FL Simulator Notebook (GitHub) <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/flare_simulator.ipynb>
Hello FLARE API Notebook (GitHub) <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/flare_api.ipynb>
NVFLARE in POC Mode (GitHub) <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/setup_poc.ipynb>
Provision and Start NVFLARE (GitHub) <https://github.com/NVIDIA/NVFlare/blob/main/examples/tutorials/provision.ipynb>
5 changes: 5 additions & 0 deletions docs/programming_guide/high_availability.rst
@@ -138,6 +138,11 @@ If I'm currently hot, and the hot SP has changed to not me, then I transition to
I will prepare to stop serving the client requests. If any requests are received during the Hot-to-Cold state, I will
tell them I am not in service. This is a transition state to the cold state.

.. note::

    While trying to restart unfinished jobs, users should be aware that some jobs may be in a state that contains incomplete results, i.e., some client
    results may not have been received by the server during this transition. As such, users must handle such cases appropriately in the aggregation logic.
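
A minimal sketch of a defensive check in the aggregation logic, assuming client results arrive as a dict keyed
by client name (illustration only, not NVFLARE code; ``min_clients`` and ``aggregate_available_results`` are
hypothetical):

.. code-block:: python

    def aggregate_available_results(results, min_clients):
        """results: dict of client name -> dict of layer name -> array; entries may be missing after a restart."""
        received = list(results.values())
        if len(received) < min_clients:
            # Too few client results survived the restart; skip or re-run this round.
            return None
        return {layer: sum(w[layer] for w in received) / len(received)
                for layer in received[0]}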

Admin Client
------------
Admin Client: No response from Overseer (connection error, etc.)
3 changes: 3 additions & 0 deletions docs/whats_new.rst
@@ -245,6 +245,9 @@ an optional boolean to determine whether or not to allow empty global weights an

Some pipelines can have empty global weights at the first round, such that clients start training from scratch without any global info.

7. Updates to the Job Scheduler Configuration
=============================================
See :ref:`job_scheduler_configuration` for information on how the Job Scheduler can be configured with different arguments.

**************************
Previous Releases of FLARE

