Commit

Address comments
YuanTingHsieh committed Jan 23, 2024
1 parent bfad201 commit 9afa665
Showing 5 changed files with 3 additions and 63 deletions.
12 changes: 0 additions & 12 deletions docs/concepts.rst

This file was deleted.

1 change: 0 additions & 1 deletion docs/index.rst
@@ -12,7 +12,6 @@ NVIDIA FLARE
example_applications_algorithms
real_world_fl
user_guide
- concepts
programming_guide
best_practices
faq
File renamed without changes.
2 changes: 1 addition & 1 deletion examples/hello-world/step-by-step/cifar10/sag/sag.ipynb
@@ -21,7 +21,7 @@
"\n",
"## Scatter and Gather (SAG)\n",
"\n",
- "Our Scatter and Gather workflow are similar to the Message Passing Interface (MPI)'s MPI Broadcast + MPI Gather. [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) is a standardized and portable message-passing standard designed to function on parallel computing architectures. MPI consists of some [collective communication routines](https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/), such as MPI Broadcast, MPI Scatter, and MPI Gather.\n",
+ "FLARE's Scatter and Gather workflow is similar to the Message Passing Interface (MPI)'s MPI Broadcast + MPI Gather. [MPI](https://en.wikipedia.org/wiki/Message_Passing_Interface) is a standardized and portable message-passing standard designed to function on parallel computing architectures. MPI consists of some [collective communication routines](https://mpitutorial.com/tutorials/mpi-broadcast-and-collective-communication/), such as MPI Broadcast, MPI Scatter, and MPI Gather.\n",
"\n",
"<img src=\"mpi_scatter.png\" alt=\"scatter\" width=25% height=20% /><img src=\"mpi_gather.png\" alt=\"gather\" width=25% height=20% />\n",
"\n",
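The MPI analogy in the notebook text above can be made concrete with a minimal, framework-free Python sketch of one scatter-and-gather round: the server broadcasts the global weights to every client (scatter), each client produces a local update, and the server averages the gathered results (gather). Note that `client_train` and the averaging rule here are illustrative placeholders, not FLARE's actual API.

```python
import statistics

def client_train(weights, shift):
    # Stand-in for local training: each client nudges every weight by a
    # client-specific amount so the gathered updates differ.
    return [w + shift for w in weights]

def scatter_and_gather_round(global_weights, client_shifts):
    # Scatter: every client receives a copy of the current global weights.
    local_updates = [client_train(global_weights, s) for s in client_shifts]
    # Gather: the server collects all updates and averages them element-wise.
    return [round(statistics.fmean(ws), 6) for ws in zip(*local_updates)]

print(scatter_and_gather_round([1.0, 2.0], client_shifts=[0.1, 0.3]))  # [1.2, 2.2]
```

With zero-shift clients the round is a fixed point, which is a quick sanity check that the aggregation step preserves an unchanged model.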
51 changes: 2 additions & 49 deletions nvflare/client/api.py
@@ -61,58 +61,11 @@ def _register_tensor_decomposer():


def init(
- config: str = f"config/{CLIENT_API_CONFIG}",
rank: Optional[str] = None,
) -> None:
"""Initializes NVFlare Client API environment.
- Note:
- An example of the config file's content looks like the following:
- .. code-block:: json
- {
- "METRICS_EXCHANGE": {
- "pipe_channel_name": "metric",
- "pipe": {
- "CLASS_NAME": "nvflare.fuel.utils.pipe.cell_pipe.CellPipe",
- "ARG": {
- "mode": "ACTIVE",
- "site_name": "site-1",
- "token": "simulate_job",
- "root_url": "tcp://0:51893",
- "secure_mode": false,
- "workspace_dir": "xxx"
- }
- }
- },
- "SITE_NAME": "site-1",
- "JOB_ID": "simulate_job",
- "TASK_EXCHANGE": {
- "train_with_eval": true,
- "exchange_format": "numpy",
- "transfer_type": "DIFF",
- "train_task_name": "train",
- "eval_task_name": "evaluate",
- "submit_model_task_name": "submit_model",
- "pipe_channel_name": "task",
- "pipe": {
- "CLASS_NAME": "nvflare.fuel.utils.pipe.cell_pipe.CellPipe",
- "ARG": {
- "mode": "ACTIVE",
- "site_name": "site-1",
- "token": "simulate_job",
- "root_url": "tcp://0:51893",
- "secure_mode": false,
- "workspace_dir": "xxx"
- }
- }
- }
- }
Args:
- config (str): path to the configuration file.
rank (str): local rank of the process.
It is only useful when the training script has multiple worker processes. (for example multi GPU)
@@ -123,7 +76,7 @@ def init(
.. code-block:: python
- nvflare.client.init(config="./config.json")
+ nvflare.client.init()
"""
@@ -136,7 +89,7 @@ def init(
print("Warning: called init() more than once. The subsequence calls are ignored")
return

- client_config = _create_client_config(config=config)
+ client_config = _create_client_config(config=f"config/{CLIENT_API_CONFIG}")

flare_agent = None
try:
