Doc cleanup for nx-cugraph: fixed typos, cleaned up various descriptions, renamed notebook to match naming convention. (#4478)

closes #4466 

* Fixed typos
* Changed various descriptions to match existing terminology and hopefully clarify
* Renamed notebook to match file name conventions

Authors:
  - Rick Ratzel (https://github.com/rlratzel)
  - Ralph Liu (https://github.com/nv-rliu)
  - Don Acosta (https://github.com/acostadon)

Approvers:
  - Don Acosta (https://github.com/acostadon)
  - James Lamb (https://github.com/jameslamb)
  - Erik Welch (https://github.com/eriknw)

URL: #4478
rlratzel authored Jul 2, 2024
1 parent 380879f commit eab0460
Showing 6 changed files with 384 additions and 264 deletions.
8 changes: 4 additions & 4 deletions dependencies.yaml
@@ -559,7 +559,9 @@ dependencies:
common:
- output_types: [conda, pyproject]
packages:
- &thrift thriftpy2
# this thriftpy2 entry can be removed entirely (or switched to a '!=')
# once a new release of that project resolves https://github.com/Thriftpy/thriftpy2/issues/281
- &thrift thriftpy2<=0.5.0
python_run_cugraph_service_server:
common:
- output_types: [conda, pyproject]
@@ -625,9 +627,7 @@ dependencies:
- output_types: [conda]
packages:
- &pylibwholegraph_conda pylibwholegraph==24.8.*,>=0.0.0a0
# this thriftpy2 entry can be removed entirely (or switched to a '!=')
# once a new release of that project resolves https://github.com/Thriftpy/thriftpy2/issues/281
- thriftpy2<=0.5.0
- *thrift
test_python_pylibcugraph:
common:
- output_types: [conda, pyproject]
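The change above works because `&thrift` defines a YAML anchor and `*thrift` is an alias that reuses it, so the version constraint now only needs to be stated once. A minimal illustration of that behavior, using PyYAML and made-up key names (not taken from dependencies.yaml):

```python
# Illustrative sketch only: shows that a YAML alias (*thrift) resolves to the
# value defined at the anchor (&thrift), so pinning the anchor pins every use.
import yaml  # assumes PyYAML is available

snippet = """
run_packages:
  - &thrift thriftpy2<=0.5.0
test_packages:
  - *thrift
"""

parsed = yaml.safe_load(snippet)
print(parsed["test_packages"][0])  # -> thriftpy2<=0.5.0
```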
27 changes: 13 additions & 14 deletions docs/cugraph/source/nx_cugraph/nx_cugraph.md
@@ -1,18 +1,17 @@
### nx_cugraph


Whereas previous versions of cuGraph have included mechanisms to make it
trivial to plug in cuGraph algorithm calls. Beginning with version 24.02, nx-cuGraph
is now a [networkX backend](<https://networkx.org/documentation/stable/reference/utils.html#backends>).
The user now need only [install nx-cugraph](<https://github.com/rapidsai/cugraph/blob/branch-24.08/python/nx-cugraph/README.md#install>)
to experience GPU speedups.
nx-cugraph is a [NetworkX
backend](<https://networkx.org/documentation/stable/reference/utils.html#backends>) that provides GPU acceleration to many popular NetworkX algorithms.

Lets look at some examples of algorithm speedups comparing CPU based NetworkX to dispatched versions run on GPU with nx_cugraph.
By simply [installing and enabling nx-cugraph](<https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#install>), users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation. With nx-cugraph, users can have GPU-based, large-scale performance without changing their familiar and easy-to-use NetworkX code.

Let's look at some examples of algorithm speedups comparing NetworkX with and without GPU acceleration using nx-cugraph.

Each chart has three measurements.
* NX - running the algorithm natively with networkX on CPU.
* nx-cugraph - running with GPU accelerated networkX achieved by simply calling the cugraph backend. This pays the overhead of building the GPU resident object for each algorithm called. This achieves significant improvement but stil isn't compleltely optimum.
* nx-cugraph (preconvert) - This is a bit more complicated since it involves building (precomputing) the GPU resident graph ahead and reusing it for each algorithm.
* NX - default NetworkX, no GPU acceleration
* nx-cugraph - GPU-accelerated NetworkX using nx-cugraph. This involves an internal conversion/transfer of graph data from CPU to GPU memory
* nx-cugraph (preconvert) - GPU-accelerated NetworkX using nx-cugraph with the graph data pre-converted/transferred to GPU


![Ancestors](../images/ancestors.png)
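The "nx-cugraph (preconvert)" measurement described above corresponds to converting the NetworkX graph to a GPU-resident nx-cugraph graph once and reusing it across algorithm calls. A rough sketch of that pattern, assuming the `nx_cugraph` package is installed and exposes `from_networkx` (the small built-in graph here is only a stand-in for a real workload):

```python
# Rough sketch, not part of this commit: pre-convert once, then reuse the
# GPU-resident graph for several algorithm calls instead of reconverting.
import networkx as nx
import nx_cugraph as nxcg  # assumes nx-cugraph is installed

G = nx.karate_club_graph()     # stand-in for a large real-world graph
G_gpu = nxcg.from_networkx(G)  # one-time CPU -> GPU conversion (assumed API)

bc = nx.betweenness_centrality(G_gpu)  # dispatched to the cugraph backend
pr = nx.pagerank(G_gpu)
```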
@@ -44,25 +43,25 @@ user@machine:/# ipython bc_demo.ipy

You will observe a run time of approximately 7 minutes...more or less depending on your cpu.

Run the command again, this time specifiying cugraph as the NetworkX backend of choice.
Run the command again, this time specifying cugraph as the NetworkX backend.
```
user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
```
This run will be much faster, typically around 20 seconds depending on your GPU.
```
user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
```
There is also an option to add caching. This will dramatically help performance when running multiple algorithms on the same graph.
There is also an option to cache the graph conversion to GPU. This can dramatically improve performance when running multiple algorithms on the same graph.
```
NETWORKX_BACKEND_PRIORITY=cugraph CACHE_CONVERTED_GRAPH=True ipython bc_demo.ipy
NETWORKX_BACKEND_PRIORITY=cugraph NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
```
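The same settings shown in the shell command above can also be applied from inside a script by setting the environment variables before NetworkX is imported; a minimal, hypothetical sketch:

```python
# Minimal sketch: mirror of the shell command above. The variables are read by
# NetworkX when it is imported, so set them first.
import os

os.environ["NETWORKX_BACKEND_PRIORITY"] = "cugraph"
os.environ["NETWORKX_CACHE_CONVERTED_GRAPHS"] = "True"

import networkx as nx

G = nx.karate_club_graph()
nx.betweenness_centrality(G)  # dispatched to nx-cugraph, conversion cached
```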

When running Python interactively, cugraph backend can be specified as an argument in the algorithm call.
When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.

For example:
```
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```
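For a fully self-contained variant of the call above (`cit_patents_graph` and `k` are placeholders in the original), the sketch below substitutes a small generated graph and an explicit sample size:

```python
# Sketch only: per-call dispatch via the backend argument on a synthetic graph
# standing in for cit_patents_graph. Requires nx-cugraph to be installed.
import networkx as nx

G = nx.erdos_renyi_graph(1_000, 0.01, seed=42)
bc = nx.betweenness_centrality(G, k=100, backend="cugraph")
print(sorted(bc, key=bc.get, reverse=True)[:5])  # five most central nodes
```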


The latest list of algorithms that can be dispatched to nx-cuGraph for acceleration is found [here](https://github.com/rapidsai/cugraph/blob/main/python/nx-cugraph/README.md#algorithms).
The latest list of algorithms supported by nx-cugraph can be found [here](https://github.com/rapidsai/cugraph/blob/main/python/nx-cugraph/README.md#algorithms).