Add nx-cugraph Docs Pages #4669

Merged
merged 18 commits on Oct 3, 2024
Binary file added docs/cugraph/source/_static/colab.png
28 changes: 28 additions & 0 deletions docs/cugraph/source/nx_cugraph/benchmarks.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@
# Benchmarks

## NetworkX vs. nx-cugraph
We ran several commonly used graph algorithms on both `networkx` and `nx-cugraph`. Here are the results:


<figure>

<!-- ![bench-image](link) -->

<figcaption style="text-align: center;">Results of the <a
href="https://github.com/rapidsai/cugraph">Benchmark</a> including <span
class="title-ref">nx-cugraph</span></figcaption>
</figure>

## Reproducing Benchmarks

Below are the steps to reproduce the results on your workstation. These are documented in <https://github.com/rapidsai/cugraph/tree/branch-24.10/benchmarks/nx-cugraph/pytest-based>.

1. Clone the latest <https://github.com/rapidsai/cugraph>

2. Follow the instructions to build an environment

3. Activate the environment

4. Install the latest `nx-cugraph` by following the [guide](installation.md)

5. Follow the instructions written in the README here: `cugraph/benchmarks/nx-cugraph/pytest-based/`
5 changes: 5 additions & 0 deletions docs/cugraph/source/nx_cugraph/faqs.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,5 @@
# FAQ

> **1. Is `nx-cugraph` able to run across multiple GPUs?**

nx-cugraph currently does not support multi-GPU. Multi-GPU support may be added in a future release of nx-cugraph, but consider [cugraph](https://docs.rapids.ai/api/cugraph/stable) for multi-GPU accelerated graph analytics in Python today.
113 changes: 113 additions & 0 deletions docs/cugraph/source/nx_cugraph/how-it-works.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,113 @@
# How it Works

NetworkX has the ability to **dispatch function calls to separately-installed third-party backends**.

NetworkX backends let users experience improved performance and/or additional functionality without changing their NetworkX Python code. Examples include backends that provide algorithm acceleration using GPUs, parallel processing, graph database integration, and more.

While NetworkX is a pure-Python implementation with minimal to no dependencies, backends may be written in other languages and require specialized hardware and/or OS support, additional software dependencies, or even separate services. Installation instructions vary based on the backend, and additional information can be found from the individual backend project pages listed in the NetworkX Backend Gallery.


![nxcg-execution-flow](../_static/nxcg-execution-diagram.jpg)

## Enabling nx-cugraph

NetworkX will use nx-cugraph as the graph analytics backend if any of the
following are used:

### `NETWORKX_BACKEND_PRIORITY` environment variable.

The `NETWORKX_BACKEND_PRIORITY` environment variable can be used to have NetworkX automatically dispatch to specified backends. It can be set to a single backend name or to a comma-separated list of backends, ordered by the priority in which NetworkX should try them. When a NetworkX function is called, NetworkX redirects it to the first backend in the list that supports it, or falls back to the default NetworkX implementation if none do.
For example, this setting will have NetworkX use nx-cugraph for any function called by the script that nx-cugraph supports, and the default NetworkX implementation for all others.
```
bash> NETWORKX_BACKEND_PRIORITY=cugraph python my_networkx_script.py
```

This example will have NetworkX use nx-cugraph for functions it supports, then try other_backend if nx-cugraph does not support them, and finally the default NetworkX implementation if not supported by either backend:
```
bash> NETWORKX_BACKEND_PRIORITY="cugraph,other_backend" python my_networkx_script.py
```
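The priority-based fallback described above can be sketched in plain Python. This is a simplified model for illustration only, not NetworkX's actual dispatch machinery, and the backend names and supported-function sets below are hypothetical:

```python
# Simplified model of priority-based backend dispatch (illustration only --
# this is NOT NetworkX's real dispatch machinery, and the backends and the
# sets of functions they support here are hypothetical).

BACKEND_FUNCS = {
    "cugraph": {"betweenness_centrality", "pagerank"},
    "other_backend": {"pagerank", "louvain_communities"},
}

def dispatch(func_name, priority):
    """Return the first backend in `priority` that supports `func_name`,
    falling back to the default pure-Python implementation."""
    for backend in priority:
        if func_name in BACKEND_FUNCS.get(backend, ()):
            return backend
    return "networkx"  # default implementation

priority = ["cugraph", "other_backend"]
print(dispatch("betweenness_centrality", priority))  # cugraph
print(dispatch("louvain_communities", priority))     # other_backend
print(dispatch("diameter", priority))                # networkx
```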

### `backend=` keyword argument

To explicitly specify a particular backend for an API, use the `backend=`
keyword argument. This argument takes precedence over the
`NETWORKX_BACKEND_PRIORITY` environment variable. This requires anyone
running code that uses the `backend=` keyword argument to have the specified
backend installed.

Example:
```
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```
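The precedence rule can be modeled with a small sketch. This is illustrative only; `resolve_backend`, its parameters, and the `installed` set are hypothetical names, not part of NetworkX's API:

```python
# Sketch of the precedence rule: an explicit backend= keyword overrides any
# priority list, and requires that backend to be installed. Illustrative
# only; these names are hypothetical, not NetworkX internals.

def resolve_backend(priority, keyword_backend=None, installed=("cugraph",)):
    """Pick a backend: explicit keyword first, then the priority list,
    then the default NetworkX implementation."""
    if keyword_backend is not None:
        if keyword_backend not in installed:
            raise ImportError(f"backend '{keyword_backend}' is not installed")
        return keyword_backend            # keyword wins over the priority list
    for backend in priority:
        if backend in installed:
            return backend
    return "networkx"

print(resolve_backend(["other_backend"], keyword_backend="cugraph"))  # cugraph
print(resolve_backend([]))                                            # networkx
```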

### Type-based dispatching

NetworkX also supports automatically dispatching to backends associated with
specific graph types. Like the `backend=` keyword argument example above, this
requires the user to write code for a specific backend, and therefore requires
the backend to be installed, but has the advantage of ensuring a particular
behavior without the potential for runtime conversions.

To use type-based dispatching with nx-cugraph, the user must import the backend
directly in their code to access the utilities provided to create a Graph
instance specifically for the nx-cugraph backend.

Example:
```
import networkx as nx
import nx_cugraph as nxcg

G = nx.Graph()
...
nxcg_G = nxcg.from_networkx(G) # conversion happens once here
nx.betweenness_centrality(nxcg_G, k=1000) # nxcg Graph type causes cugraph backend
# to be used, no conversion necessary
```

## Command Line Example

---

Create `bc_demo.ipy` and paste the code below.

```
import pandas as pd
import networkx as nx

url = "https://data.rapids.ai/cugraph/datasets/cit-Patents.csv"
df = pd.read_csv(url, sep=" ", names=["src", "dst"], dtype="int32")
G = nx.from_pandas_edgelist(df, source="src", target="dst")

%time result = nx.betweenness_centrality(G, k=10)
```
Run the command:
```
user@machine:/# ipython bc_demo.ipy
```

You will observe a run time of approximately 7 minutes, more or less depending on your CPU.

Run the command again, this time specifying cugraph as the NetworkX backend.
```
user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
```
This run will be much faster, typically around 20 seconds depending on your GPU.
There is also an option to cache the graph conversion to GPU, which can dramatically improve performance when running multiple algorithms on the same graph. Caching is enabled by default for NetworkX versions 3.4 and later; if using an older version, set `NETWORKX_CACHE_CONVERTED_GRAPHS=True`:
```
NETWORKX_BACKEND_PRIORITY=cugraph NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
```

When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.

For example:
```
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```


The latest list of algorithms supported by nx-cugraph can be found [here](https://github.com/rapidsai/cugraph/blob/main/python/nx-cugraph/README.md#algorithms) or in the next section.

---
49 changes: 44 additions & 5 deletions docs/cugraph/source/nx_cugraph/index.rst
Original file line number Diff line number Diff line change
@@ -1,9 +1,48 @@
===============================
nxCugraph as a NetworkX Backend
===============================
nx-cugraph
-----------

nx-cugraph is a `NetworkX backend <https://networkx.org/documentation/stable/reference/utils.html#backends>`_ that provides **GPU acceleration** to many popular NetworkX algorithms.

By simply `installing and enabling nx-cugraph <https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#install>`_, users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation. With ``nx-cugraph``, users can have GPU-based, large-scale performance **without** changing their familiar and easy-to-use NetworkX code.

.. code-block:: python

import pandas as pd
import networkx as nx

url = "https://data.rapids.ai/cugraph/datasets/cit-Patents.csv"
df = pd.read_csv(url, sep=" ", names=["src", "dst"], dtype="int32")
G = nx.from_pandas_edgelist(df, source="src", target="dst")

%time result = nx.betweenness_centrality(G, k=10)

.. figure:: ../_static/colab.png
:width: 200px
:target: https://github.com/rapidsai-community/showcase/blob/632d50068f2dec08a591c98f4c41092eab2dfae4/getting_started_tutorials/accelerated_networkx_demo.ipynb/

Try it on Google Colab!


+----------------------------------------------------------------------+
| **Zero Code Change Acceleration**                                    |
|                                                                      |
| Just ``nx.config.backend_priority=["cugraph"]`` in Jupyter, or set   |
| ``NETWORKX_BACKEND_PRIORITY=cugraph`` in the shell.                  |
+----------------------------------------------------------------------+
| **Run the same code on CPU or GPU**                                  |
|                                                                      |
| Nothing changes, not even your ``import`` statements,                |
| when going from CPU to GPU.                                          |
+----------------------------------------------------------------------+


``nx-cugraph`` is now Generally Available (GA) as part of the ``RAPIDS`` package. See `RAPIDS
Quick Start <https://rapids.ai/#quick-start>`_ to get up-and-running with ``nx-cugraph``.

.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: Contents:

nx_cugraph.md
how-it-works
supported-algorithms
installation
benchmarks
faqs
50 changes: 50 additions & 0 deletions docs/cugraph/source/nx_cugraph/installation.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,50 @@
# Getting Started

This guide describes how to install ``nx-cugraph`` and use it in your workflows.


## System Requirements

`nx-cugraph` requires the following:

- **Volta architecture or later NVIDIA GPU, with [compute capability](https://developer.nvidia.com/cuda-gpus) 7.0+**
- **[CUDA](https://docs.nvidia.com/cuda/index.html) 11.2, 11.4, 11.5, 11.8, 12.0, 12.2, or 12.5**
- **Python >= 3.10**
- **[NetworkX](https://networkx.org/documentation/stable/install.html#) >= 3.0 (version 3.2 or higher recommended)**

More details about system requirements can be found in the [RAPIDS System Requirements Documentation](https://docs.rapids.ai/install#system-req).
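The Python-side requirements above can be sanity-checked with a small helper. This is a sketch using only the standard library; it compares numeric dotted version strings and does not check the GPU, driver, or CUDA toolkit:

```python
import sys

# Sketch of a Python-side requirements check (standard library only).
# Replace the sample strings with networkx.__version__ in a real environment.

def meets_min(version, minimum):
    """True if numeric dotted version string `version` is at least `minimum`."""
    def _parse(v):
        return tuple(int(part) for part in v.split(".")[:3])
    return _parse(version) >= _parse(minimum)

py_ok = sys.version_info >= (3, 10)       # Python >= 3.10 requirement

print(meets_min("3.2.1", "3.0"))  # True  (meets the NetworkX >= 3.0 requirement)
print(meets_min("2.8", "3.0"))    # False (below the required NetworkX 3.0)
```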

## Installing nx-cugraph

Read the [RAPIDS Quick Start Guide](https://docs.rapids.ai/install) to learn more about installing all RAPIDS libraries.

`nx-cugraph` can be installed using conda or pip. It is included in the RAPIDS metapackage, or can be installed separately.

### Conda
**Nightly version**
```bash
conda install -c rapidsai-nightly -c conda-forge -c nvidia nx-cugraph
```

**Stable version**
```bash
conda install -c rapidsai -c conda-forge -c nvidia nx-cugraph
```

### pip
**Nightly version**
```bash
pip install nx-cugraph-cu11 --extra-index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
```

**Stable version**
```bash
pip install nx-cugraph-cu11 --extra-index-url https://pypi.nvidia.com
```

<div style="border: 1px solid #ccc; background-color: #f9f9f9; padding: 10px; border-radius: 5px;">

**Note:**
- The `pip install` examples above are for CUDA 11. To install for CUDA 12, replace `-cu11` with `-cu12`.

</div>
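After installing, a quick way to confirm the package is present is to check that it is importable. This is a sketch: it only checks that the module can be found, not that a compatible GPU or driver is available:

```python
import importlib.util

# Quick post-install check (a sketch): reports whether the nx-cugraph module
# can be found. It does not verify that a compatible GPU is present.

def backend_available(module_name="nx_cugraph"):
    """Return True if `module_name` is importable in this environment."""
    return importlib.util.find_spec(module_name) is not None

if backend_available():
    print("nx-cugraph is installed")
else:
    print("nx-cugraph not found -- see the install commands above")
```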
57 changes: 3 additions & 54 deletions docs/cugraph/source/nx_cugraph/nx_cugraph.md
Original file line number Diff line number Diff line change
@@ -1,18 +1,10 @@
### nx_cugraph


nx-cugraph is a [NetworkX
backend](<https://networkx.org/documentation/stable/reference/utils.html#backends>) that provides GPU acceleration to many popular NetworkX algorithms.

By simply [installing and enabling nx-cugraph](<https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#install>), users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation. With nx-cugraph, users can have GPU-based, large-scale performance without changing their familiar and easy-to-use NetworkX code.

Let's look at some examples of algorithm speedups comparing NetworkX with and without GPU acceleration using nx-cugraph.

Each chart has three measurements.
* NX - default NetworkX, no GPU acceleration
* nx-cugraph - GPU-accelerated NetworkX using nx-cugraph. This involves an internal conversion/transfer of graph data from CPU to GPU memory
* nx-cugraph (preconvert) - GPU-accelerated NetworkX using nx-cugraph with the graph data pre-converted/transferred to GPU
`nx-cugraph` is a [NetworkX backend](<https://networkx.org/documentation/stable/reference/utils.html#backends>) that accelerates many popular NetworkX functions using cuGraph and NVIDIA GPUs.
Users simply [install and enable nx-cugraph](installation.md) to experience GPU speedups.

Let's look at some examples of algorithm speedups comparing CPU-based NetworkX to dispatched versions run on GPUs with nx-cugraph.

![Ancestors](../images/ancestors.png)
![BFS Tree](../images/bfs_tree.png)
@@ -22,46 +14,3 @@ Each chart has three measurements.
![Pagerank](../images/pagerank.png)
![Single Source Shortest Path](../images/sssp.png)
![Weakly Connected Components](../images/wcc.png)

### Command line example
Open bc_demo.ipy and paste the code below.

```
import pandas as pd
import networkx as nx

url = "https://data.rapids.ai/cugraph/datasets/cit-Patents.csv"
df = pd.read_csv(url, sep=" ", names=["src", "dst"], dtype="int32")
G = nx.from_pandas_edgelist(df, source="src", target="dst")

%time result = nx.betweenness_centrality(G, k=10)
```
Run the command:
```
user@machine:/# ipython bc_demo.ipy
```

You will observe a run time of approximately 7 minutes, more or less depending on your CPU.

Run the command again, this time specifying cugraph as the NetworkX backend.
```
user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
```
This run will be much faster, typically around 20 seconds depending on your GPU.
There is also an option to cache the graph conversion to GPU. This can dramatically improve performance when running multiple algorithms on the same graph.
```
NETWORKX_BACKEND_PRIORITY=cugraph NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
```

When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.

For example:
```
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```


The latest list of algorithms supported by nx-cugraph can be found [here](https://github.com/rapidsai/cugraph/blob/main/python/nx-cugraph/README.md#algorithms).