Merge pull request #2190 from rlratzel/branch-22.06-merge-22.04
Branch 22.06 merge 22.04
sevagh authored Apr 5, 2022
2 parents 017baab + b259a87 commit 3fd4f41
Showing 60 changed files with 3,372 additions and 544 deletions.
47 changes: 33 additions & 14 deletions README.md
@@ -39,6 +39,23 @@ There are 3 ways to get cuGraph :
<br/><br/>

---
# cuGraph News

### Scaling to 1 Trillion Edges
cuGraph was recently tested on the Selene supercomputer, using 2,048 GPUs to process a graph with `1.1 trillion edges`.

<div align="left"><img src="img/Scaling.png" width="500px" style="background-color: white;"/>&nbsp;<br/>cuGraph Scaling</div>
<br/><br/>

### cuGraph Software Stack
cuGraph has a new multi-layer software stack that allows users and system integrators to access cuGraph at different layers.

<div align="left"><img src="img/cugraph-stack.png" width="500px" style="background-color: white;"/>&nbsp;<br/>cuGraph Software Stack</div>
<br/><br/>




# Currently Supported Features
As of Release 21.08 - including 21.08 nightly

@@ -50,24 +67,24 @@ _Italic_ algorithms are planned for future releases.
| ------------ | -------------------------------------- | ------------ | ------------------- |
| Centrality | | | |
| | Katz | Multi-GPU | |
-| | Betweenness Centrality | Single-GPU | |
+| | Betweenness Centrality | Single-GPU | MG planned for 22.08 |
| | Edge Betweenness Centrality | Single-GPU | |
| | _Eigenvector Centrality_ | | _MG planned for 22.06_ |
| Community | | | |
| | EgoNet | Single-GPU | |
| | Leiden | Single-GPU | |
| | Louvain | Multi-GPU | [C++ README](cpp/src/community/README.md#Louvain) |
| | Ensemble Clustering for Graphs | Single-GPU | |
| | Spectral-Clustering - Balanced Cut | Single-GPU | |
| | Spectral-Clustering - Modularity | Single-GPU | |
| | Subgraph Extraction | Single-GPU | |
-| | Triangle Counting | Single-GPU | |
-| | K-Truss | Single-GPU | |
+| | Triangle Counting | Single-GPU | MG planned for 22.06 |
+| | K-Truss | Single-GPU | MG planned for 22.10 |
| Components | | | |
| | Weakly Connected Components | Multi-GPU | |
-| | Strongly Connected Components | Single-GPU | |
+| | Strongly Connected Components | Single-GPU | MG planned for 22.06 |
| Core | | | |
-| | K-Core | Single-GPU | |
-| | Core Number | Single-GPU | |
+| | K-Core | Single-GPU | MG planned for 22.10 |
+| | Core Number | Single-GPU | MG planned for 22.08 |
| _Flow_ | | | |
| | _MaxFlow_ | --- | |
| _Influence_ | | | |
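The centrality rows above can be made concrete with a CPU-only sketch of Katz centrality; the `katz_centrality` helper, `adj` graph format, and parameter names below are illustrative, not cuGraph's API.

```python
# Minimal CPU sketch of Katz centrality from the table above; cuGraph's
# multi-GPU implementation is far more involved. Names are illustrative.

def katz_centrality(adj, alpha=0.1, beta=1.0, max_iter=100, tol=1e-6):
    """adj maps each vertex to its list of in-neighbors."""
    c = {v: beta for v in adj}
    for _ in range(max_iter):
        new_c = {v: beta + alpha * sum(c[u] for u in adj[v]) for v in adj}
        if max(abs(new_c[v] - c[v]) for v in adj) < tol:
            return new_c
        c = new_c
    return c

# Triangle graph: each vertex has the other two as in-neighbors, so by
# symmetry every score converges to beta / (1 - 2 * alpha) = 1.25.
scores = katz_centrality({0: [1, 2], 1: [0, 2], 2: [0, 1]})
```

The iteration repeatedly applies `c = beta + alpha * A^T c` until the scores stop moving, which is exactly the fixed point Katz centrality is defined by.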
@@ -79,7 +96,7 @@ _Italic_ algorithms are planned for future releases.
| Link Analysis| | | |
| | Pagerank | Multi-GPU | [C++ README](cpp/src/centrality/README.md#Pagerank) |
| | Personal Pagerank | Multi-GPU | [C++ README](cpp/src/centrality/README.md#Personalized-Pagerank) |
-| | HITS | Single-GPU | Multi-GPU C code is ready, Python wrapper in 22.04 |
+| | HITS | Multi-GPU | |
| Link Prediction | | | |
| | Jaccard Similarity | Single-GPU | |
| | Weighted Jaccard Similarity | Single-GPU | |
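The Link Prediction rows can be illustrated with a CPU-only sketch of unweighted Jaccard similarity, where the score of a vertex pair is |N(u) ∩ N(v)| / |N(u) ∪ N(v)|; the names below are illustrative, not cuGraph's API.

```python
# CPU sketch of unweighted Jaccard similarity from the table above.
# `jaccard` and `adj` are illustrative names, not cuGraph's API.

def jaccard(adj, u, v):
    nu, nv = set(adj[u]), set(adj[v])
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
score = jaccard(adj, 1, 3)  # N(1) = N(3) = {0, 2}, so the score is 1.0
```

Weighted Jaccard replaces the set sizes with sums of edge weights over the intersection and union.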
@@ -89,10 +106,12 @@ _Italic_ algorithms are planned for future releases.
| Sampling | | | |
| | Random Walks (RW) | Single-GPU | Biased and Uniform |
| | Egonet | Single-GPU | multi-seed |
-| | _node2vec_ | --- | C code is ready, Python wrapper coming in 22.04 |
+| | Node2Vec | Single-GPU | |
| | Neighborhood sampling | Multi-GPU | |
| Traversal | | | |
| | Breadth First Search (BFS) | Multi-GPU | with cutoff support <br/> [C++ README](cpp/src/traversal/README.md#BFS) |
| | Single Source Shortest Path (SSSP) | Multi-GPU | [C++ README](cpp/src/traversal/README.md#SSSP) |
| | _ASSP / APSP_ | | |
| Tree | | | |
| | Minimum Spanning Tree | Single-GPU | |
| | Maximum Spanning Tree | Single-GPU | |
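The Traversal rows, including the "with cutoff support" note on BFS, can be sketched on the CPU; `bfs_depths` and the toy `adj` graph below are illustrative, not cuGraph's API.

```python
# CPU sketch of breadth-first search with a depth cutoff, mirroring the
# "with cutoff support" note on the BFS row above. Illustrative only;
# cuGraph's BFS runs on one or many GPUs.
from collections import deque

def bfs_depths(adj, source, cutoff=None):
    depth = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if cutoff is not None and depth[u] >= cutoff:
            continue  # do not expand vertices already at the cutoff depth
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    return depth

adj = {0: [1], 1: [2], 2: [3], 3: []}
depths = bfs_depths(adj, 0, cutoff=2)  # vertex 3 lies beyond the cutoff
```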
@@ -164,20 +183,20 @@ Install and update cuGraph using the conda command:

```bash

# CUDA 11.0
conda install -c nvidia -c rapidsai -c numba -c conda-forge cugraph cudatoolkit=11.0

# CUDA 11.2
conda install -c nvidia -c rapidsai -c numba -c conda-forge cugraph cudatoolkit=11.2

# CUDA 11.4
conda install -c nvidia -c rapidsai -c numba -c conda-forge cugraph cudatoolkit=11.4

# CUDA 11.5
conda install -c nvidia -c rapidsai -c numba -c conda-forge cugraph cudatoolkit=11.5

# For CUDA > 11.5, please use the 11.5 environment
```

-Note: This conda installation only applies to Linux and Python versions 3.7/3.8.
+Note: This conda installation only applies to Linux and Python versions 3.8/3.9.


## Build from Source and Contributing <a name="source"></a>
6 changes: 3 additions & 3 deletions ci/test.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright (c) 2019-2021, NVIDIA CORPORATION.
+# Copyright (c) 2019-2022, NVIDIA CORPORATION.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
@@ -96,9 +96,9 @@ cd ${CUGRAPH_ROOT}/python/pylibcugraph/pylibcugraph
pytest --cache-clear --junitxml=${CUGRAPH_ROOT}/junit-pylibcugraph-pytests.xml -v --cov-config=.coveragerc --cov=pylibcugraph --cov-report=xml:${WORKSPACE}/python/pylibcugraph/pylibcugraph-coverage.xml --cov-report term --ignore=raft --benchmark-disable
echo "Ran Python pytest for pylibcugraph : return code was: $?, test script exit code is now: $EXITCODE"

-echo "Python pytest for cuGraph..."
+echo "Python pytest for cuGraph (single-GPU only)..."
cd ${CUGRAPH_ROOT}/python/cugraph/cugraph
-pytest --cache-clear --junitxml=${CUGRAPH_ROOT}/junit-cugraph-pytests.xml -v --cov-config=.coveragerc --cov=cugraph --cov-report=xml:${WORKSPACE}/python/cugraph/cugraph-coverage.xml --cov-report term --ignore=raft --benchmark-disable
+pytest --cache-clear --junitxml=${CUGRAPH_ROOT}/junit-cugraph-pytests.xml -v --cov-config=.coveragerc --cov=cugraph --cov-report=xml:${WORKSPACE}/python/cugraph/cugraph-coverage.xml --cov-report term --ignore=raft --ignore=tests/dask --benchmark-disable
echo "Ran Python pytest for cugraph : return code was: $?, test script exit code is now: $EXITCODE"

echo "Python benchmarks for cuGraph (running as tests)..."
49 changes: 0 additions & 49 deletions conda/environments/cugraph_dev_cuda11.0.yml

This file was deleted.

2 changes: 1 addition & 1 deletion conda/environments/cugraph_dev_cuda11.2.yml
@@ -25,7 +25,7 @@ dependencies:
- networkx>=2.5.1
- clang=11.1.0
- clang-tools=11.1.0
-- cmake>=3.20.1
+- cmake>=3.20.1,<3.23
- python>=3.6,<3.9
- notebook>=0.5.0
- boost
2 changes: 1 addition & 1 deletion conda/environments/cugraph_dev_cuda11.4.yml
@@ -25,7 +25,7 @@ dependencies:
- networkx>=2.5.1
- clang=11.1.0
- clang-tools=11.1.0
-- cmake>=3.20.1
+- cmake>=3.20.1,<3.23
- python>=3.6,<3.9
- notebook>=0.5.0
- boost
2 changes: 1 addition & 1 deletion conda/environments/cugraph_dev_cuda11.5.yml
@@ -25,7 +25,7 @@ dependencies:
- networkx>=2.5.1
- clang=11.1.0
- clang-tools=11.1.0
-- cmake>=3.20.1
+- cmake>=3.20.1,<3.23
- python>=3.6,<3.9
- notebook>=0.5.0
- boost
2 changes: 1 addition & 1 deletion conda/recipes/libcugraph/meta.yaml
@@ -34,7 +34,7 @@ build:

requirements:
build:
-- cmake>=3.20.1
+- cmake>=3.20.1,<3.23
- doxygen>=1.8.11
- cudatoolkit {{ cuda_version }}.*
- libraft-headers {{ minor_version }}
2 changes: 1 addition & 1 deletion conda/recipes/libcugraph_etl/meta.yaml
@@ -34,7 +34,7 @@ build:

requirements:
build:
-- cmake>=3.20.1
+- cmake>=3.20.1,<3.23
- doxygen>=1.8.11
- cudatoolkit {{ cuda_version }}.*
- libcudf {{ minor_version }}.*
21 changes: 11 additions & 10 deletions cpp/include/cugraph_c/algorithms.h
@@ -514,20 +514,21 @@ typedef struct {
* replacement. If false selection is done without replacement.
* @param [in] do_expensive_check
* A flag to run expensive checks for input arguments (if set to true)
-* @param [in] result Output from the uniform_nbr_sample call
+* @param [in] result Output from the uniform_neighbor_sample call
* @param [out] error Pointer to an error object storing details of any error. Will
* be populated if error code is not CUGRAPH_SUCCESS
* @return error code
*/
-cugraph_error_code_t uniform_nbr_sample(const cugraph_resource_handle_t* handle,
-cugraph_graph_t* graph,
-const cugraph_type_erased_device_array_view_t* start,
-const cugraph_type_erased_device_array_view_t* start_label,
-const cugraph_type_erased_host_array_view_t* fan_out,
-bool_t without_replacement,
-bool_t do_expensive_check,
-cugraph_sample_result_t** result,
-cugraph_error_t** error);
+cugraph_error_code_t cugraph_uniform_neighbor_sample(
+const cugraph_resource_handle_t* handle,
+cugraph_graph_t* graph,
+const cugraph_type_erased_device_array_view_t* start,
+const cugraph_type_erased_device_array_view_t* start_label,
+const cugraph_type_erased_host_array_view_t* fan_out,
+bool_t with_replacement,
+bool_t do_expensive_check,
+cugraph_sample_result_t** result,
+cugraph_error_t** error);
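As a hedged illustration of the semantics (not the C API's device-array mechanics): at each hop, `fan_out[hop]` neighbors of every frontier vertex are drawn uniformly, with or without replacement. All names in the sketch below are illustrative.

```python
# Pure-Python sketch of what cugraph_uniform_neighbor_sample computes.
# The real function operates on type-erased device arrays via the
# declaration above; this only mirrors the sampling semantics.
import random

def uniform_neighbor_sample(adj, starts, fan_out, with_replacement=False, seed=0):
    rng = random.Random(seed)
    frontier, edges = list(starts), []
    for k in fan_out:
        next_frontier = []
        for u in frontier:
            nbrs = adj[u]
            if not nbrs:
                continue
            if with_replacement:
                picks = [rng.choice(nbrs) for _ in range(k)]
            else:
                picks = rng.sample(nbrs, min(k, len(nbrs)))
            edges.extend((u, v) for v in picks)
            next_frontier.extend(picks)
        frontier = next_frontier
    return edges

adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
edges = uniform_neighbor_sample(adj, [0], fan_out=[2, 1])
```

With `fan_out=[2, 1]`, the seed vertex contributes two first-hop edges, and each sampled neighbor contributes one second-hop edge back toward its own neighbors.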

/**
* @brief Get the source vertices from the sampling algorithm result
16 changes: 16 additions & 0 deletions cpp/include/cugraph_c/array.h
@@ -223,6 +223,22 @@ data_type_id_t cugraph_type_erased_host_array_type(const cugraph_type_erased_hos
*/
void* cugraph_type_erased_host_array_pointer(const cugraph_type_erased_host_array_view_t* p);

/**
* @brief Copy data between two type erased host array views
*
* @param [in] handle Handle for accessing resources
* @param [out] dst Pointer to type erased host array view destination
* @param [in] src Pointer to type erased host array view source
* @param [out] error Pointer to an error object storing details of any error. Will
* be populated if error code is not CUGRAPH_SUCCESS
* @return error code
*/
cugraph_error_code_t cugraph_type_erased_host_array_view_copy(
const cugraph_resource_handle_t* handle,
cugraph_type_erased_host_array_view_t* dst,
const cugraph_type_erased_host_array_view_t* src,
cugraph_error_t** error);

/**
* @brief Copy data from host to a type erased device array view
*
44 changes: 41 additions & 3 deletions cpp/src/c_api/array.cpp
Expand Up @@ -150,8 +150,7 @@ extern "C" cugraph_error_code_t cugraph_type_erased_host_array_create(
size_t n_bytes = n_elems * (::data_type_sz[dtype]);

*array = reinterpret_cast<cugraph_type_erased_host_array_t*>(
-new cugraph::c_api::cugraph_type_erased_host_array_t{
-std::make_unique<std::byte[]>(n_bytes), n_elems, n_bytes, dtype});
+new cugraph::c_api::cugraph_type_erased_host_array_t{n_elems, n_bytes, dtype});

return CUGRAPH_SUCCESS;
} catch (std::exception const& ex) {
@@ -223,6 +222,46 @@ extern "C" void* cugraph_type_erased_host_array_pointer(
return internal_pointer->data_;
}

extern "C" cugraph_error_code_t cugraph_type_erased_host_array_view_copy(
const cugraph_resource_handle_t* handle,
cugraph_type_erased_host_array_view_t* dst,
const cugraph_type_erased_host_array_view_t* src,
cugraph_error_t** error)
{
*error = nullptr;

try {
auto p_handle = reinterpret_cast<cugraph::c_api::cugraph_resource_handle_t const*>(handle);
auto internal_pointer_dst =
reinterpret_cast<cugraph::c_api::cugraph_type_erased_host_array_view_t*>(dst);
auto internal_pointer_src =
reinterpret_cast<cugraph::c_api::cugraph_type_erased_host_array_view_t const*>(src);

if (!handle) {
*error = reinterpret_cast<cugraph_error_t*>(
new cugraph::c_api::cugraph_error_t{"invalid resource handle"});
return CUGRAPH_INVALID_HANDLE;
}

if (internal_pointer_src->num_bytes() != internal_pointer_dst->num_bytes()) {
*error = reinterpret_cast<cugraph_error_t*>(
new cugraph::c_api::cugraph_error_t{"source and destination arrays are different sizes"});
return CUGRAPH_INVALID_INPUT;
}

raft::copy(reinterpret_cast<byte_t*>(internal_pointer_dst->data_),
reinterpret_cast<byte_t const*>(internal_pointer_src->data_),
internal_pointer_src->num_bytes(),
p_handle->handle_->get_stream());

return CUGRAPH_SUCCESS;
} catch (std::exception const& ex) {
auto tmp_error = new cugraph::c_api::cugraph_error_t{ex.what()};
*error = reinterpret_cast<cugraph_error_t*>(tmp_error);
return CUGRAPH_UNKNOWN_ERROR;
}
}
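The validation order in the new copy function — check the handle, then that the two views have equal byte sizes, then copy the raw bytes — can be sketched in Python; the `view_copy` name and string return codes below are stand-ins, not the C API.

```python
# Stand-in sketch of cugraph_type_erased_host_array_view_copy's checks:
# a missing handle yields CUGRAPH_INVALID_HANDLE, a byte-size mismatch
# yields CUGRAPH_INVALID_INPUT, otherwise the bytes are copied.

def view_copy(handle, dst, src):
    """dst: bytearray destination; src: bytes-like source."""
    if handle is None:
        return "CUGRAPH_INVALID_HANDLE", "invalid resource handle"
    if len(dst) != len(src):
        return "CUGRAPH_INVALID_INPUT", "source and destination arrays are different sizes"
    dst[:] = src
    return "CUGRAPH_SUCCESS", None

dst = bytearray(4)
code, err = view_copy(object(), dst, b"\x01\x02\x03\x04")
```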

extern "C" cugraph_error_code_t cugraph_type_erased_device_array_view_copy_from_host(
const cugraph_resource_handle_t* handle,
cugraph_type_erased_device_array_view_t* dst,
@@ -286,7 +325,6 @@ extern "C" cugraph_error_code_t cugraph_type_erased_device_array_view_copy_to_ho
return CUGRAPH_UNKNOWN_ERROR;
}
}

extern "C" cugraph_error_code_t cugraph_type_erased_device_array_view_copy(
const cugraph_resource_handle_t* handle,
cugraph_type_erased_device_array_view_t* dst,
29 changes: 25 additions & 4 deletions cpp/src/c_api/array.hpp
@@ -51,7 +51,6 @@ struct cugraph_type_erased_device_array_view_t {
struct cugraph_type_erased_device_array_t {
// NOTE: size must be first here because the device buffer is released
size_t size_;
-// Why doesn't rmm::device_buffer support release?
rmm::device_buffer data_;
data_type_id_t type_;

@@ -87,15 +86,37 @@ struct cugraph_type_erased_host_array_view_t {
return reinterpret_cast<T*>(data_);
}

template <typename T>
T const* as_type() const
{
return reinterpret_cast<T const*>(data_);
}

size_t num_bytes() const { return num_bytes_; }
};

struct cugraph_type_erased_host_array_t {
-std::unique_ptr<std::byte[]> data_;
-size_t size_;
-size_t num_bytes_;
+std::unique_ptr<std::byte[]> data_{nullptr};
+size_t size_{0};
+size_t num_bytes_{0};
data_type_id_t type_;

cugraph_type_erased_host_array_t(size_t size, size_t num_bytes, data_type_id_t type)
: data_(std::make_unique<std::byte[]>(num_bytes)),
size_(size),
num_bytes_(num_bytes),
type_(type)
{
}

template <typename T>
cugraph_type_erased_host_array_t(std::vector<T>& vec, data_type_id_t type)
: size_(vec.size()), num_bytes_(vec.size() * sizeof(T)), type_(type)
{
data_ = std::make_unique<std::byte[]>(num_bytes_);
std::copy(vec.begin(), vec.end(), reinterpret_cast<T*>(data_.get()));
}

auto view()
{
return new cugraph_type_erased_host_array_view_t{data_.get(), size_, num_bytes_, type_};
Expand Down