Improve docs formatting and update links. (#1086)
This PR fixes a variety of small issues in the docs, including misaligned headers that caused poor rendering, outdated links, and inconsistent use of code formatting. I am hoping to make a few more passes through the docs to make them more useful and to move from the deprecated `recommonmark` to MyST, as was recently done in cuDF. This PR is a starting point that is probably worth merging on its own.

Authors:
  - Bradley Dice (https://github.com/bdice)

Approvers:
  - Mark Harris (https://github.com/harrism)

URL: #1086
bdice authored Aug 15, 2022
1 parent 03013f3 commit adcfb93
Showing 4 changed files with 26 additions and 24 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -708,7 +708,7 @@ See [here](#memoryresource-objects) for more information on changing the current
### Using RMM with Numba

You can configure Numba to use RMM for memory allocations using the
- Numba [EMM Plugin](http://numba.pydata.org/numba-doc/latest/cuda/external-memory.html#setting-the-emm-plugin).
+ Numba [EMM Plugin](https://numba.readthedocs.io/en/stable/cuda/external-memory.html#setting-emm-plugin).

This can be done in two ways:

2 changes: 1 addition & 1 deletion python/docs/basics.md
@@ -146,7 +146,7 @@ allocations by setting the CuPy CUDA allocator to
### Using RMM with Numba

You can configure Numba to use RMM for memory allocations using the
- Numba [EMM Plugin](http://numba.pydata.org/numba-doc/latest/cuda/external-memory.html#setting-the-emm-plugin).
+ Numba [EMM Plugin](https://numba.readthedocs.io/en/stable/cuda/external-memory.html#setting-emm-plugin).

This can be done in two ways:

16 changes: 10 additions & 6 deletions python/rmm/_lib/memory_resource.pyx
@@ -236,19 +236,20 @@ cdef class CudaMemoryResource(DeviceMemoryResource):

def __init__(self):
"""
- Memory resource that uses cudaMalloc/Free for allocation/deallocation
+ Memory resource that uses ``cudaMalloc``/``cudaFree`` for
+ allocation/deallocation.
"""
pass


cdef class CudaAsyncMemoryResource(DeviceMemoryResource):
"""
- Memory resource that uses cudaMallocAsync/Free for
+ Memory resource that uses ``cudaMallocAsync``/``cudaFreeAsync`` for
allocation/deallocation.
Parameters
----------
- initial_pool_size : int,optional
+ initial_pool_size : int, optional
Initial pool size in bytes. By default, half the available memory
on the device is used.
release_threshold: int, optional
@@ -312,7 +313,7 @@ cdef class ManagedMemoryResource(DeviceMemoryResource):

def __init__(self):
"""
- Memory resource that uses cudaMallocManaged/Free for
+ Memory resource that uses ``cudaMallocManaged``/``cudaFree`` for
allocation/deallocation.
"""
pass
@@ -361,7 +362,7 @@ cdef class PoolMemoryResource(UpstreamResourceAdaptor):
upstream_mr : DeviceMemoryResource
The DeviceMemoryResource from which to allocate blocks for the
pool.
- initial_pool_size : int,optional
+ initial_pool_size : int, optional
Initial pool size in bytes. By default, half the available memory
on the device is used.
maximum_pool_size : int, optional
@@ -551,7 +552,7 @@ cdef class CallbackMemoryResource(DeviceMemoryResource):
integer representing the number of bytes to free.
Examples
- -------
+ --------
>>> import rmm
>>> base_mr = rmm.mr.CudaMemoryResource()
>>> def allocate_func(size):
@@ -695,6 +696,9 @@ cdef class StatisticsResourceAdaptor(UpstreamResourceAdaptor):
Gets the current, peak, and total allocated bytes and number of
allocations.

+ The dictionary keys are ``current_bytes``, ``current_count``,
+ ``peak_bytes``, ``peak_count``, ``total_bytes``, and ``total_count``.

Returns:
dict: Dictionary containing allocation counts and bytes.
"""
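The semantics of the six documented keys can be illustrated with a pure-Python toy tracker (this is not RMM API — the real adaptor records allocations flowing through an upstream device memory resource; only the bookkeeping rules are sketched here):

```python
# Toy sketch of the statistics reported by
# StatisticsResourceAdaptor.allocation_counts, using the documented keys.

class AllocationStats:
    def __init__(self):
        self.counts = {
            "current_bytes": 0, "current_count": 0,
            "peak_bytes": 0, "peak_count": 0,
            "total_bytes": 0, "total_count": 0,
        }

    def allocate(self, nbytes):
        c = self.counts
        c["current_bytes"] += nbytes   # outstanding bytes right now
        c["current_count"] += 1        # outstanding allocations right now
        c["total_bytes"] += nbytes     # cumulative bytes ever allocated
        c["total_count"] += 1          # cumulative allocation count
        c["peak_bytes"] = max(c["peak_bytes"], c["current_bytes"])
        c["peak_count"] = max(c["peak_count"], c["current_count"])

    def deallocate(self, nbytes):
        c = self.counts
        c["current_bytes"] -= nbytes   # peaks and totals never decrease
        c["current_count"] -= 1

stats = AllocationStats()
stats.allocate(1024)
stats.allocate(2048)
stats.deallocate(1024)
print(stats.counts["current_bytes"], stats.counts["peak_bytes"])  # 2048 3072
```

Note that only the `current_*` values decrease on deallocation; `peak_*` and `total_*` are monotonic, which is what makes them useful for sizing pools after a run.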
30 changes: 14 additions & 16 deletions python/rmm/rmm.py
@@ -66,24 +66,22 @@ def reinitialize(
logging : bool, default False
If True, enable run-time logging of all memory events
(alloc, free, realloc).
- This has significant performance impact.
+ This has a significant performance impact.
log_file_name : str
Name of the log file. If not specified, the environment variable
- RMM_LOG_FILE is used. A ValueError is thrown if neither is available.
- A separate log file is produced for each device,
- and the suffix `".dev{id}"` is automatically added to the log file
- name.
+ ``RMM_LOG_FILE`` is used. A ``ValueError`` is thrown if neither is
+ available. A separate log file is produced for each device, and the
+ suffix `".dev{id}"` is automatically added to the log file name.
Notes
-----
- Note that if you use the environment variable CUDA_VISIBLE_DEVICES
- with logging enabled, the suffix may not be what you expect. For
- example, if you set CUDA_VISIBLE_DEVICES=1, the log file produced
- will still have suffix `0`. Similarly, if you set
- CUDA_VISIBLE_DEVICES=1,0 and use devices 0 and 1, the log file
- with suffix `0` will correspond to the GPU with device ID `1`.
- Use `rmm.get_log_filenames()` to get the log file names
- corresponding to each device.
+ Note that if you use the environment variable ``CUDA_VISIBLE_DEVICES`` with
+ logging enabled, the suffix may not be what you expect. For example, if you
+ set ``CUDA_VISIBLE_DEVICES=1``, the log file produced will still have
+ suffix ``0``. Similarly, if you set ``CUDA_VISIBLE_DEVICES=1,0`` and use
+ devices 0 and 1, the log file with suffix ``0`` will correspond to the GPU
+ with device ID ``1``. Use `rmm.get_log_filenames()` to get the log file
+ names corresponding to each device.
"""
for func, args, kwargs in reversed(_reinitialize_hooks):
func(*args, **kwargs)
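The naming rule in that docstring — a `".dev{id}"` suffix per visible device, counted from 0 regardless of `CUDA_VISIBLE_DEVICES` remapping — can be sketched with a hypothetical helper (this is illustration only, not the naming code RMM itself uses; check `rmm.get_log_filenames()` for the real names):

```python
# Hypothetical helper illustrating the per-device log-file naming rule
# described in the reinitialize() docstring. Suffixes count over the
# *visible* devices from 0, which is why CUDA_VISIBLE_DEVICES=1 still
# produces a ".dev0" log file.

def per_device_log_names(log_file_name, num_visible_devices):
    return [f"{log_file_name}.dev{i}" for i in range(num_visible_devices)]

print(per_device_log_names("rmm_log.txt", 2))
# ['rmm_log.txt.dev0', 'rmm_log.txt.dev1']
```

With `CUDA_VISIBLE_DEVICES=1,0`, the `.dev0` file above would correspond to physical GPU 1, matching the caveat in the docstring.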
@@ -101,7 +99,7 @@ def reinitialize(

def is_initialized():
"""
- Returns true if RMM has been initialized, false otherwise
+ Returns True if RMM has been initialized, False otherwise.
"""
return rmm.mr.is_initialized()

@@ -111,7 +109,7 @@ class RMMNumbaManager(HostOnlyCUDAMemoryManager):
External Memory Management Plugin implementation for Numba. Provides
on-device allocation only.
- See http://numba.pydata.org/numba-doc/latest/cuda/external-memory.html for
+ See https://numba.readthedocs.io/en/stable/cuda/external-memory.html for
details of the interface being implemented here.
"""

@@ -206,7 +204,7 @@ def finalizer():

# Enables the use of RMM for Numba via an environment variable setting,
# NUMBA_CUDA_MEMORY_MANAGER=rmm. See:
- # http://numba.pydata.org/numba-doc/latest/cuda/external-memory.html#environment-variable
+ # https://numba.readthedocs.io/en/stable/cuda/external-memory.html#environment-variable
_numba_memory_manager = RMMNumbaManager

try:
