[Bugfix][NCCL] Release NCCL thread_local resources in destructor #17078
Prior to this commit, allocations performed by `ncclCommInitRank` had no corresponding call to `ncclCommDestroy`. While `ncclCommDestroy` does occur in the `CCLThreadLocalContext::Clear` method, there are no calls into this method.

On worker processes, the failure to call `ncclCommDestroy` typically had little effect. Any destruction would occur shortly before the process closes, so the resources would be reclaimed by the OS when the process terminates. However, worker0 of a Disco session is a separate thread, rather than a separate process. While this allows it to easily receive data from the controller thread, resources allocated by worker0 are not reclaimed by the OS until the entire process terminates. As a result, the `CCLThreadLocalContext` leaked GPU memory, because the `ncclCommInitRank` call at the start of each `tvm.runtime.disco.ProcessSession` was never de-allocated. The increase in GPU memory usage was about one gigabyte for each `ProcessSession`.

This commit updates `CCLThreadLocalContext` to have a destructor that calls the `Clear` method. For worker0, this is called when the thread is joined to the main thread.
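For illustration, below is a minimal sketch of the pattern this fix follows: a thread-local context whose destructor releases the NCCL communicator. The member names, the `Init` helper, and the `Get` accessor are simplified assumptions rather than the actual TVM code; only the NCCL calls (`ncclCommInitRank`, `ncclCommDestroy`) match the real API.

```cpp
// Sketch only: simplified stand-in for CCLThreadLocalContext, not the
// actual TVM implementation.
#include <nccl.h>

struct CCLThreadLocalContext {
  ncclComm_t comm = nullptr;

  void Init(int nranks, ncclUniqueId id, int rank) {
    // Allocates GPU-side resources that must later be released.
    ncclCommInitRank(&comm, nranks, id, rank);
  }

  void Clear() {
    // Releases the communicator and its GPU memory.
    if (comm != nullptr) {
      ncclCommDestroy(comm);
      comm = nullptr;
    }
  }

  // Before this change, nothing invoked Clear(); adding a destructor
  // guarantees cleanup when the thread-local object is destroyed,
  // e.g. when worker0's thread is joined to the main thread.
  ~CCLThreadLocalContext() { Clear(); }

  static CCLThreadLocalContext* Get() {
    thread_local CCLThreadLocalContext ctx;
    return &ctx;
  }
};
```

Because the object is `thread_local`, each worker thread gets its own context, and the destructor runs automatically at thread exit, so the communicator no longer outlives the session that created it.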