Memory pool for all GPU devices on the host.
A memory pool preserves any allocations even if they are freed by the user. Freed memory buffers are held by the memory pool as free blocks, and they are reused for further memory allocations of the same sizes. The allocated blocks are managed for each device, so one instance of this class can be used for multiple devices.
When an allocation is satisfied by reusing a pre-allocated block, cudaMalloc is not called, so no CPU-GPU synchronization occurs. This makes interleaved sequences of memory allocations and kernel invocations very fast.
The memory pool holds the allocated blocks without freeing them as much as possible. As a result, the program may hold most of the device memory, which can put other CUDA programs running in parallel into an out-of-memory situation.
Parameters: allocator (function) – The base CuPy memory allocator. It is used for allocating new blocks when the blocks of the required size are all in use.
free_all_blocks(self, stream=None)¶
Releases free blocks.
Parameters: stream (cupy.cuda.Stream) – Release free blocks in the arena of the given stream. The default releases blocks in all arenas.
free_bytes(self) → size_t¶
Gets the total number of bytes acquired but not used in the pool.
Returns: The total number of bytes acquired but not used in the pool. Return type: int
get_limit(self) → size_t¶
Gets the upper limit of memory allocation of the current device.
Returns: The number of bytes. Return type: int
malloc(self, size_t size) → MemoryPointer¶
Allocates the memory, from the pool if possible.
This method can be used as a CuPy memory allocator. The simplest way to use a memory pool as the default allocator is the following code:
Also, the way to use a memory pool of Managed memory (Unified memory) as the default allocator is the following code:
Parameters: size (int) – Size of the memory buffer to allocate in bytes. Returns: Pointer to the allocated buffer. Return type: MemoryPointer
n_free_blocks(self) → size_t¶
Counts the total number of free blocks.
Returns: The total number of free blocks. Return type: int
set_limit(self, size=None, fraction=None)¶
Sets the upper limit of memory allocation of the current device.
When fraction is specified, its value becomes the fraction of the amount of GPU memory that is available for allocation. For example, if you have a GPU with 2 GiB memory, you can either use set_limit(fraction=0.5) or set_limit(size=1024**3) to limit the memory size to 1 GiB.
size and fraction cannot be specified at the same time. If neither of them is specified, or 0 is specified, the limit will be disabled.
You can also set the limit with the CUPY_GPU_MEMORY_LIMIT environment variable. See Environment variables for the details. The limit set by this method supersedes the value specified in the environment variable.
Also note that this method only changes the limit for the current device, whereas the environment variable sets the default limit for all devices.
total_bytes(self) → size_t¶
Gets the total number of bytes acquired in the pool.
Returns: The total number of bytes acquired in the pool. Return type: int