Clustering
C++20 header-only: DBSCAN, HDBSCAN, k-means.
clustering::math::Pool Struct Reference

Thin injection wrapper around a BS::light_thread_pool. More...

#include <clustering/math/thread.h>

Public Member Functions

std::size_t workerCount () const noexcept
 Number of worker threads available, or 1 in serial mode.
bool shouldParallelize (std::size_t totalWork, std::size_t minChunk, std::size_t minTasksPerWorker=2) const noexcept
 Decide whether totalWork warrants parallel dispatch.
bool shouldParallelizeWork (std::size_t totalOps, std::size_t minOpsPerWorker=std::size_t{1}<< 15) const noexcept
 Decide whether totalOps warrants parallel dispatch, based on work volume.

Static Public Member Functions

static std::size_t workerIndex () noexcept
 Stable index of the calling worker thread within the owning pool.

Public Attributes

BS::light_thread_pool * pool = nullptr
 Underlying pool, or nullptr to force serial execution.

Detailed Description

Thin injection wrapper around a BS::light_thread_pool.

Carries an optional pool pointer plus the helpers math kernels need to decide whether a given workload is worth fanning out. A null pool is the explicit serial mode: shouldParallelize then always reports false, and the kernel runs on the calling thread without touching any pool machinery.

Note
workerIndex returns 0 outside any pool task. Callers that rely on per-worker scratch isolation must either invoke it from inside a pool task body or pass Pool{nullptr} deliberately.

Definition at line 63 of file thread.h.

Member Function Documentation

◆ shouldParallelize()

bool clustering::math::Pool::shouldParallelize ( std::size_t totalWork,
std::size_t minChunk,
std::size_t minTasksPerWorker = 2 ) const
inline  nodiscard  noexcept

Decide whether totalWork warrants parallel dispatch.

Returns true only when a pool is attached and the work splits into at least workerCount() * minTasksPerWorker chunks of size minChunk. Guards against minChunk == 0 by reporting false rather than dividing by zero.

Parameters
totalWork  Total number of work units (e.g. matrix elements, rows).
minChunk  Minimum chunk size that amortizes per-task overhead.
minTasksPerWorker  Minimum chunks per worker required to bother fanning out.
Returns
true when parallel dispatch should yield speedup, false otherwise.

Definition at line 98 of file thread.h.

◆ shouldParallelizeWork()

bool clustering::math::Pool::shouldParallelizeWork ( std::size_t totalOps,
std::size_t minOpsPerWorker = std::size_t{1} << 15 ) const
inline  nodiscard  noexcept

Decide whether totalOps warrants parallel dispatch, based on work volume.

Complements shouldParallelize by gating on total arithmetic work rather than task count. At very low per-unit cost (e.g. distance kernels at d=2) the chunk-count gate can pass while the per-worker workload is dwarfed by dispatch overhead; this check prevents fan-out when the per-worker op budget would not amortize the pool submit/wait syscalls.

Parameters
totalOps  Approximate total arithmetic operation count across all workers.
minOpsPerWorker  Minimum per-worker op budget that amortizes dispatch overhead.
Returns
true when fan-out pays, false otherwise.

Definition at line 118 of file thread.h.

◆ workerCount()

std::size_t clustering::math::Pool::workerCount ( ) const
inline  nodiscard  noexcept

Number of worker threads available, or 1 in serial mode.

Returns
pool->get_thread_count() when a pool is attached, otherwise 1.

Definition at line 72 of file thread.h.

◆ workerIndex()

std::size_t clustering::math::Pool::workerIndex ( )
inline  static  nodiscard  noexcept

Stable index of the calling worker thread within the owning pool.

Returns
The worker id reported by BS::this_thread::get_index() when invoked from a pool task body, otherwise 0.

Definition at line 82 of file thread.h.

Member Data Documentation

◆ pool

BS::light_thread_pool* clustering::math::Pool::pool = nullptr

Underlying pool, or nullptr to force serial execution.

Definition at line 65 of file thread.h.


The documentation for this struct was generated from the following file:

clustering/math/thread.h