Pull request overview
This PR adds a new SYCL implementation of the onesweep radix sort kernel template, refactors shared ESIMD/SYCL infrastructure, and unifies the KT test harness to run against either backend.
Changes:
- Introduces SYCL onesweep radix sort implementation and integrates it into the kernel templates public header.
- Refactors ESIMD radix sort internals into shared dispatcher/submitter/kernel components with tag-based dispatch.
- Updates KT tests and CMake generation to build/run both ESIMD and SYCL variants via a unified test source set.
Reviewed changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| test/kt/single_pass_scan.cpp | Switches test include to unified radix sort KT test utilities header. |
| test/kt/radix_sort_utils.h | Adds backend namespace aliases and new backend-aware SLM sizing logic for test skipping. |
| test/kt/radix_sort_out_of_place.cpp | Updates tests to call backend-selected namespace (ESIMD vs SYCL) and removes local can_run_test. |
| test/kt/radix_sort_by_key_out_of_place.cpp | Updates by-key out-of-place tests to call backend-selected namespace. |
| test/kt/radix_sort_by_key.cpp | Updates by-key in-place tests to call backend-selected namespace. |
| test/kt/radix_sort.cpp | Updates key-only tests to backend-selected namespace and adds ESIMD-only deprecated-namespace coverage path. |
| test/kt/CMakeLists.txt | Generalizes sort test generation to support both ESIMD and SYCL variants with backend compile definitions. |
| include/oneapi/dpl/pstl/hetero/dpcpp/parallel_backend_sycl_radix_sort_one_wg.h | Extends subgroup radix sort to support explicit destination output range. |
| include/oneapi/dpl/pstl/hetero/dpcpp/parallel_backend_sycl_radix_sort.h | Updates callers to pass a destination range to one-work-group radix sort path. |
| include/oneapi/dpl/experimental/kt/sycl_radix_sort.h | Adds the public SYCL KT API surface for radix_sort and radix_sort_by_key (in-/out-of-place). |
| include/oneapi/dpl/experimental/kt/internal/sycl_radix_sort_kernels.h | Adds SYCL onesweep kernels (global histogram + onesweep reorder/lookback) implementation. |
| include/oneapi/dpl/experimental/kt/internal/sub_group/sub_group_scan.h | Updates sub-group scan helper to accept either lazy storage or plain types and changes backend include. |
| include/oneapi/dpl/experimental/kt/internal/radix_sort_utils.h | Adds shared SYCL scalar utilities, tags, and parameter validation used by both backends. |
| include/oneapi/dpl/experimental/kt/internal/radix_sort_submitters.h | Adds shared submitters/launch logic for ESIMD and SYCL kernels, including one-WG fallback. |
| include/oneapi/dpl/experimental/kt/internal/radix_sort_kernels.h | Adds shared forward declarations of kernel functors for tag-based compilation. |
| include/oneapi/dpl/experimental/kt/internal/radix_sort_dispatchers.h | Refactors dispatchers into shared namespace with tag dispatch for ESIMD vs SYCL. |
| include/oneapi/dpl/experimental/kt/internal/esimd_radix_sort_utils.h | Moves ESIMD internals into shared kt::gpu::__impl namespace for unified use. |
| include/oneapi/dpl/experimental/kt/internal/esimd_defs.h | Moves ESIMD constants into shared kt::gpu::__impl namespace for unified use. |
| include/oneapi/dpl/experimental/kt/internal/esimd_radix_sort_kernels.h | Converts ESIMD free functions into tagged kernel functors and unifies namespace/internals. |
| include/oneapi/dpl/experimental/kt/internal/esimd_radix_sort_submitters.h | Removes the now-replaced ESIMD-only submitters header (replaced by unified submitters). |
| include/oneapi/dpl/experimental/kt/esimd_radix_sort.h | Updates ESIMD public API to route through shared dispatchers with explicit __esimd_tag. |
| include/oneapi/dpl/experimental/kernel_templates | Exposes the new SYCL radix sort KT header in the kernel templates umbrella include. |
danhoeflinger left a comment:
For others: I suggest turning on "hide whitespace" when reviewing; the ESIMD changes render badly otherwise. They had to be massaged a bit to reuse the shared submitter infrastructure, but in reality have minimal changes.
Signed-off-by: Dan Hoeflinger <dan.hoeflinger@intel.com>

* Implements two component histogram: SLM -> global memory with duplicate SLM bins to reduce atomic contention
…ons, and fix slm test bug

Signed-off-by: Matthew Michel <matthew.michel@intel.com>
```cpp
__get_num_work_groups(const sycl::kernel& __kernel, sycl::queue& __q, std::uint32_t __tile_count,
                      std::uint32_t __slm_size_bytes) const
```
It seems like this function is a generic workaround utility that is not specific to the onesweep kernel. Do we want to put it somewhere more accessible, where other cooperative launch kernels could use it?
Also, am I correct that once the bug is fixed we would merely need the first statement of the function?
> Do we want to put it somewhere more accessible, where other cooperative launch kernels could use it?
It is a generic utility that can be used for any algorithm that needs cooperative kernels / work-group forward progress. The question from my side is where to put it. I think it should stay in the kt directory, probably in some generic header called kt_utils.h. Do you think something like this is worth adding now, or later once we have a secondary use case (e.g. inclusive_scan)?
> Also, am I correct that once the bug is fixed we would merely need the first statement of the function?
Yes, everything beyond that is manually calculating things the driver should be handling.
I think I'd be in favor of adding it, but it's your call. The risk of leaving it here is that a future implementer who may want it (or may not know they need it) wouldn't know it exists.
I decided to go ahead and add this new header and make it a free function
```cpp
assert(__slm_granularity_it != std::cend(__slm_granularity_table));
const std::uint32_t __true_slm_size_bytes = *__slm_granularity_it;
```
Nitpick perhaps, but can we just check this at runtime rather than having it be an assertion?
If we go over 128 KiB of SLM due to a user's data-per-work-item input, then this would be UB, dereferencing out-of-bounds memory.
I guess we should probably be protecting against this case at compile time using the kernel params (I don't think we currently do this).
We currently rely on SYCL throwing an exception if the KT params reserve more SLM than possible. We could:
1. Just cap `__slm_size_bytes` to 128 KiB in this query and let SYCL throw an exception later.
2. Document and throw our own exception for an invalid SLM reservation.

My preference for now is 1.
A static_assert is not the best solution for the SLM check, in my opinion, because future devices may have more SLM, which would force us to check at runtime anyway.
Yeah, I think 1 is fine. It's the responsibility of the user to ensure that their workload fits on the card. I think for ESIMD we document the memory requirements (at least roughly).
Done. It caps the provided value at 128 KiB.
```cpp
sycl::atomic_ref<_GlobOffsetT, sycl::memory_order::relaxed, sycl::memory_scope::device,
                 sycl::access::address_space::local_space>;
```
I think `sycl::memory_scope::work_group` fits better here than `device` for SLM memory.
```cpp
// When we reorder into SLM there are indexing offsets between bins due to contiguous storage
// that should not be reflected in global output, as any given bin's total global offset is
// defined in __slm_global_incoming. We account for this by subtracting each bin's incoming
// SLM index offset from __slm_global_incoming so that later adding the reordered key's SLM
// index to the fixed global offset yields the correct output index in the final stage.
//
// The sequence of computations for the fixed global offset is shown below, showing how we
// yield a valid output index in __reorder_slm_to_glob. For demonstration, slm_global_fix is
// separated from slm_global_incoming, which can actually be modified in-place.
// slm_global_fix[bin] = slm_global_incoming[bin] - slm_group_hist[bin]
// slm_idx[key] = slm_group_hist[bin] + key offset within bin
// out_idx[key] = slm_global_fix[bin] + slm_idx[key]
//              = slm_global_incoming[bin] - slm_group_hist[bin] + slm_group_hist[bin] + key offset within bin
//              = slm_global_incoming[bin] + key offset within bin
//
// The case where __slm_group_hist[__i] > __slm_global_incoming[__i] is valid, resulting in
// the difference yielding a large number due to the guaranteed wrap-around behavior of
// unsigned integers in the C++ spec. When this global fix is added to the reordered offset
// index the wrap-around is undone, yielding the valid output index shown above.
```
I was a bit confused by this explanation, so I did my best to write my own; take from it what you wish. Also, the line lengths of the current comments are very long, and our clang-format won't fix this for you, since it preserves the formatting of comment blocks (as it is sometimes intentional).
```cpp
// To avoid fully scattered global writes, we write data first grouped by bin to SLM,
// then write in a partially coalesced manner from SLM to global memory.
// When writing from SLM to global memory, we don't want to have to store or recalculate
// the global offset for each item; instead we can obtain it from its SLMIndex and
// a pre-calculated constant offset per bin.
// GlobalIndex = GlobalBaseOffset[bin] + LocalOffsetWithinBin
// SLMIndex = SLMBaseOffset[bin] + LocalOffsetWithinBin
// By isolating LocalOffsetWithinBin, we can express the GlobalIndex as:
// GlobalIndex = GlobalBaseOffset[bin] + (SLMIndex - SLMBaseOffset[bin])
// GlobalIndex = (GlobalBaseOffset[bin] - SLMBaseOffset[bin]) + SLMIndex
// To save instructions during the final global write, we pre-calculate this constant
// offset "fix" per bin. We overwrite __slm_global_incoming (GlobalBaseOffset) by
// subtracting __slm_group_hist (SLMBaseOffset).
// Later, during the global scatter, threads simply calculate:
// GlobalIndex = __slm_global_incoming[bin] + SLMIndex
//
// Note: Due to the guaranteed wrap-around of unsigned integers in standard C++
// (arithmetic modulo 2^N), this math works even if SLMBaseOffset > GlobalBaseOffset.
```
Your explanation is better than mine :) I switched to it with some small adjustments.
```cpp
static constexpr std::uint32_t __bit_count = sizeof(_KeyT) * 8;
static constexpr std::uint32_t __stage_count =
    oneapi::dpl::__internal::__dpl_ceiling_div(__bit_count, __radix_bits);
static constexpr std::uint32_t __hist_data_per_sub_group = 128;
```
Are there any situations where this hardcoded value should change?
I know that we have multiple kernels, so `_KernelParam::__data_per_work_item` is already dedicated to the onesweep kernel. Should we consider extending `_KernelParams` for sort to include more of these hardcoded parameters?
This was one of the open questions with the ESIMD sort (histogram tuning), if I remember correctly. I suggest we hold onto it for when we consider some of the other KT design aspects (e.g. single work group) and just provide the same interface as ESIMD for now. My only concern with doing it now is exposing too many different parameters when only a few make a big performance difference.
It may be possible to fine-tune the histogram further. However, since the initial algorithmic optimizations, the histogram makes up <15% of execution time, so the benefits of tuning will be small.
```cpp
__match_bins(sycl::sub_group __sub_group, std::uint32_t __bin)
{
    // start with all bits 1
    sycl::ext::oneapi::sub_group_mask __matched_bins = sycl::ext::oneapi::group_ballot(__sub_group);
```
Should we guard this with `_ONEDPL_LIBSYCL_SUB_GROUP_MASK_PRESENT` and provide an easily readable error otherwise?
I went ahead and factored this into `_ONEDPL_ENABLE_SYCL_RADIX_SORT_KT`, since we do not provide an alternative if it is not available. In practice with oneAPI, the forward progress extension being present means sub-group mask is present, as it was added several years prior.
I also decided to add kt_defs.h, as I was originally defining these macros in the utils file.
```cpp
// TODO: This exists in the ESIMD KT and was ported, but are we not limiting max input size to
// 2^30 ~ 1 billion elements? We use 32-bit indexing / histogram which may already be too small,
// but are then reserving the two upper bits for lookback flags.
```
In the short term, we probably just need a documented known limitation here, and perhaps an assert checking against the size of the sequence at the public interface level.
In the long term, we could consider an alternative API / setting to enable larger sized sequences which uses a separated implementation for lookback flags, but we are probably not able to fit that in at this point.
Added this runtime check into `__check_sycl_sort_params`.
Yep, in the long term we need some KT option or API to support larger sizes. There are two options:
- Just provide the option to use 64-bit histograms instead of 32-bit. This would support inputs up to 2^62 (far larger than any device memory).
- Add an option to support up to 2^32 with a 32-bit histogram but with separate status and lookback value flags. This could also be used to reduce the number of atomics, so it could even be considered as a default path (depending on how the atomic cost compares with the extra traffic and larger lookback allocation).
```cpp
// but are then reserving the two upper bits for lookback flags.
constexpr std::uint32_t __global_accumulated = 0x40000000;
constexpr std::uint32_t __hist_updated = 0x80000000;
constexpr std::uint32_t __global_offset_mask = 0x3fffffff;
```
Maybe this is a good way to do it, but it's a little strange to me to define these constants here and then pass them to the helpers as template arguments, rather than having them at the struct level and perhaps labeling them more clearly in the name as flags/masks.
I suppose this way allows them to be defined close to where they are used / originate, but it is a little odd to me.
If you want to keep them here, perhaps add a comment that these are flags shared with the helpers via template arguments.
Making these `static constexpr` is better, in my opinion, and I made this change along with renaming them `*_mask`. Originally, this was one large monolithic function in ESIMD, where defining them in the function made sense. But with separate functions, they should just be class members.
```cpp
_LocOffsetT __group_incoming = __slm_group_hist[__bin];
_LocOffsetT __offset_in_bin =
    (__sub_group_id == 0) ? 0 : __slm_subgroup_hists[(__sub_group_id - 1) * __bin_count + __bin];
_LocOffsetT __offset_across_bins = __group_incoming;
```

Suggested change:

```cpp
_LocOffsetT __offset_across_bins = __slm_group_hist[__bin];
_LocOffsetT __offset_in_bin =
    (__sub_group_id == 0) ? 0 : __slm_subgroup_hists[(__sub_group_id - 1) * __bin_count + __bin];
```

Looks like an unnecessary extra variable.

I'm mostly just finding cosmetic stuff. I still need to do a little more looking at the work-group chained scan and some of the intra-/inter-scan bits of onesweep, but it's looking quite good so far.
This PR adds a SYCL KT based on Onesweep. It adapts the ESIMD implementation with the following changes:
- The `sycl_ext_oneapi_forward_progress` extension is used and iterative decoupled lookback is performed to guarantee hardware safety. This resolves catastrophic errors encountered during BMG stress testing and slightly improves performance for smaller multi-work-group cases due to the removal of the work group atomic id counter.

Other relevant details: