
Implement P2300 get_scheduler bridge for parallel_executor #7239

Open
shivansh023023 wants to merge 1 commit into TheHPXProject:master from shivansh023023:feat/parallel-executor-p2300

Conversation

@shivansh023023
Contributor

PR Description: Implement P2300 get_scheduler Bridge for parallel_executor

Fixes

This PR continues the effort to modernize HPX executors by providing P2300 (std::execution) support for parallel execution contexts.

Proposed Changes

  • Enabled get_scheduler for parallel_executor: Added tag_invoke overloads for hpx::execution::experimental::get_scheduler in parallel_executor.hpp, covering both the flat and hierarchical parallel policy executors (a minimal sketch of the overload shape follows this list).
  • Header Synchronization: Integrated hpx/executors/executor_scheduler.hpp into the parallel executor module to facilitate the bridge to P2300 schedulers.
  • Added Parallel Unit Test: Created libs/core/executors/tests/unit/parallel_executor_scheduler.cpp to verify that work dispatched via schedule(get_scheduler(parallel_exec)) correctly utilizes the HPX worker thread pool.
  • CI & Standards Compliance:
    • Verified alphabetical include ordering for clang-format.
    • Used hpx::local::init to ensure compliance with the inspect tool for core modules.
    • Integrated the test into the standard CMake test loop.
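A minimal sketch of what the new overload might look like (illustrative only; executor_scheduler<Executor> is the adapter named in this PR's commit message, and the exact signatures in the merged code may differ):

```cpp
// Hypothetical sketch, not the merged code: bridge the parallel executor
// to P2300 by letting get_scheduler(exec) return a scheduler adapter
// that wraps the executor.
namespace hpx::execution::experimental {

    template <typename Policy>
    auto tag_invoke(get_scheduler_t,
        hpx::execution::parallel_policy_executor<Policy> const& exec)
    {
        // executor_scheduler<Executor> is the adapter introduced by this
        // PR; it exposes the executor as a P2300 scheduler.
        return executor_scheduler<
            hpx::execution::parallel_policy_executor<Policy>>{exec};
    }
}    // namespace hpx::execution::experimental
```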

This is the second installment of the GSoC 2026 project: "Modernizing HPX Executors for P2300 Compatibility."

While the previous PR handled sequential execution, this PR addresses the multi-threaded use case. By enabling get_scheduler on the parallel_executor, we allow developers to compose complex P2300 pipelines that automatically scale across all available cores using the underlying HPX thread pool. This is a foundational step for enabling P2300 algorithms (like bulk) to run on HPX distributed systems in the future.
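For illustration, a pipeline of the kind described above might look like this (a minimal sketch assuming the get_scheduler bridge from this PR; the include paths are approximate):

```cpp
// Illustrative usage sketch: build a small P2300 pipeline that runs on
// the HPX worker thread pool via get_scheduler(parallel_executor).
#include <hpx/execution.hpp>
#include <hpx/local/init.hpp>

#include <utility>

namespace ex = hpx::execution::experimental;

int hpx_main(int, char**)
{
    hpx::execution::parallel_executor exec;

    // Obtain a P2300 scheduler from the executor (the bridge added here).
    auto sched = ex::get_scheduler(exec);

    // schedule() starts on an HPX worker thread; then() continues there.
    auto snd = ex::schedule(sched) | ex::then([] { return 42; });
    auto [result] = *ex::sync_wait(std::move(snd));
    (void) result;    // expected: 42

    return hpx::local::finalize();
}

int main(int argc, char* argv[])
{
    return hpx::local::init(hpx_main, argc, argv);
}
```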

Checklist

  • I have added a new feature and have added tests to go along with it.
  • I have fixed a bug and have added a regression test.
  • I have added a test using random numbers; I have made sure it uses a seed, and that random numbers generated are valid inputs for the tests.

@shivansh023023 requested a review from hkaiser as a code owner on May 2, 2026 09:22
@codacy-production

codacy-production Bot commented May 2, 2026

Up to standards ✅

🟢 Issues: 0 new issues

View in Codacy

TIP: This summary will be updated as you push new changes.

@StellarBot
Collaborator

Can one of the admins verify this patch?

@shivansh023023 force-pushed the feat/parallel-executor-p2300 branch 2 times, most recently from 30e3cad to 893a621 on May 2, 2026 09:34
- Add executor_scheduler<Executor> adapter (header + module registration)
- Add get_scheduler_t tag_invoke to parallel_policy_executor (both flat and hierarchical)
- Add get_scheduler_t tag_invoke to sequenced_executor
- Add unit tests for both executor_scheduler and parallel_executor_scheduler
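The property these tests presumably verify can be pictured as follows (a sketch under the assumption that the test checks for execution on an HPX worker thread; this is not the actual test file, and the includes are approximate):

```cpp
// Representative check, not the actual test: work scheduled through the
// bridge should land on an HPX worker thread, where get_self_ptr()
// returns a non-null pointer.
#include <hpx/execution.hpp>
#include <hpx/modules/testing.hpp>
#include <hpx/modules/threading_base.hpp>

namespace ex = hpx::execution::experimental;

void test_runs_on_hpx_worker_thread()
{
    hpx::execution::parallel_executor exec;
    bool on_hpx_thread = false;

    ex::sync_wait(ex::schedule(ex::get_scheduler(exec)) | ex::then([&] {
        on_hpx_thread = (hpx::threads::get_self_ptr() != nullptr);
    }));

    HPX_TEST(on_hpx_thread);
}
```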
@shivansh023023 force-pushed the feat/parallel-executor-p2300 branch from 893a621 to 1b061a5 on May 2, 2026 10:10
@hkaiser added labels on May 2, 2026: type: enhancement, type: compatibility issue, category: senders/receivers (Implementations of the p0443r14 / p2300 + p1897 proposals)
@hkaiser
Contributor

hkaiser commented May 3, 2026

@shivansh023023 Is this a duplicate of #7238?
