
Add dask-cuda LocalCUDACluster support and VesselFM benchmark rule #128

Draft
Copilot wants to merge 14 commits into main from copilot/add-dask-cuda-support

Conversation


Copilot AI commented Mar 10, 2026

VesselFM uses GPUs but was limited to the threaded Dask scheduler. This adds dask-cuda integration via a new "cuda" scheduler option and introduces benchmark rules to profile VesselFM performance across different chunk sizes and thread counts.

Changes

dask_setup.py

  • New "cuda" scheduler branch using dask_cuda.LocalCUDACluster(threads_per_worker=threads_per_worker) with a dask.distributed.Client
```python
with get_dask_client("cuda", snakemake.threads):
    # workers are now GPU-backed via LocalCUDACluster
    ...
```
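The new branch could look roughly like the sketch below. Only `get_dask_client`, the `"cuda"`/`"threads"` scheduler names, `LocalCUDACluster(threads_per_worker=...)`, and `dask.distributed.Client` come from this PR; the context-manager shape, the cleanup logic, and the fallback branch are assumptions:

```python
from contextlib import contextmanager


@contextmanager
def get_dask_client(scheduler, threads_per_worker):
    """Yield a Dask execution context for the requested scheduler.

    Sketch only: the real dask_setup.py may differ in structure and cleanup.
    """
    if scheduler == "cuda":
        # Imports are deferred so dask-cuda is only required when
        # the cuda scheduler is actually selected.
        from dask.distributed import Client
        from dask_cuda import LocalCUDACluster

        # LocalCUDACluster starts one worker per visible GPU
        cluster = LocalCUDACluster(threads_per_worker=threads_per_worker)
        client = Client(cluster)
        try:
            yield client
        finally:
            client.close()
            cluster.close()
    elif scheduler == "threads":
        import dask

        # Threaded scheduler: no cluster/client, just a config context
        with dask.config.set(scheduler="threads", num_workers=threads_per_worker):
            yield None
    else:
        raise ValueError(f"unsupported dask scheduler: {scheduler!r}")
```

Deferring the `dask_cuda` import keeps the threaded path usable on machines where dask-cuda is not installed.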

vesselfm.py

  • Uses config["dask_scheduler"] instead of hardcoded "threads", enabling GPU-accelerated inference
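The lookup might be as simple as the following sketch; the `dask_scheduler` key and the `cuda` choice come from this PR, while the default and the validation set are assumptions:

```python
def pick_scheduler(config):
    """Return the Dask scheduler named in the Snakemake config.

    Sketch: "cuda" is the choice added by this PR; the other allowed
    values and the "threads" default are assumptions.
    """
    allowed = {"threads", "processes", "single-threaded", "cuda"}
    scheduler = config.get("dask_scheduler", "threads")
    if scheduler not in allowed:
        raise ValueError(
            f"dask_scheduler must be one of {sorted(allowed)}, got {scheduler!r}"
        )
    return scheduler
```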

snakebids.yml

  • --dask_scheduler now accepts cuda as a valid choice
  • New vesselfm_benchmark config block with default chunk_sizes and thread_counts for benchmark sweeps
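A hypothetical shape of the new config block; only the block name and the `chunk_sizes` / `thread_counts` keys come from the PR description, and the default values shown are placeholders:

```yaml
# snakebids.yml (sketch; values are illustrative, not the PR's defaults)
vesselfm_benchmark:
  chunk_sizes:
    - 64
    - 128
    - 256
  thread_counts:
    - 1
    - 4
    - 8
```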

vessels.smk

  • run_vesselfm gains a benchmark: directive for automatic timing
  • New benchmark_run_vesselfm rule: parameterizes chunk_size and thread_count as wildcards, runs each combination 3× via repeat()
  • New all_benchmark_vesselfm aggregate target expanding over all configured chunk/thread combinations
```shell
# Run benchmark sweep across all chunk_size × thread_count combinations
snakemake all_benchmark_vesselfm --cores all
```
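The two new rules might be wired up roughly as below, using Snakemake's `repeat()` helper to record three timings per combination. The rule names, wildcard names, `repeat(..., 3)`, and the expansion over configured chunk/thread values come from this PR; all file paths and the input/script details are assumptions:

```snakemake
# Sketch of vessels.smk additions (paths and wildcards are illustrative)
rule benchmark_run_vesselfm:
    input:
        "results/preproc/{subject}.nii.gz",  # assumed input path
    output:
        "results/bench/{subject}_chunk-{chunk_size}_threads-{thread_count}.nii.gz",
    threads:
        lambda wildcards: int(wildcards.thread_count)
    # repeat() reruns the rule 3x and appends each timing to the tsv
    benchmark:
        repeat(
            "benchmarks/vesselfm_{subject}_chunk-{chunk_size}_threads-{thread_count}.tsv",
            3,
        )
    script:
        "../scripts/vesselfm.py"


rule all_benchmark_vesselfm:
    input:
        expand(
            "benchmarks/vesselfm_{subject}_chunk-{chunk_size}_threads-{thread_count}.tsv",
            subject=config["subjects"],  # assumed key
            chunk_size=config["vesselfm_benchmark"]["chunk_sizes"],
            thread_count=config["vesselfm_benchmark"]["thread_counts"],
        )
```

Parameterizing `chunk_size` and `thread_count` as wildcards lets `expand()` enumerate the full sweep, so the single aggregate target drives every combination.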

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • pixi.sh
    • Triggering command: /usr/bin/curl curl -fsSL REDACTED (dns block)


Co-authored-by: akhanf <11492701+akhanf@users.noreply.github.com>
Copilot AI changed the title from [WIP] Add dask-cuda support for GPU cluster setup to Add dask-cuda LocalCUDACluster support and VesselFM benchmark rule on Mar 10, 2026
Base automatically changed from perf-updates to main March 14, 2026 20:21