68 commits
a67e649
Initial commit
AhmedAtefAhmedAly Jun 17, 2025
0752985
task 2 docker files
salmasoma Jun 25, 2025
9501d0b
Delete Task 2/train_prognosis.py to move to main branch
salmasoma Jun 25, 2025
b05daf7
re-upload
salmasoma Jun 25, 2025
e03b595
fixes
salmasoma Jun 25, 2025
4bbcdc0
removed GFS
salmasoma Jun 25, 2025
1afb911
fixes
salmasoma Jun 25, 2025
381f8ad
resources
salmasoma Jun 25, 2025
e6a0355
updates attributes
salmasoma Jun 25, 2025
31c6db5
task 2 docker update
AhmedAtefAhmedAly Jun 25, 2025
d596167
fixes
salmasoma Jun 25, 2025
0b199d6
Task 3 baseline docker
AhmedAtefAhmedAly Jun 26, 2025
15499c5
Task 3 baseline docker
AhmedAtefAhmedAly Jun 26, 2025
e592500
Improves and cleaned up inference script for task 2 and updates requi…
salmasoma Jun 26, 2025
053e994
Merge branch 'docker-template' of https://github.com/BioMedIA-MBZUAI/…
salmasoma Jun 26, 2025
33feea6
cleaned up utils.py file in task 2
salmasoma Jun 26, 2025
1341d98
Update .gitignore to include specific image file patterns for test in…
salmasoma Jun 26, 2025
2fd6075
merged files
salmasoma Jun 26, 2025
e192117
fixes for lfs
salmasoma Jun 26, 2025
3f96814
fixes for lfs
salmasoma Jun 26, 2025
e23fd61
updated preprocessing
salmasoma Jun 27, 2025
e14f330
Refactor HECKTOR inference model and update preprocessing logic. Adju…
salmasoma Jun 28, 2025
d76af51
Updated Readme.md
Jun 29, 2025
5685c90
Added Baseline Inference in Readme.md
Jun 29, 2025
2bcd10d
Update Readme.md
Jun 29, 2025
a81d527
Updated Readme.md
Jun 29, 2025
29ab40d
Update Readme.md
Jun 29, 2025
3a5162e
Update Readme
Jun 29, 2025
b1bf0c8
Update Readme.md
Jun 30, 2025
7d2a778
Update Readme.md
Jun 30, 2025
a4c7a0e
Added images
Jun 30, 2025
953376e
updated paths in task 2
salmasoma Jun 30, 2025
c7bef62
Merge branch 'docker-template' of https://github.com/BioMedIA-MBZUAI/…
salmasoma Jun 30, 2025
b7fd94f
updated sample ehr with missing values
salmasoma Jun 30, 2025
934bac8
Updated resmapling and cropping for task 2
salmasoma Jun 30, 2025
b8689fd
Refined .gitignore to exclude all .tar.gz files in Task_2 and correct…
salmasoma Jun 30, 2025
a077e89
Task 3 resample update
AhmedAtefAhmedAly Jun 30, 2025
c114d9b
Task 1 docker update,
AhmedAtefAhmedAly Jul 1, 2025
03f089a
Update Readme.md
Jul 1, 2025
859af85
Merge branch 'docker-template' of https://github.com/BioMedIA-MBZUAI/…
Jul 1, 2025
6c54a75
Update Readme.md
Jul 1, 2025
2923a7d
Update Readme.md
Jul 1, 2025
2a0ee55
Added HECKTOR image
umair1221 Jul 1, 2025
5b51256
Update README.md
umair1221 Jul 1, 2025
8589103
Update README.md
umair1221 Jul 1, 2025
6ce62c0
Update README.md
umair1221 Jul 1, 2025
ed0468a
Update Readme.md
Jul 1, 2025
39e5ee3
Merge branch 'docker-template' of https://github.com/BioMedIA-MBZUAI/…
Jul 1, 2025
fbe9a72
Update Readme.md
Jul 1, 2025
201541f
Update Readme.md
Jul 1, 2025
f2347f8
Update Readme.md
Jul 1, 2025
a557c26
Update Readme.md
Jul 1, 2025
f78bd34
Updated preprocessing and postprocessing
AhmedAtefAhmedAly Jul 2, 2025
92152e5
updated inference task 1
AhmedAtefAhmedAly Jul 2, 2025
87f2623
updated preprocessing
AhmedAtefAhmedAly Jul 2, 2025
409a2ff
Update README.md
shahdhardn Jul 2, 2025
89e4ed8
Update README.md
shahdhardn Jul 2, 2025
905033f
Update README.md
shahdhardn Jul 2, 2025
545d4b2
Add doc folder from main branch
shahdhardn Jul 2, 2025
900b0eb
update docker readme
shahdhardn Jul 2, 2025
558f881
Update submission-guidelines.md
shahdhardn Jul 3, 2025
3437e9c
Update submission-guidelines.md
shahdhardn Jul 3, 2025
417406a
Update submission-guidelines.md
shahdhardn Jul 3, 2025
bfaa705
update docker readme
shahdhardn Jul 3, 2025
9803f39
update docker readme
shahdhardn Jul 3, 2025
011c8c7
update docker readme
shahdhardn Jul 3, 2025
7a95cb0
update docker readme
shahdhardn Jul 3, 2025
503f89d
update docker readme
shahdhardn Jul 3, 2025
1 change: 1 addition & 0 deletions .gitignore
@@ -0,0 +1 @@
Task_2/*.tar.gz
224 changes: 223 additions & 1 deletion README.md
@@ -1 +1,223 @@
# HECKTOR2025 - Challenge

<p align="center">
<img src="/assets/images/HECKTOR-main.jpeg">
</p>

---
<!--
# <img src="assets/logos/countdown.png" width="24" alt="⏳"/> Submission Countdown
Animation TBA

---
-->

# ℹ️ About

This repository contains the submission template and instructions for the [Grand Challenge 2025](https://hecktor25.grand-challenge.org/hecktor25/) docker-based inference task. Follow this guide to install Docker, run the baseline inference, observe challenge restrictions, save your container, and prepare your submission.

---
# 📑 Table of Contents

* 🛠️ [Installation](#-installation)
* 🤖 [Baseline Inference](#-baseline-inference)
* <img src="assets/logos/restrictions.svg" width="24" alt="⚠️"/> [Grand Challenge Restrictions & Submission Tips](#-grand-challenge-restrictions--submission-tips)
* 💾 [Saving and Uploading Containers](#-saving-and-uploading-containers)

---
# 🛠️ Installation

## Windows

1. Download Docker Desktop for Windows: [https://www.docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop)
2. Run the installer and follow on-screen instructions.
3. Ensure Docker is running by opening PowerShell and executing:

```bash
docker --version
```

## macOS

1. Download Docker Desktop for Mac: [https://www.docker.com/products/docker-desktop](https://www.docker.com/products/docker-desktop)
2. Open the `.dmg` file and drag the Docker app to Applications.
3. Launch Docker and verify:

```bash
docker --version
```

## Linux

### Install prerequisites

Since most participants are likely using Linux, we provide detailed steps for setting up Docker.
To create and test your Docker setup, you will need to install [Docker Engine](https://docs.docker.com/engine/install/)
and the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) (if you need GPU computation). Follow the instructions at those links to install the prerequisites on your system.

- **Docker Engine:** To verify that Docker has been installed successfully, run ```docker run hello-world```

- **NVIDIA Container Toolkit:** To verify GPU access inside Docker, run ```docker run --rm --gpus all nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi```

---
# 🤖 Baseline Inference

Below is the structure of the ```template-docker``` branch. We support three separate tasks: **Task1**, **Task2**, and **Task3**. Each task should have its own dedicated model under its folder.

1. **Repository Structure**

```text
├── Task1/
│   ├── resources/            # Place your model files here
│   ├── requirements.txt      # Modify only to add new packages
│   └── ...
├── Task2/
│   ├── resources/
│   └── ...
└── Task3/
    ├── resources/
    ├── Dockerfile.template   # Base Dockerfile for reference
    ├── do_build.sh           # Script to build the container
    ├── do_test_run.sh        # Script to test the container locally
    ├── do_save.sh            # Script to save the container as a tarball
    ├── inference.py          # Entry point: loads models, runs inference
    └── ...
```

2. **Model Files and Packages**

* Place all model weights, configuration files, and auxiliary code inside the `resources/` folder of the corresponding Task directory; this is the only directory where you can place your supporting files.
* You **may** update `requirements.txt` within each Task folder to install any additional Python packages your model needs.
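Since `resources/` is the only place for supporting files, it helps to resolve it relative to the inference script rather than hard-coding an absolute path. A minimal sketch; the helper name is hypothetical and not part of the template:

```python
from pathlib import Path

# Hypothetical helper (not part of the template): resolve resources/
# relative to this script so the path works both locally and inside
# the container, where the Dockerfile copies resources/ next to
# inference.py under /opt/app.
RESOURCES_DIR = Path(__file__).resolve().parent / "resources"

def model_path(filename: str) -> Path:
    """Return the full path of a model artifact, failing early if absent."""
    path = RESOURCES_DIR / filename
    if not path.exists():
        raise FileNotFoundError(f"Model file not found: {path}")
    return path
```

Failing early like this surfaces a missing weight file as a clear error at startup rather than deep inside inference.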

3. **Working Directory**

* All input and output files during inference must be read from or written to the `/tmp/` directory inside the container.
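As an illustrative sketch (not part of the template), Python's `tempfile` module can pin scratch space under `/tmp` explicitly instead of relying on platform defaults; the intermediate filename here is hypothetical:

```python
import tempfile
from pathlib import Path

# Illustrative sketch (not part of the template): keep all scratch files
# under /tmp/, the only writable location inside the container.
with tempfile.TemporaryDirectory(dir="/tmp") as scratch:
    work = Path(scratch) / "resampled.nii.gz"   # hypothetical intermediate file
    work.write_bytes(b"placeholder")            # stand-in for real image data
    assert str(work).startswith("/tmp/")
# the directory and its contents are cleaned up automatically on exit
```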

4. **Build the Container**
* To build your container, run the ```do_build.sh``` script.
```bash
# From the task folder (e.g. Task_1/)
./do_build.sh                 # builds with the default image tag
./do_build.sh my-custom-tag   # optionally pass a custom image tag
```

5. **Local Test Run**
* The ```do_test_run.sh``` script can be used to test the container on your local machine before submitting the finalized version.
```bash
# Runs inference locally mounting /tmp data
./do_test_run.sh
```

6. **Save Container**
* Use the ```do_save.sh``` script to save your Docker container.
```bash
./do_save.sh
```

7. **Performing Inference**

* `inference.py` is the entry point script executed at container runtime. You can implement or call your model-loading and prediction code here.
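A minimal sketch of what `inference.py` might look like; the `predict` helper and the output naming are placeholders, not the template's actual API, and the I/O directories follow the restrictions section:

```python
from pathlib import Path

INPUT_DIR = Path("/input")     # per the Grand Challenge I/O interface
OUTPUT_DIR = Path("/output")

def predict(case: Path) -> str:
    # Placeholder prediction; replace with real model loading and inference.
    return '{"prediction": null}'

def run(input_dir: Path = INPUT_DIR, output_dir: Path = OUTPUT_DIR) -> list:
    """Iterate over input cases and write one result file per case."""
    output_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for case in sorted(input_dir.glob("*")):
        out = output_dir / (case.stem + "_pred.json")
        out.write_text(predict(case))
        written.append(out)
    return written
```

Keeping the directories as parameters makes the same code testable on local folders before it runs against the mounted challenge paths.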


<!-- * Inputs must mount to `/tmp/images`
* Outputs will be written to `/tmp/output` -->

<!-- # Restrictions and Submission Tips

* **No network access**: All downloads must occur before container startup.
* **No GPU**: Inference runs on CPU only.
* **I/O paths**: Read inputs only from `/tmp/images` and write outputs only to `/tmp/output`.
* **Time limit**: Entire inference must finish within **30 minutes**.
* **File writes**: Do not create or modify files outside `/tmp` -->

# <img src="assets/logos/restrictions.svg" width="24" alt="⚠️"/> Grand Challenge Restrictions & Submission Tips

This section guides participants through the [submission tips](https://grand-challenge.org/documentation/making-a-challenge-submission/#submission-tips) documentation, which includes important information such as:

1. **Algorithm Submission**
- You do not need to create a new algorithm for each submission.
- If you update your algorithm, remember to make a new submission to the challenge with it; this does not happen automatically. For more guidelines on how to create a submission on Grand Challenge and upload your algorithm, please follow the instructions [here](submission-guidelines.md).

2. **Offline Execution Only**
Your container **must not** attempt any network access (HTTP, SSH, DNS, etc.). Any outgoing connection will cause automatic disqualification.

3. **Computational & Memory Constraints**
- **GPU**: Your code will run on an NVIDIA T4 Tensor Core GPU with 16 GB of VRAM. Design your model so that it can execute on this GPU.
- **Memory Limit**: Peak RAM usage must stay under **16 GB**.
- **Docker Size**: The container image you upload for your algorithm cannot exceed **10 GB**.

4. **Filesystem Write Permissions**
All writes (models, logs, outputs) **must** go under `/tmp/`. Writing elsewhere on the filesystem will be ignored or blocked.

5. **I/O Interface**
- **Input**: read exclusively from `/input/`
- **Output**: write exclusively to `/output/`
- **No Extra Files**: do not generate caches or logs in other directories.

6. **Time Limit**
Tasks 1 and 3 have a **10-minute** wall-clock limit, while Task 2 has a **15-minute** limit. Any process that runs longer will be force-terminated.
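One way to respect the wall-clock limit is a soft deadline inside the inference loop, so the container can flush partial results before being force-terminated. A sketch; the 60-second safety margin is an assumption, not a challenge rule:

```python
# Sketch: enforce a soft deadline below the hard wall-clock limit.
# The 60-second safety margin is an assumption, not a challenge rule.
HARD_LIMIT_S = 10 * 60      # Tasks 1 and 3; use 15 * 60 for Task 2
SAFETY_MARGIN_S = 60

def within_budget(start: float, now: float,
                  limit: float = HARD_LIMIT_S,
                  margin: float = SAFETY_MARGIN_S) -> bool:
    """True while there is still time to safely process another case."""
    return (now - start) < (limit - margin)

# Typical loop shape:
# start = time.monotonic()
# for case in cases:
#     if not within_budget(start, time.monotonic()):
#         break
#     process(case)
```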

7. **Submission Tips**
- **Local Validation**: always run `./do_test_run.sh` before packing.
- **Save Your Container**: use `./do_save.sh` to generate a `<task>_submission.tar.gz` (max **2 GB**).
- **Naming Convention**: name archives as `submission_task1.tar.gz`, `submission_task2.tar.gz`, etc.
- **Double-Check**: ensure `TaskX/resources/` contains all model artifacts and updated `requirements.txt`.
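The naming convention can also be checked programmatically before upload. A small sketch; the exact pattern is inferred from the examples in this section, so adjust it if the rules differ:

```python
import re

# Pattern inferred from the naming-convention examples in this section.
ARCHIVE_RE = re.compile(r"^submission_task[123]\.tar\.gz$")

def is_valid_archive_name(name: str) -> bool:
    """True if the archive name matches the expected submission pattern."""
    return ARCHIVE_RE.fullmatch(name) is not None
```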

8. **Common Error Messages**
| Error Text | Likely Cause | Fix |
|-------------------------------------|-------------------------------------------------|-------------------------------------|
| `Model file not found` | Missing weights in `TaskX/resources/` | Add your `.pth`/`.onnx` files |
| `ModuleNotFoundError: …` | Dependency not declared | Update `requirements.txt` & rebuild |
| `Permission denied: '/some/path'` | Writing outside `/tmp/` | Redirect writes to `/tmp/` |
| `Killed` or `OOM` | Exceeded memory limit | Reduce batch size or model footprint|
| `Timeout` | Exceeded runtime limit | Optimize preprocessing/inference |
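For the `Killed`/OOM row, a common mitigation is to stream cases one at a time instead of materializing the whole dataset in RAM, so peak memory stays roughly at a single case's footprint. An illustrative sketch; the file pattern is an assumption:

```python
from pathlib import Path
from typing import Iterator

def iter_cases(input_dir: Path, pattern: str = "*.nii.gz") -> Iterator[Path]:
    """Yield input cases one at a time so only one is in memory at once."""
    for case in sorted(input_dir.glob(pattern)):
        yield case  # load, predict, write, then release before the next case
```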

---


# 💾 Saving and Uploading Containers

<!-- 1. **Commit the running container** (after testing):

```bash
CONTAINER_ID=$(docker ps -lq)
docker commit $CONTAINER_ID submission-image
``` -->

1. **Save to tarball**:

```bash
./do_save.sh
```

2. **Upload to Sanity Check**:

The HECKTOR challenge comprises three tasks (`Task 1 - Detection and Segmentation`, `Task 2 - Prognosis`, and `Task 3 - Classification`), and for each task, submission proceeds through three phases:

- **Sanity Check Phase:** Consists of 3 images to ensure participants are familiar with the Grand Challenge platform and that their dockers run without errors. All teams must make their submission to this phase and will receive feedback on any errors.
- **Validation Phase:** Consists of approximately 50 images. All teams will submit up to 2 working dockers from the sanity check to this phase. Only the top 15 teams with valid submissions, as ranked by the evaluation metrics displayed on the public validation leaderboard, will proceed to Phase 3.
- **Testing Phase:** Consists of approximately 400 images. The teams will choose 1 of their 2 dockers from the validation phase to be submitted to the testing phase. The official ranking of the teams will be based solely on the testing phase results.


> **NOTE:** Participants will not receive detailed feedback during the testing phase, except for error notifications.

We have also listed some of the requirements for a valid submission on the [Submission webpage](https://hecktor25.grand-challenge.org/submission-instructions/). To start your submission for any task and phase, log in to [Grand Challenge](https://hecktor25.grand-challenge.org/) and click the link [here](https://hecktor25.grand-challenge.org/evaluation/sanity-check-task-1/submissions/create/). Before proceeding with the submission, please make sure to follow the ["Submission Guidelines"](https://github.com/BioMedIA-MBZUAI/HECKTOR2025/blob/main/doc/submission-guidelines.md), which include visual examples:

<p align="center">
<img src="/assets/images/Submission-Tab-1.png">
</p>

At the top of the page, you can select the phase and task you are submitting your method to. Assuming we want to test Task 1 in the **Sanity Check Phase**, we select the "**Sanity Check - Task 1**" tab and choose the uploaded algorithm from the drop-down list, as shown below:

<p align="center">
<img src="/assets/images/Submission-Tab-2.png">
</p>

Finally, clicking the "**Save**" button submits your algorithm for evaluation on the challenge task. The process is the same for all tasks and phases.

* Log in to the challenge portal.
* Navigate to **My Submissions** → **Upload Container**.
* Select your saved container archive (the `.tar.gz` produced by `do_save.sh`) and submit.

---
# 🎉 Good luck with your submission!
If you need support, post questions on the [Challenge Forum](https://grand-challenge.org/forums/forum/head-and-neck-tumor-lesion-segmentation-diagnosis-and-prognosis-767/) or email `[email protected]`.
1 change: 1 addition & 0 deletions Task_1/.gitattributes
@@ -0,0 +1 @@
resources/checkpoints/* filter=lfs diff=lfs merge=lfs -text
3 changes: 3 additions & 0 deletions Task_1/.gitignore
@@ -0,0 +1,3 @@
resources/checkpoints/*
!resources/checkpoints/.gitkeep

27 changes: 27 additions & 0 deletions Task_1/Dockerfile
@@ -0,0 +1,27 @@
# Use a 'large' base container to show-case how to load pytorch (macOS)
# FROM --platform=linux/arm64 pytorch/pytorch AS example-task2-arm64

# Use a 'large' base container to show-case how to load pytorch and use the GPU (when enabled) (Linux and WSL)
FROM --platform=linux/amd64 pytorch/pytorch:2.6.0-cuda12.4-cudnn9-runtime AS example-task3-amd64

# Ensures that Python output to stdout/stderr is not buffered: prevents missing information when terminating
ENV PYTHONUNBUFFERED=1

RUN groupadd -r user && useradd -m --no-log-init -r -g user user
USER user

WORKDIR /opt/app

COPY --chown=user:user requirements.txt /opt/app/
COPY --chown=user:user resources /opt/app/resources

# You can add any Python dependencies to requirements.txt
RUN python -m pip install \
--user \
--no-cache-dir \
--no-color \
--requirement /opt/app/requirements.txt

COPY --chown=user:user inference.py /opt/app/

ENTRYPOINT ["python", "inference.py"]
18 changes: 18 additions & 0 deletions Task_1/do_build.sh
@@ -0,0 +1,18 @@
#!/usr/bin/env bash

# Stop at first error
set -e

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
DOCKER_IMAGE_TAG="example-algorithm-sanity-check-task-1"


# Check if an argument is provided
if [ "$#" -eq 1 ]; then
DOCKER_IMAGE_TAG="$1"
fi

# Note: the build-arg is JUST for the workshop
docker build "$SCRIPT_DIR" \
--platform=linux/amd64 \
--tag "$DOCKER_IMAGE_TAG" 2>&1
37 changes: 37 additions & 0 deletions Task_1/do_save.sh
@@ -0,0 +1,37 @@
#!/usr/bin/env bash

# Stop at first error
set -e

SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )

# Set default container name
DOCKER_IMAGE_TAG="example-algorithm-sanity-check-task-1"

# Check if an argument is provided
if [ "$#" -eq 1 ]; then
DOCKER_IMAGE_TAG="$1"
fi

echo "=+= (Re)build the container"
source "${SCRIPT_DIR}/do_build.sh" "$DOCKER_IMAGE_TAG"

# Get the build information from the Docker image tag
build_timestamp=$( docker inspect --format='{{ .Created }}' "$DOCKER_IMAGE_TAG")

if [ -z "$build_timestamp" ]; then
echo "Error: Failed to retrieve build information for container $DOCKER_IMAGE_TAG"
exit 1
fi

# Format the build information to remove special characters
formatted_build_info=$(echo $build_timestamp | sed -E 's/(.*)T(.*)\..*Z/\1_\2/' | sed 's/[-,:]/-/g')

# Set the output filename with timestamp and build information
output_filename="${SCRIPT_DIR}/${DOCKER_IMAGE_TAG}_${formatted_build_info}.tar.gz"

# Save the Docker container and gzip it
echo "Saving the container as ${output_filename}. This can take a while."
docker save "$DOCKER_IMAGE_TAG" | gzip -c > "$output_filename"

echo "Container saved as ${output_filename}"
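For reference, the two `sed` calls in `do_save.sh` turn Docker's ISO build timestamp into a filename-safe string. A Python mirror of that logic as we read it, for illustration only (not part of the template):

```python
import re

def format_build_timestamp(ts: str) -> str:
    """Mirror the sed pipeline in do_save.sh: drop fractional seconds and
    the trailing 'Z', join date and time with '_', then map ':' and ','
    to '-'. Illustration only, not part of the template."""
    ts = re.sub(r"(.*)T(.*)\..*Z", r"\1_\2", ts)
    return re.sub(r"[-,:]", "-", ts)
```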