[Modular] Fast Tests #11937
Changes from all commits: b165cf3, 0fa5812, 0a5c90e, d92855d, 4df2739, d8fa2de, 4b7a9e9, 5f560d0, 0998bd7, a2a9e4e, 625cc8e, 80702d2, 54e17f3, 39be374, 3aabef5, a176cfd, 1f0570d, 6f912ab, 798e0ca
New workflow file (141 lines added):

```yaml
name: Fast PR tests for Modular

on:
  pull_request:
    branches: [main]
    paths:
      - "src/diffusers/modular_pipelines/**.py"
      - "src/diffusers/models/modeling_utils.py"
      - "src/diffusers/models/model_loading_utils.py"
      - "src/diffusers/pipelines/pipeline_utils.py"
      - "src/diffusers/pipeline_loading_utils.py"
      - "src/diffusers/loaders/lora_base.py"
      - "src/diffusers/loaders/lora_pipeline.py"
      - "src/diffusers/loaders/peft.py"
      - "tests/modular_pipelines/**.py"
      - ".github/**.yml"
      - "utils/**.py"
      - "setup.py"
  push:
    branches:
      - ci-*

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

env:
  DIFFUSERS_IS_CI: yes
  HF_HUB_ENABLE_HF_TRANSFER: 1
  OMP_NUM_THREADS: 4
  MKL_NUM_THREADS: 4
  PYTEST_TIMEOUT: 60

jobs:
  check_code_quality:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[quality]
      - name: Check quality
        run: make quality
      - name: Check if failure
        if: ${{ failure() }}
        run: |
          echo "Quality check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make style && make quality'" >> $GITHUB_STEP_SUMMARY

  check_repository_consistency:
    needs: check_code_quality
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[quality]
      - name: Check repo consistency
        run: |
          python utils/check_copies.py
          python utils/check_dummies.py
          python utils/check_support_list.py
          make deps_table_check_updated
      - name: Check if failure
        if: ${{ failure() }}
        run: |
          echo "Repo consistency check failed. Please ensure the right dependency versions are installed with 'pip install -e .[quality]' and run 'make fix-copies'" >> $GITHUB_STEP_SUMMARY

  run_fast_tests:
    needs: [check_code_quality, check_repository_consistency]
    strategy:
      fail-fast: false
      matrix:
        config:
          - name: Fast PyTorch Modular Pipeline CPU tests
            framework: pytorch_pipelines
            runner: aws-highmemory-32-plus
            image: diffusers/diffusers-pytorch-cpu
            report: torch_cpu_modular_pipelines
```
Review comment (on lines +82 to +88): Don't need a matrix here, I believe?
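If the single-entry matrix were dropped, the job could inline that one configuration directly. A minimal sketch of the reviewer's suggestion, not part of this PR; the values simply mirror the matrix entry above:

```yaml
  run_fast_tests:
    needs: [check_code_quality, check_repository_consistency]
    name: Fast PyTorch Modular Pipeline CPU tests
    runs-on:
      group: aws-highmemory-32-plus
    container:
      image: diffusers/diffusers-pytorch-cpu
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
    # Steps stay the same; the report name would be written out literally, e.g.
    #   --make-reports=tests_torch_cpu_modular_pipelines
```

The workflow as added in the PR continues below.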
```yaml
    name: ${{ matrix.config.name }}

    runs-on:
      group: ${{ matrix.config.runner }}

    container:
      image: ${{ matrix.config.image }}
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/

    defaults:
      run:
        shell: bash

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps

      - name: Environment
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python utils/print_env.py

      - name: Run fast PyTorch Pipeline CPU tests
        if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m pytest -n 8 --max-worker-restart=0 --dist=loadfile \
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/modular_pipelines
```

Review comment (on the pytest invocation): We can potentially increase the number of workers here.
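A hedged sketch of that suggestion, assuming the `aws-highmemory-32-plus` runner group exposes enough CPUs: pytest-xdist's `-n auto` sizes the worker pool to the detected cores, or a larger fixed count could be chosen. This is an illustration, not part of the PR:

```yaml
      - name: Run fast PyTorch Pipeline CPU tests
        if: ${{ matrix.config.framework == 'pytorch_pipelines' }}
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          # "-n auto" lets pytest-xdist spawn one worker per available CPU;
          # a fixed value such as "-n 16" is an alternative if auto proves flaky.
          python -m pytest -n auto --max-worker-restart=0 --dist=loadfile \
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_${{ matrix.config.report }} \
            tests/modular_pipelines
```

The remaining steps added in the PR follow.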
```yaml
      - name: Failure short reports
        if: ${{ failure() }}
        run: cat reports/tests_${{ matrix.config.report }}_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: pr_${{ matrix.config.framework }}_${{ matrix.config.report }}_test_reports
          path: reports
```
Second changed file (StableDiffusionXL modular pipeline code, `prepare_latents_inpaint`):

```diff
@@ -744,8 +744,6 @@ def prepare_latents_inpaint(
         timestep=None,
         is_strength_max=True,
         add_noise=True,
-        return_noise=False,
-        return_image_latents=False,
     ):
         shape = (
             batch_size,
```

Review comment (on lines -747 to -748): Any reason behind this change?
```diff
@@ -768,7 +766,7 @@ def prepare_latents_inpaint(
         if image.shape[1] == 4:
             image_latents = image.to(device=device, dtype=dtype)
             image_latents = image_latents.repeat(batch_size // image_latents.shape[0], 1, 1, 1)
-        elif return_image_latents or (latents is None and not is_strength_max):
+        elif latents is None and not is_strength_max:
             image = image.to(device=device, dtype=dtype)
             image_latents = self._encode_vae_image(components, image=image, generator=generator)
             image_latents = image_latents.repeat(batch_size // image_latents.shape[0], 1, 1, 1)
@@ -786,13 +784,7 @@ def prepare_latents_inpaint(
             noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
             latents = image_latents.to(device)

-        outputs = (latents,)
-
-        if return_noise:
-            outputs += (noise,)
-
-        if return_image_latents:
-            outputs += (image_latents,)
+        outputs = (latents, noise, image_latents)

         return outputs

@@ -864,7 +856,7 @@ def __call__(self, components: StableDiffusionXLModularPipeline, state: Pipeline
         block_state.height = block_state.image_latents.shape[-2] * components.vae_scale_factor
         block_state.width = block_state.image_latents.shape[-1] * components.vae_scale_factor

-        block_state.latents, block_state.noise = self.prepare_latents_inpaint(
+        block_state.latents, block_state.noise, block_state.image_latents = self.prepare_latents_inpaint(
             components,
             block_state.batch_size * block_state.num_images_per_prompt,
             components.num_channels_latents,
@@ -878,8 +870,6 @@ def __call__(self, components: StableDiffusionXLModularPipeline, state: Pipeline
             timestep=block_state.latent_timestep,
             is_strength_max=block_state.is_strength_max,
             add_noise=block_state.add_noise,
-            return_noise=True,
-            return_image_latents=False,
         )

         # 7. Prepare mask latent variables
```
Review comment: Nice to see filtered invocations. Do we know if changing anything in the respective modeling or pipeline implementations (SDXL, for instance) would impact modular? If so, should we consider that somehow?
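If the modular blocks do depend on the regular pipeline or model implementations, one way to account for that would be to widen the workflow's `paths:` filter. The extra globs below are illustrative assumptions, not part of this PR:

```yaml
    paths:
      - "src/diffusers/modular_pipelines/**.py"
      # Hypothetical additions so that pipeline/model edits also trigger the modular suite:
      - "src/diffusers/pipelines/**.py"
      - "src/diffusers/models/**.py"
```

The trade-off is that the modular test suite would then run on a much larger share of PRs.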