chore(docker): reduce size between docker builds #7571
This reduces the size delta between docker builds by adding a layer with all the PyTorch dependencies, which don't change most of the time.
## Summary

Every time the `main` docker images rebuild and I pull `main-cuda`, it downloads another 3+ GB, which seems like about a zillion times too much since most things don't change from one commit on `main` to the next.

This is an attempt to follow the guidance in "Using uv in Docker: Intermediate Layers" so there's one layer that installs all the dependencies, including PyTorch with its bundled NVIDIA libraries, before the project's own frequently changing files are copied into the image.
## Related Issues / Discussions

- `uv pip install` torch, but not `uv sync` it

## QA Instructions
Hopefully the CI system building the docker images is sufficient. But there is one change to `pyproject.toml` related to xformers, so it'd be worth checking that `python -m xformers.info` still reports triton as available on the platforms that expect it.
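If eyeballing the `python -m xformers.info` output is tedious, the same check can be sketched programmatically. The helper below is hypothetical, not part of the PR; it only tests importability, which is a weaker check than xformers' own diagnostics:

```python
import importlib.util


def module_available(name: str) -> bool:
    # True if the module could be imported, without actually importing it.
    return importlib.util.find_spec(name) is not None


# On CUDA platforms we expect both packages to be present.
for pkg in ("xformers", "triton"):
    print(pkg, module_available(pkg))
```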
## Merge Plan

I don't expect this to be a disruptive merge, though I did take the liberty of moving `/opt/venv` to the uv-default `/opt/invokeai/.venv`, which someone might notice.
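For anything that hard-codes the old venv path, the relevant change amounts to something like the following. These lines are illustrative, assuming the environment is activated via `PATH` as is typical in docker images:

```dockerfile
# The venv now lives at uv's default project location instead of /opt/venv.
ENV VIRTUAL_ENV=/opt/invokeai/.venv
ENV PATH="/opt/invokeai/.venv/bin:$PATH"
```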
## Checklist

- [ ] *What's New* copy (if doing a release after this PR)