This repository has been archived by the owner on Mar 22, 2021. It is now read-only.
A couple of testers (esp. the ostree ones) are hitting ENOSPC sporadically. I think this is probably due to each container (e.g. for ostree, that's 9 containers) creating its own buildroot:
```
$ curl -LO https://s3.amazonaws.com/aos-ci/ghprb/ostreedev/ostree/93457071cb5d47c08b60d3244f9632725634010a.0.1508253876562036997/output.log
$ grep 'Total download size' output.log
Total download size: 21 M
Total download size: 27 M
Total download size: 192 k
Total download size: 58 M
Total download size: 51 M
Total download size: 488 k
Total download size: 2.7 M
Total download size: 94 M
$ grep 'Installed size' output.log
Installed size: 94 M
Installed size: 454 k
Installed size: 162 M
Installed size: 197 M
Installed size: 1.1 M
Installed size: 9.4 M
Installed size: 335 M
```
So that's ~260M of downloaded RPMs + ~800M installed. For 9 containers, let's estimate at least 9G total (though, to be fair, the RPM files themselves should be deleted after installation).
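The ~260M figure can be double-checked by summing the `Total download size` lines from the log. A minimal sketch (the sizes are pasted inline rather than re-fetched from S3; the `k`/`M` normalization assumes binary units):

```shell
# Sum the per-buildroot download sizes quoted above, converting k -> M.
total=$(awk '{ n = $1; if ($2 == "k") n /= 1024; sum += n }
             END { printf "%.0f", sum }' <<'EOF'
21 M
27 M
192 k
58 M
51 M
488 k
2.7 M
94 M
EOF
)
echo "download total: ~${total} M"
```

This comes out to ~254M, which rounds to the ~260M cited above.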
That's for one run. If we're then testing different ostree/rpm-ostree PRs/branches in parallel, it's easy to see how we hit ENOSPC (our root disk is currently limited to 40G). There are definitely improvements to be made here, and this ties into both #53 and #10, so we can potentially unify RPM downloads and share buildroots. Though on principle, we shouldn't be limited by disk space in the first place. (Related to this is the fact that we're avoiding Ceph volumes because concurrent write performance at these scales is poor.)
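As a back-of-envelope check on how quickly parallel runs exhaust the disk, here is the arithmetic above in one place (this pessimistically treats each of the 9 containers as needing the full ~260M download + ~800M install, matching the "at least 9G" estimate; these are rough figures, not measurements):

```shell
# Rough capacity math: per-buildroot footprint, per-run footprint,
# and how many concurrent runs fit on the 40G root disk.
per_container_mb=$(( 260 + 800 ))              # ~1G per container buildroot
per_run_gb=$(( per_container_mb * 9 / 1024 ))  # 9 containers per ostree run
max_parallel=$(( 40 / per_run_gb ))            # runs before ENOSPC territory
echo "per run: ~${per_run_gb}G, parallel runs that fit in 40G: ~${max_parallel}"
```

So roughly four concurrent runs already fill the disk, before counting the base OS, images, or any other tenants of the volume.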