We could really use a simple mechanism to apply hot-fixes inside the running containers without a full restart. That would be especially useful when we discover smaller bugs or security issues and want to address them swiftly without interrupting ongoing user sessions and access. One way of consistently supporting such hot-fix delivery would be to:

* add a bind-mounted hot-fix volume hosting hot-fix scripts and patches folders
* expose the hot-fix volume e.g. in a hot-fixes folder in all containers
* add an apply-hot-fixes script in the root image to traverse said hot-fix subfolders, applying patches and running scripts in turn (see the sketch after this section)
* adjust the docker entry script to first run apply-hot-fixes, then proceed with the usual launch

That would effectively make the containers run on the hot-fixed system image on _every_ restart and also allow re-running any later added hot-fixes by (batch) executing apply-hot-fixes inside the containers. The latter might require script argument support or some simple markers to keep track of already applied scripts/patches. It would also allow central patch management through Ansible or whatever deployment system is used to prepare, build and run the docker-migrid containers. As a bonus that approach would allow hot-fixing *any* file inside the containers rather than just the `mig` and `httpd` contents we currently expose as volumes, with nasty side-effects as explained e.g. in https://github.com/ucphhpc/docker-migrid/issues/41 .

On a related note it is important to emphasize that the proposed hot-fix framework is *not* intended to replace or interfere with the long-term plans to restructure and optimize the container stack. Rather, it should complement them while still striving to make the stack more generic and easy to redeploy with only the few site-specific bits applied on top.
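A minimal sketch of such an apply-hot-fixes script, assuming a bind-mounted `/hot-fixes` volume with `patches` and `scripts` subfolders and simple `.applied` marker files to track already run hot-fixes (all paths and names are hypothetical):

```sh
#!/bin/sh
# apply-hot-fixes: minimal sketch of the proposed hot-fix runner.
# Assumes a bind-mounted /hot-fixes volume with patches and scripts
# subfolders; patches are expected to use paths relative to / (e.g.
# git-style a/ and b/ prefixes). All paths here are hypothetical.

HOTFIX_DIR="${HOTFIX_DIR:-/hot-fixes}"
# Marker folder used to skip hot-fixes that already ran successfully
MARKER_DIR="${MARKER_DIR:-/var/lib/hot-fixes}"
mkdir -p "$MARKER_DIR"

# Apply all pending patches against the container file system
for fix in "$HOTFIX_DIR"/patches/*.patch; do
    [ -e "$fix" ] || continue
    marker="$MARKER_DIR/$(basename "$fix").applied"
    [ -e "$marker" ] && continue
    patch -p1 -d / < "$fix" && touch "$marker"
done

# Run all pending hot-fix scripts in turn
for fix in "$HOTFIX_DIR"/scripts/*.sh; do
    [ -e "$fix" ] || continue
    marker="$MARKER_DIR/$(basename "$fix").applied"
    [ -e "$marker" ] && continue
    sh "$fix" && touch "$marker"
done
```

The docker entry script would then simply invoke apply-hot-fixes before launching the usual services, and operators could re-run it in already running containers with e.g. `docker exec`.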
No due date • 0/1 issues closed

The `production` container deployment already splits the `migrid` stack into a number of containers, each handling one or more services. If possible it should be further compartmentalized for maximum granularity, while of course not breaking fundamental internal service and state dependencies. This will require some adjustments and maybe even redesign of internal migrid state and communication, which is mostly file system and named-pipe based.

Additional preparations are needed for completely distributed deployment of the containers onto separate hosts/VMs. Namely, it must be clarified exactly which shared state and communication take place and how one can support the same when containers don't necessarily run on the same physical host or VM. This includes documenting and perhaps supplying an outline of such a distributed setup, defining which file systems or folders must be shared or synchronized between containers. Perhaps this can in time be extended to a proper swarm-based deployment.

The mig_system_run folder is one example of a folder that needs to be shared between containers for the internal account status and expiry to remain consistent across the stack. Rate limits and authentication notifications should also be considered in that context. A sketch of such a shared mount follows below.
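As an illustration, a shared named volume could back mig_system_run for all service containers on a single host; container and image names below are hypothetical, and a truly distributed setup would instead need e.g. NFS backing the same path on every host:

```sh
# Minimal sketch of sharing the mig_system_run state folder between
# two migrid service containers via a named volume, so account status
# and expiry markers stay consistent across the stack. Container and
# image names are hypothetical, and the mount point just assumes the
# usual state dir layout inside the containers.

docker volume create mig_system_run

docker run -d --name migrid-web \
    -v mig_system_run:/home/mig/state/mig_system_run \
    ucphhpc/migrid

docker run -d --name migrid-sftp \
    -v mig_system_run:/home/mig/state/mig_system_run \
    ucphhpc/migrid
```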
No due date • 2/4 issues closed

We would prefer to move away from the slightly cumbersome and potentially less secure Docker setups to the native Podman container infrastructure shipped e.g. with RHEL/CentOS/Rocky. It is already possible to build and run the different container flavors as `root` with Podman on Rocky 8 and 9 (see the sketch below). The build process is [slow](https://github.com/containers/podman/issues/13226) compared to Docker even with the `overlay` driver, and there may still be a few other rough edges.
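For reference, a root build and run with Podman and the `overlay` driver might look like the following; the Dockerfile flavor name and image tag are hypothetical examples:

```sh
# Minimal sketch of building and running a container flavor as root
# with Podman on Rocky; --storage-driver is Podman's global option
# for selecting the (faster) overlay storage backend, and must match
# between build and run. Flavor name and image tag are hypothetical.
sudo podman --storage-driver=overlay build \
    -f Dockerfile.rocky9 -t docker-migrid .
sudo podman --storage-driver=overlay run -d --name migrid docker-migrid
```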
No due date • 2/2 issues closed

We've introduced a `rocky8` flavor of Dockerfile to provide `migrid` in Rocky8 containers. With Rocky8 moving from Python2 to Python3, and with some of the dependencies we rely on gone, it requires some work and testing to stabilize the setup. With CentOS7 going EoL on June 30th 2024 we need to move to Rocky, so at least this 8.x series. Standard data migrid sites run well on Python2 and have also seen more testing with Python3, so running them on Rocky8 should mostly be no problem. Sensitive data migrid sites run well on Python2 but only gained Python3 support more recently, so they may need more care and testing on the Rocky8 platform.
Overdue by 1 year • Due by June 1, 2024 • 2/2 issues closed

We've introduced a `rocky9` flavor of Dockerfile to provide `migrid` in Rocky9 containers. With Rocky9 completely dropping Python2 support along with some of the dependencies we rely on, it requires some work and a lot of testing to stabilize the setup. With CentOS7 going EoL on June 30th 2024 we need to move to Rocky, and if possible this 9.x series. Standard data migrid sites have been tested more with Python3, so they should mostly just run. Sensitive data migrid sites only gained Python3 support more recently, so they may need more care and testing on the Rocky9 platform.