Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Hi team, I'm filing this as a bug, but it may just be a docs bug if the behavior I'm describing is actually intended. When force_restart: true is set on a podman_container task and the container already exists, podman_container will ignore any parameters changed in the task.
For example, you could deploy an Ubuntu 22.04 container with force_restart set to true. The container then exists on the target host. Then you change the image to Ubuntu 24.04 and run the playbook again. The container will be restarted, but the new image will be ignored. This applies at least to image and command, but looking at make_started I'd assume it applies to any parameter.
The return statement here means that, when restart/force_restart is true, the function always returns before any parameter changes are applied to the container.
I also couldn't find anything in the docs describing this behavior, and as far as I recall the docker_container module doesn't behave this way either.
Personally, I don't think force_restart should mean "do only a restart and literally NOTHING else". If I need force_restart for some reason, I either have to set it to false every time I want to change anything about my container, or split my configuration into a "restart only" task that contains only the container name, image and force_restart, plus a separate task with my actual container parameters (a rough sketch of that split follows below). This seems very counterintuitive and might have unforeseen consequences that I don't see right now.
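For illustration, this is roughly the two-task split I would have to maintain today; the task names, host group and ubuntu-test container are just placeholders taken from my playbook further below, not a recommended pattern:

---
- name: Ubuntu test
  hosts: "cloud"
  tasks:
    # "Restart only" task: because force_restart is true, this task ignores
    # any parameter changes anyway, so it carries only the bare minimum.
    - name: Restart Ubuntu container
      containers.podman.podman_container:
        name: "ubuntu-test"
        image: docker.io/library/ubuntu:22.04
        force_restart: true
        state: started

    # Separate task carrying the actual container configuration; force_restart
    # is left off here so that parameter changes are actually applied.
    - name: Configure Ubuntu container
      containers.podman.podman_container:
        name: "ubuntu-test"
        image: docker.io/library/ubuntu:22.04
        hostname: "ubuntu"
        volumes:
          - "/etc/localtime:/etc/localtime:ro"
        command: "sleep infinity"
        state: started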
My proposal is to remove the return statement linked above. That means that after the restart is done, the rest of make_started will still be executed and changed parameters will be reflected on the target host. There might be downsides to this particular approach that I don't see. For example, update_container_result() could be called twice in one run of make_started(); I think that could be a problem, or at the very least the restart information might be lost.
If you don't agree with my understanding of force_restart and think that the default when it is true should be to ignore any changed parameters, the recreate parameter could also be leveraged here: check whether it is true and only then apply changed parameters when force_restart: true. Currently, as far as I can tell, even with recreate: true my container will not be recreated as long as force_restart is true.
Happy to create a PR for this by the way - just wanted to discuss whether this behavior is desired and which approach would be best before I do. 😃
Steps to reproduce the issue:
1. Deploy a container with force_restart: true
2. Change a container parameter such as image or command in the playbook and run it again (see the minimal sketch after this list)
3. The container will be restarted, but your changed parameter will not be reflected; image/command or whatever you changed will not be updated until you set force_restart to false and re-run the playbook
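To make the steps concrete, here is a minimal sketch of the task I used; the host group, container name and image tags are just the ones from my setup, and any changed parameter would do:

---
- name: Reproduce force_restart behavior
  hosts: "cloud"
  tasks:
    - name: Run Ubuntu container   # step 1: deploy with force_restart enabled
      containers.podman.podman_container:
        name: "ubuntu-test"
        # step 2: change this to docker.io/library/ubuntu:24.04 and re-run
        image: docker.io/library/ubuntu:22.04
        force_restart: true
        command: "sleep infinity"
        state: started

On the second run the container is restarted but stays on ubuntu:22.04 (step 3).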
Describe the results you received:
Containers are restarted but updated parameters are not reflected when force_restart: true
Describe the results you expected:
Containers are restarted and updated parameters are reflected when force_restart: true
Additional information you deem important (e.g. issue happens only occasionally):
Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman
1.15.4
Output of ansible --version:
ansible [core 2.17.2]
config file = None
configured module search path = ['/home/val/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.12/site-packages/ansible
ansible collection location = /home/val/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522] (/usr/bin/python)
jinja version = 3.1.4
libyaml = True
Output of podman version:
Client: Podman Engine
Version: 4.9.3
API Version: 4.9.3
Go Version: go1.22.2
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64
Output of podman info --debug:
host:
arch: amd64
buildahVersion: 1.33.7
cgroupControllers:
- cpu
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon_2.1.10+ds1-1build2_amd64
path: /usr/bin/conmon
version: 'conmon version 2.1.10, commit: unknown'
cpuUtilization:
idlePercent: 98.88
systemPercent: 0.84
userPercent: 0.28
cpus: 1
databaseBackend: sqlite
distribution:
codename: noble
distribution: ubuntu
version: "24.04"
eventLogger: journald
freeLocks: 2041
hostname: REDACTED
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 6.8.0-40-generic
linkmode: dynamic
logDriver: journald
memFree: 7835648
memTotal: 430129152
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns_1.4.0-5_amd64
path: /usr/lib/podman/aardvark-dns
version: aardvark-dns 1.4.0
package: netavark_1.4.0-4_amd64
path: /usr/lib/podman/netavark
version: netavark 1.4.0
ociRuntime:
name: crun
package: crun_1.14.1-1_amd64
path: /usr/bin/crun
version: |-
crun version 1.14.1
commit: de537a7965bfbe9992e2cfae0baeb56a08128171
rundir: /run/user/1000/crun
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL
os: linux
pasta:
executable: /usr/bin/pasta
package: passt_0.0~git20240220.1e6f92b-1_amd64
version: |
pasta unknown version
Copyright Red Hat
GNU General Public License, version 2 or later
<https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
remoteSocket:
exists: false
path: /run/user/1000/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns_1.2.1-1build2_amd64
version: |-
slirp4netns version 1.2.1
commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
libslirp: 4.7.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.5
swapFree: 0
swapTotal: 0
uptime: 38h 52m 28.00s (Approximately 1.58 days)
variant: ""
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries: {}
store:
configFile: /home/val/.config/containers/storage.conf
containerStore:
number: 3
paused: 0
running: 3
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/val/.local/share/containers/storage
graphRootAllocated: 9283444736
graphRootUsed: 4377153536
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 5
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/val/.local/share/containers/storage/volumes
version:
APIVersion: 4.9.3
Built: 0
BuiltTime: Thu Jan 1 01:00:00 1970
GitCommit: ""
GoVersion: go1.22.2
Os: linux
OsArch: linux/amd64
Version: 4.9.3
Package info (e.g. output of rpm -q podman or apt list podman):
podman/noble-updates,noble-security,now 4.9.3+ds1-1ubuntu0.1 amd64 [installed]
podman/noble 4.9.3+ds1-1build2 amd64
Playbook you run with ansible (e.g. content of playbook.yaml):
---
- name: Ubuntu test
  hosts: "cloud"
  tasks:
    - name: Run Ubuntu container
      containers.podman.podman_container:
        name: "ubuntu-test"
        image: docker.io/library/ubuntu:22.04
        detach: true
        hostname: "ubuntu"
        pull: always
        recreate: true
        force_restart: true
        restart_policy: always
        volumes:
          - "/etc/localtime:/etc/localtime:ro"
        command: "sleep infinity"
        state: started

Command line and output of ansible run with high verbosity
Please NOTE: if you submit a bug about idempotency, run the playbook with --diff option, like:
ansible-playbook -i inventory --diff -vv playbook.yml
Omitted because I think the problem/cause is already identified in the make_started function linked above.
Additional environment details (AWS, VirtualBox, physical, etc.):