Detaching from an attached container results in an error #24895

Open
job79 opened this issue Dec 23, 2024 · 0 comments · May be fixed by #25083
Labels
jira, kind/bug (Categorizes issue or PR as related to a bug.)

Comments

job79 commented Dec 23, 2024

Issue Description

Detaching with the detach keys (ctrl-p, ctrl-q) from a container session started with podman exec -it results in an error.

Steps to reproduce the issue

  1. Create a new container
    podman run --name container -d alpine sleep infinity
  2. Exec into the newly created container with an interactive shell
    podman exec -it container ash
  3. Detach from the session by pressing ctrl-p, ctrl-q (the same steps are consolidated in the sketch below)
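
For convenience, the same reproduction as a copy-pasteable sketch. The container name, image, and shell are taken from the steps above; the --detach-keys flag only spells out the default sequence and should not be needed:

    # create a long-running container
    podman run --name container -d alpine sleep infinity

    # open an interactive shell in it (the default detach sequence is ctrl-p,ctrl-q)
    podman exec -it --detach-keys ctrl-p,ctrl-q container ash

    # inside the session, press ctrl-p followed by ctrl-q to detach;
    # expected: podman returns to the host prompt immediately with status 0
    # (with this bug it instead hangs for a few seconds and prints the error below)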

Describe the results you received

The session does detach, but it takes several seconds and prints the following error:

ERRO[0005] Container 4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298 exec session 182c1e4c3ded8d9f2b318bd524edcf7b41e95bcad996fc688b9efb78bac59648 error: detached from container 
Error: timed out waiting for file /var/home/job/.local/share/containers/storage/overlay-containers/4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298/userdata/182c1e4c3ded8d9f2b318bd524edcf7b41e95bcad996fc688b9efb78bac59648/exit/4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298

Full terminal session:

fedora $ podman run --name container -d alpine sleep infinity
4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298
fedora $ podman exec -it container ash
/ # ERRO[0005] Container 4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298 exec session 182c1e4c3ded8d9f2b318bd524edcf7b41e95bcad996fc688b9efb78bac59648 error: detached from container 
Error: timed out waiting for file /var/home/job/.local/share/containers/storage/overlay-containers/4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298/userdata/182c1e4c3ded8d9f2b318bd524edcf7b41e95bcad996fc688b9efb78bac59648/exit/4cea188839ea3627df1226058ac9f9f97a5fdfc852f87ac6b0a70db9a7420298
fedora $

Describe the results you expected

Detaching from the container should not take several seconds and should not result in an error.
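
A minimal sketch of how the expected behavior could be verified after detaching, assuming the container from the reproduction steps is still running. The ExecIDs inspect field is an assumption here and may differ between podman versions:

    # podman itself should have exited cleanly and without delay
    echo $?                     # expected: 0

    # the exec'd shell should keep running inside the container
    podman exec container ps    # "ash" should still show up in the process list

    # the exec session should still be tracked by podman
    # (field name assumed; check the full `podman container inspect container` output)
    podman container inspect --format '{{.ExecIDs}}' container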

podman info output

host:
  arch: amd64
  buildahVersion: 1.38.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 98.16
    systemPercent: 0.49
    userPercent: 1.35
  cpus: 16
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: silverblue
    version: "41"
  eventLogger: journald
  freeLocks: 2038
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 524288
      size: 65536
  kernel: 6.12.5-200.fc41.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 6038581248
  memTotal: 14477561856
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241211.g09478d5-1.fc41.x86_64
    version: |
      pasta 0^20241211.g09478d5-1.fc41.x86_64
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.x86_64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 0h 56m 22.00s
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
store:
  configFile: /var/home/job/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 3
    stopped: 2
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/job/.local/share/containers/storage
  graphRootAllocated: 1022488477696
  graphRootUsed: 22809714688
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 6
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /var/home/job/.local/share/containers/storage/volumes
version:
  APIVersion: 5.3.1
  Built: 1732147200
  BuiltTime: Thu Nov 21 01:00:00 2024
  GitCommit: ""
  GoVersion: go1.23.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.3.1

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

Yes

job79 added the kind/bug label Dec 23, 2024
Luap99 self-assigned this Jan 21, 2025
@Luap99 Luap99 linked a pull request Jan 22, 2025 that will close this issue
Luap99 added a commit to Luap99/libpod that referenced this issue Jan 22, 2025
podman exec supports detaching early via the detach key sequence. In that
case the podman process should exit successfully but the container exec
process keeps running.

Given that I could not find any existing test for the detach key
functionality, not even for exec, I added some. This seems to reveal more
issues with podman-remote: podman-remote run detach was broken, which
I fixed here as well, but for podman-remote exec something bigger is
needed. While I thought I fixed most problems there, there was a strange
race condition which caused the process to just hang.

Thus I skipped the remote exec test for now and filed containers#25089 to track
that.

Fixes containers#24895

Signed-off-by: Paul Holzinger <[email protected]>
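
As a side note for anyone reproducing or testing this, the detach sequence itself is configurable. A short sketch of the two usual places it can be set; the containers.conf key name is based on the containers.conf(5) documentation and should be double-checked against the installed version:

    # per invocation, on the command line
    podman exec -it --detach-keys ctrl-x,ctrl-d container ash

    # or globally for the user, in ~/.config/containers/containers.conf:
    # [engine]
    # detach_keys = "ctrl-x,ctrl-d"
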
Luap99 added the jira label Jan 22, 2025