
uyuni server 2024.xx , where xx >08 #9461

Open
fritz0011 opened this issue Nov 12, 2024 · 12 comments
Labels
bug Something isn't working kubernetes Kubernetes-related P4

Comments

@fritz0011

Problem description

Please continue to support the Uyuni server release as RPM-based, or make the container/Helm chart compatible with Rancher/k8s.
So far, on a clean install, it is a real pain to set up and run on a Kubernetes cluster.

Steps to reproduce

1. See problem description
...

Uyuni version

2024.08

Uyuni proxy version (if used)

No response

Useful logs

No response

Additional information

No response

@fritz0011 fritz0011 added bug Something isn't working P5 labels Nov 12, 2024
@rjmateus
Member

Did you try the podman installation method?

@fritz0011
Author

# mgradm install kubernetes uyuni-ORG.apps.DOMAIN --organization ORG --helm-uyuni-namespace uyuni-master --logLevel debug
4:48PM INF mgradm/cmd/cmd.go:66 > Welcome to mgradm
4:48PM INF mgradm/cmd/cmd.go:67 > Executing command: kubernetes
4:48PM DBG shared/utils/exec.go:66 > Running: timedatectl show --value -p Timezone
Administrator password:
Confirm the password:
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get node -o jsonpath={.items[0].status.nodeInfo.kubeletVersion}
4:48PM DBG shared/utils/exec.go:40 > Running: kubectl explain ingressroutetcp
4:48PM DBG shared/kubernetes/kubernetes.go:76 > No ingressroutetcp resource deployed error="exit status 1"
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get pod -A -o jsonpath={range .items[*]}{.spec.containers[*].args[0]}{.spec.containers[*].command}{end}
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get -o jsonpath={.items[?(@.metadata.name=="cert-manager")].status.readyReplicas} deploy -A
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get pod -o jsonpath={.items[?(@.metadata.labels.app=="webhook")].metadata.name} -A
4:48PM INF shared/kubernetes/utils.go:74 > Waiting for image of cert-manager-webhook-7d4c676646-spsk4 pod in  namespace to be pulled
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get event -o jsonpath={range .items[?(@.reason=="Failed")]}{.message}{"\n"}{end} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk4 -A
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get event -o jsonpath={.items[?(@.reason=="Pulled")].message} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk4 -A
⠋ kubectl get event -o jsonpath={range .items[?(@.reason=="Failed")]}{.message}{"\n"}{end} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk4 -A
4:48PM DBG shared/utils/exec.go:66 > Running: kubectl get event -o jsonpath={range .items[?(@.reason=="Failed")]}{.message}{"\n"}{end} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk⠋ kubectl get event -o jsonpath={.items[?(@.reason=="Pulled")].message} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk4 -A
⠋ kubectl get event -o jsonpath={range .items[?(@.reason=="Failed")]}{.message}{"\n"}{end} --field-selector involvedObject.name=cert-manager-webhook-7d4c676646-spsk4 -A

...and it gets stuck in an infinite loop...

P.S. cert-manager is installed and functional too, NS: cert-manager
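The loop above appears to retry the cert-manager event query forever. A bounded wait with a deadline would surface the detection failure instead of spinning. A generic sketch in plain shell, where `check_ready` is a hypothetical stand-in for the real kubectl event query; none of this is mgradm's actual code:

```shell
#!/bin/sh
# wait_for TIMEOUT INTERVAL CMD...: poll CMD until it succeeds or the
# deadline passes, instead of retrying forever.
wait_for() {
    _deadline=$(( $(date +%s) + $1 ))
    _interval=$2
    shift 2
    while [ "$(date +%s)" -le "$_deadline" ]; do
        "$@" && return 0
        sleep "$_interval"
    done
    echo "timed out waiting for: $*" >&2
    return 1
}

# Demo stand-in: pretend the image is pulled on the third poll.
COUNT=0
check_ready() {
    COUNT=$((COUNT + 1))
    [ "$COUNT" -ge 3 ]
}

wait_for 5 0 check_ready && echo "image pulled"
```

With a timeout like this, a broken readiness check fails fast with a diagnostic instead of hanging the installer.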

@mcalmer mcalmer added P4 and removed P5 labels Jan 3, 2025
@cbosdo
Contributor

cbosdo commented Jan 6, 2025

@fritz0011 Kubernetes support is a work in progress, and the problem in your case seems to be that the cert-manager start detection code isn't working correctly. I have more plans for Kubernetes, but those involve refactoring the Uyuni setup scripts and I have no idea when I'll be able to do it.

@cbosdo
Contributor

cbosdo commented Jan 7, 2025

@fritz0011 I have recently refactored this part. Did you try with the latest version of mgradm?

@cbosdo cbosdo added the kubernetes Kubernetes-related label Jan 7, 2025
@fritz0011
Author

fritz0011 commented Jan 7, 2025

@cbosdo I have recently refactored this part. Did you try with the latest version of mgradm?

Way better!
So, with the new mgradm + latest uyuni image:

mgradm install kubernetes uyuni-dev.apps.domain.local --organization <org> --volumes-database-size 150Gi --volumes-cache-size 100Gi --volumes-www-size 250Gi --volumes-packages-size 350Gi --kubernetes-uyuni-namespace uyuni

  • deployment completed
    -- PVCs created according to their designated sizes; the uyuni pod starts, but fails to reach the running state:
11:27PM ??? pod/ran-setup-check configured
11:27PM ??? Asserting correct java version...
11:27PM ??? postconf: fatal: open /etc/postfix/main.cf for reading: No such file or directory
11:27PM ??? Job for postfix.service failed because the control process exited with error code.
See "systemctl status postfix.service" and "journalctl -xeu postfix.service" for details.
11:27PM ??? /usr/lib/susemanager/bin/mgr-setup: line 150: /etc/sysconfig/postgresql: No such file or directory
11:27PM ??? ===============================================================================
!
! This shell operates within a container environment, meaning that not all
! modifications will be permanently saved in volumes.
!
! Please exercise caution when making changes, as some alterations may not
! persist beyond the current session.
!
===============================================================================
11:27PM ??? CREATE ROLE
11:27PM ??? cat: /pg_hba.conf: No such file or directory
11:27PM ??? mv: cannot stat '/pg_hba.conf': No such file or directory
11:27PM ??? sed: can't read /etc/apache2/conf.d/zz-spacewalk-www.conf: No such file or directory
11:27PM ??? sed: can't read /etc/apache2/listen.conf: No such file or directory
11:27PM ??? * Loading answer file: /root/spacewalk-answers.
11:27PM ??? ** Database: Setting up database connection for PostgreSQL backend.
11:27PM ??? ** Database: Populating database.
** Database: --clear-db option used.  Clearing database.
11:27PM ??? ** Database: Shutting down spacewalk services that may be using DB.
11:27PM ??? ** Database: Services stopped.  Clearing DB.
11:27PM ??? Running spacewalk-sql --select-mode-direct /usr/share/susemanager/db/postgres/deploy.sql
11:27PM ??? *** Progress: #
11:27PM ???
11:27PM ??? * Performing initial configuration.
11:27PM ??? There was a problem deploying the satellite configuration.  Exit value: 2.
Please examine /var/log/rhn/rhn_installation.log for more information.
11:27PM ??? CA Cert for OS Images: Packaging /etc/pki/trust/anchors/LOCAL-RHN-ORG-TRUSTED-SSL-CERT into /srv/susemanager/salt/images/rhn-org-trusted-ssl-cert-osimage-1.0-1.noarch.rpm
11:27PM ??? ERROR: spacewalk-setup failed
11:27PM ??? command terminated with exit code 2
Error: error running the setup script: exit status 2

However, a very annoying thing during uninstall...

mgradm uninstall --backend kubectl
10:05PM INF Welcome to mgradm
10:05PM INF Executing command: uninstall
10:05PM INF Would run kubectl delete -n uyuni job,deploy,svc,ingress,pvc,cm,secret,issuers,certificates -l app.kubernetes.io/part-of=uyuni
10:05PM INF Would remove file /var/lib/rancher/rke2/server/manifests/uyuni-ingress-nginx-config.yaml

10:05PM WRN Nothing has been uninstalled, run with --force to actually uninstall

The cluster becomes non-operational because of: Would remove file /var/lib/rancher/rke2/server/manifests/uyuni-ingress-nginx-config.yaml

@cbosdo
Contributor

cbosdo commented Jan 8, 2025

@cbosdo I have recently refactored this part. Did you try with the latest version of mgradm?

way better, so, with new mgradm + latest uyuni image::

good!

11:27PM ??? pod/ran-setup-check configured
11:27PM ??? Asserting correct java version...
11:27PM ??? postconf: fatal: open /etc/postfix/main.cf for reading: No such file or directory
11:27PM ??? Job for postfix.service failed because the control process exited with error code.
See "systemctl status postfix.service" and "journalctl -xeu postfix.service" for details.
11:27PM ??? /usr/lib/susemanager/bin/mgr-setup: line 150: /etc/sysconfig/postgresql: No such file or directory

Strange. It seems that the initContainer somehow didn't populate the empty PVs with the files from the image.
Did you start from empty PVs?

However, a very annoying thing during uninstall...

mgradm uninstall --backend kubectl
10:05PM INF Welcome to mgradm
10:05PM INF Executing command: uninstall
10:05PM INF Would run kubectl delete -n uyuni job,deploy,svc,ingress,pvc,cm,secret,issuers,certificates -l app.kubernetes.io/part-of=uyuni
10:05PM INF Would remove file /var/lib/rancher/rke2/server/manifests/uyuni-ingress-nginx-config.yaml

10:05PM WRN Nothing has been uninstalled, run with --force to actually uninstall

The cluster becomes non-operational because of: Would remove file /var/lib/rancher/rke2/server/manifests/uyuni-ingress-nginx-config.yaml

Having to use --force to effectively uninstall is by design... I wonder how that could break the cluster, though, as it's supposed to change nothing. Do you have any additional info to help me? Ideally, running uninstall with --logLevel=debug would tell us which commands are actually run. Do you still have the uyuni-ingress-nginx-config.yaml file? Note that the file has been renamed during the refactoring...

@fritz0011
Author

@cbosdo

So, here it is:

  • restore cluster from snapshot => clean install for uyuni
    same error:

about this file: uyuni-ingress-nginx-config.yaml

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        hsts: "false"
    tcp:
      80: "uyuni/uyuni-tcp:80"
      5432: "uyuni/uyuni-tcp:5432"
      9187: "uyuni/uyuni-tcp:9187"
      4505: "uyuni/uyuni-tcp:4505"
      4506: "uyuni/uyuni-tcp:4506"
      25151: "uyuni/uyuni-tcp:25151"
      5556: "uyuni/uyuni-tcp:5556"
      9800: "uyuni/uyuni-tcp:9800"
      5557: "uyuni/uyuni-tcp:5557"
    udp:
      69: "uyuni/uyuni-udp:69"

++ suggestion:
etc-apache2 Bound pvc-db44858c-1368-4960-a98d-e19d42f32a7c 10Mi RWO longhorn 9h
etc-cobbler Bound pvc-a1b9b7c1-90ef-466f-b822-1d88809a9abd 10Mi RWO longhorn 9h
etc-postfix Bound pvc-b6a32dee-5e8d-44af-a6f3-c78cadfaf6a7 10Mi RWO longhorn 9h
etc-tomcat Bound pvc-fbe1d8dc-34ed-4a23-ad93-1784a6d1f3f1 10Mi RWO longhorn 9h

to use one PVC, like etc-configs, mounted inside the container at /etc/configs

  • apache2 => customized rpm install to /etc/configs/apache2
  • tomcat => customized rpm install to /etc/configs/tomcat
  • postfix => customized rpm install to /etc/configs/postfix

@cbosdo
Contributor

cbosdo commented Jan 8, 2025

@cbosdo

So, here it is:

* restore cluster from snapshot => clean install for uyuni
  same error:

You mean you still have the No such file or directory errors?
You should have a setup job that has been started. This job's pod should have an initContainer filling the volumes with the files that are in the image before mounting them in the final container. It seems this script is failing somehow.

k get pod -A -lapp.kubernetes.io/component=server will give you the name of the setup pod.
Then run something like k logs -n <yourNS> uyuni-setup-<timestamp>-<ID> -c init-volumes to get the logs of that container.
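As a rough illustration of the init-volumes behavior being described, the seeding logic can be sketched like this. This is my simplification, not the actual script; the /tmp/uyuni-demo paths are hypothetical stand-ins, with image-src playing the image filesystem and mnt the mounted PVs:

```shell
#!/bin/sh
# Sketch: an init container mirrors image permissions onto each PV mount
# and copies the image's files in only when the PV is empty (first start).
set -e
base=/tmp/uyuni-demo
rm -rf "$base"
mkdir -p "$base/image-src/etc/postfix" "$base/mnt/etc/postfix"
echo "compatibility_level = 3.4" > "$base/image-src/etc/postfix/main.cf"

for vol in /etc/postfix; do
    src="$base/image-src$vol"
    dst="$base/mnt$vol"
    chmod --reference="$src" "$dst"      # mirror image permissions on the PV
    if [ -z "$(ls -A "$dst")" ]; then    # PV empty => seed it from the image
        cp -a "$src/." "$dst/"
    fi
done
ls "$base/mnt/etc/postfix"
```

If this step fails or is skipped, the final container sees empty /etc directories, which would explain "No such file or directory" errors like the postfix main.cf one.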

about this file: uyuni-ingress-nginx-config.yaml

apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        hsts: "false"
    tcp:
      80: "uyuni/uyuni-tcp:80"
      5432: "uyuni/uyuni-tcp:5432"
      9187: "uyuni/uyuni-tcp:9187"
      4505: "uyuni/uyuni-tcp:4505"
      4506: "uyuni/uyuni-tcp:4506"
      25151: "uyuni/uyuni-tcp:25151"
      5556: "uyuni/uyuni-tcp:5556"
      9800: "uyuni/uyuni-tcp:9800"
      5557: "uyuni/uyuni-tcp:5557"
    udp:
      69: "uyuni/uyuni-udp:69"

At least the file looks correct. I don't understand what is broken in your cluster after the uninstall. Are there any errors or logs to help me?

++ suggestion:
etc-apache2 Bound pvc-db44858c-1368-4960-a98d-e19d42f32a7c 10Mi RWO longhorn 9h
etc-cobbler Bound pvc-a1b9b7c1-90ef-466f-b822-1d88809a9abd 10Mi RWO longhorn 9h
etc-postfix Bound pvc-b6a32dee-5e8d-44af-a6f3-c78cadfaf6a7 10Mi RWO longhorn 9h
etc-tomcat Bound pvc-fbe1d8dc-34ed-4a23-ad93-1784a6d1f3f1 10Mi RWO longhorn 9h

to use one PVC like etc-configs to be mounted inside container /etc/configs

* apache2 => customized rpm install to /etc/configs/apache2
* tomcat => customized rpm install to /etc/configs/tomcat
* postfix => customized rpm install to /etc/configs/postfix

I don't really understand your suggestion. Could you please phrase it completely? We have several mounts for those folders to avoid persisting files we don't care that much about.

@fritz0011
Author

Back online @cbosdo

Here it is:

root@kaps-rke2-m01:~# k get pod -n uyuni -lapp.kubernetes.io/component=server
NAME                     READY   STATUS    RESTARTS         AGE
uyuni-5b977847fc-9d6df   0/1     Running   976 (5m6s ago)   5d19h

root@kaps-rke2-m01:~# k logs -n uyuni uyuni-5b977847fc-9d6df -c init-volumes

+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/var/lib/cobbler /mnt/var/lib/cobbler
+ chmod --reference=/var/lib/cobbler /mnt/var/lib/cobbler
++ ls -A /mnt/var/lib/cobbler
+ '[' -z '.defaults_checksums
.pk
backup
distro_signatures.json
grub_config
kickstarts
lost+found
scripts
snippets
templates' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/var/lib/salt /mnt/var/lib/salt
+ chmod --reference=/var/lib/salt /mnt/var/lib/salt
++ ls -A /mnt/var/lib/salt
+ '[' -z '.ssh
lost+found' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/var/lib/pgsql /mnt/var/lib/pgsql
+ chmod --reference=/var/lib/pgsql /mnt/var/lib/pgsql
++ ls -A /mnt/var/lib/pgsql
+ '[' -z '.bash_profile
data
initlog
lost+found' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/var/cache /mnt/var/cache
+ chmod --reference=/var/cache /mnt/var/cache
++ ls -A /mnt/var/cache
+ '[' -z 'ldconfig
lost+found
private
zypp' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/var/log /mnt/var/log
+ chmod --reference=/var/log /mnt/var/log
++ ls -A /mnt/var/log
+ '[' -z 'alternatives.log
btmp
lastlog
lost+found
private
rhn
suseconnect.log
wtmp
wtmp-20250114.xz
wtmp-20250119.xz
zypp
zypper.log' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/salt /mnt/srv/salt
+ chmod --reference=/srv/salt /mnt/srv/salt
++ ls -A /mnt/srv/salt
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/www /mnt/srv/www
+ chmod --reference=/srv/www /mnt/srv/www
++ ls -A /mnt/srv/www
+ '[' -z '.defaults_checksums
.pk
cobbler
lost+found' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/tftpboot /mnt/srv/tftpboot
+ chmod --reference=/srv/tftpboot /mnt/srv/tftpboot
++ ls -A /mnt/srv/tftpboot
+ '[' -z '.pk
lost+found' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/formula_metadata /mnt/srv/formula_metadata
+ chmod --reference=/srv/formula_metadata /mnt/srv/formula_metadata
++ ls -A /mnt/srv/formula_metadata
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/pillar /mnt/srv/pillar
+ chmod --reference=/srv/pillar /mnt/srv/pillar
++ ls -A /mnt/srv/pillar
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/susemanager /mnt/srv/susemanager
+ chmod --reference=/srv/susemanager /mnt/srv/susemanager
++ ls -A /mnt/srv/susemanager
+ '[' -z 'lost+found
salt' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/srv/spacewalk /mnt/srv/spacewalk
+ chmod --reference=/srv/spacewalk /mnt/srv/spacewalk
++ ls -A /mnt/srv/spacewalk
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/root /mnt/root
+ chmod --reference=/root /mnt/root
chmod: changing permissions of '/mnt/root': Read-only file system
++ ls -A /mnt/root
+ '[' -z '.MANAGER_SETUP_COMPLETE
.bash_history
.gnupg
.ssh
lost+found
spacewalk-answers' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/apache2 /mnt/etc/apache2
+ chmod --reference=/etc/apache2 /mnt/etc/apache2
++ ls -A /mnt/etc/apache2
+ '[' -z '.defaults_checksums
.pk
charset.conv
conf.d
default-server.conf
errors.conf
global.conf
httpd.conf
listen.conf
loadmodule.conf
lost+found
magic
mod_autoindex-defaults.conf
mod_cgid-timeout.conf
mod_info.conf
mod_log_config.conf
mod_mime-defaults.conf
mod_reqtimeout.conf
mod_status.conf
mod_userdir.conf
mod_usertrack.conf
protocols.conf
server-tuning.conf
ssl-global.conf
ssl.crl
ssl.crt
ssl.csr
ssl.key
ssl.prm
uid.conf
vhosts.d' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/rhn /mnt/etc/rhn
+ chmod --reference=/etc/rhn /mnt/etc/rhn
++ ls -A /mnt/etc/rhn
+ '[' -z '.defaults_checksums
.pk
lost+found
rhn.conf
rhn.conf.2025-01-07_23:27:38.401341
rhn.conf.2025-01-07_23:27:38.449002
rhn.conf.2025-01-07_23:27:56.131218
rhn.conf.rpmnew
signing.conf
spacewalk-common-channels.ini
spacewalk-repo-sync
taskomatic.conf
websockify.key' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/systemd/system/multi-user.target.wants /mnt/etc/systemd/system/multi-user.target.wants
+ chmod --reference=/etc/systemd/system/multi-user.target.wants /mnt/etc/systemd/system/multi-user.target.wants
++ ls -A /mnt/etc/systemd/system/multi-user.target.wants
+ '[' -z 'apache2.service
billing-data-service.service
cobblerd.service
lost+found
postfix.service
postgresql.service
rhn-search.service
salt-api.service
salt-master.service
spacewalk.target
taskomatic.service
tomcat.service
wicked.service' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/systemd/system/sockets.target.wants /mnt/etc/systemd/system/sockets.target.wants
chown: failed to get attributes of '/etc/systemd/system/sockets.target.wants': No such file or directory
+ chmod --reference=/etc/systemd/system/sockets.target.wants /mnt/etc/systemd/system/sockets.target.wants
chmod: failed to get attributes of '/etc/systemd/system/sockets.target.wants': No such file or directory
++ ls -A /mnt/etc/systemd/system/sockets.target.wants
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/salt /mnt/etc/salt
+ chmod --reference=/etc/salt /mnt/etc/salt
++ ls -A /mnt/etc/salt
+ '[' -z '.defaults_checksums
.pk
lost+found
master
master.d
roster' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/tomcat /mnt/etc/tomcat
+ chmod --reference=/etc/tomcat /mnt/etc/tomcat
++ ls -A /mnt/etc/tomcat
+ '[' -z '.defaults_checksums
.pk
allowLinking.xslt
catalina.policy
catalina.properties
conf.d
context.xml
jaspic-providers.xml
logging.properties
lost+found
server.xml
server.xml.2025-01-08T09:06:41.058
tomcat-users.xml
tomcat.conf
valve.xslt
web.xml' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/cobbler /mnt/etc/cobbler
+ chmod --reference=/etc/cobbler /mnt/etc/cobbler
++ ls -A /mnt/etc/cobbler
+ '[' -z '.defaults_checksums
.pk
auth.conf
boot_loader_conf
cheetah_macros
dhcp.template
dhcp6.template
dnsmasq.template
genders.template
import_rsync_whitelist
iso
logging_config.conf
lost+found
modules.conf
mongodb.conf
named.template
ndjbdns.template
reporting
rsync.exclude
rsync.template
secondary.template
settings.d
settings.yaml
users.conf
users.digest
version
windows
zone.template
zone_templates' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/sysconfig /mnt/etc/sysconfig
+ chmod --reference=/etc/sysconfig /mnt/etc/sysconfig
++ ls -A /mnt/etc/sysconfig
+ '[' -z '.defaults_checksums
.pk
SuSEfirewall2.d
apache2
billing-data-service
lost+found
mail
network
postfix
postgresql
rhn
ssh
tftp
tomcat
uyuni-configfiles-sync' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/postfix /mnt/etc/postfix
+ chmod --reference=/etc/postfix /mnt/etc/postfix
++ ls -A /mnt/etc/postfix
+ '[' -z '.defaults_checksums
.pk
access
access.lmdb
aliases
aliases.lmdb
bounce.cf.default
canonical
canonical.lmdb
generic
header_checks
helo_access
helo_access.lmdb
lost+found
main.cf
main.cf.default
master.cf
openssl_postfix.conf.in
relay
relay.lmdb
relay_ccerts
relay_ccerts.lmdb
relay_recipients
relay_recipients.lmdb
relocated
relocated.lmdb
sasl_passwd
sasl_passwd.lmdb
sender_canonical
sender_canonical.lmdb
transport
transport.lmdb
virtual
virtual.lmdb' ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/sssd /mnt/etc/sssd
+ chmod --reference=/etc/sssd /mnt/etc/sssd
++ ls -A /mnt/etc/sssd
+ '[' -z lost+found ']'
+ for vol in /var/lib/cobbler /var/lib/salt /var/lib/pgsql /var/cache /var/log /srv/salt /srv/www /srv/tftpboot /srv/formula_metadata /srv/pillar /srv/susemanager /srv/spacewalk /root /etc/apache2 /etc/rhn /etc/systemd/system/multi-user.target.wants /etc/systemd/system/sockets.target.wants /etc/salt /etc/tomcat /etc/cobbler /etc/sysconfig /etc/postfix /etc/sssd /etc/pki/tls
+ chown --reference=/etc/pki/tls /mnt/etc/pki/tls
+ chmod --reference=/etc/pki/tls /mnt/etc/pki/tls
++ ls -A /mnt/etc/pki/tls
+ '[' -z lost+found ']'

@fritz0011
Author

@cbosdo
for this one:

to use one PVC like etc-configs to be mounted inside container /etc/configs

  • apache2 => customized rpm install to /etc/configs/apache2

  • tomcat => customized rpm install to /etc/configs/tomcat

  • postfix => customized rpm install to /etc/configs/postfix

so, instead of having 3 mount points (PVCs) /etc/apache2, /etc/tomcat, /etc/postfix => we can customize the package install: for instance, instead of apache2 using /etc/apache2, use /etc/configs/apache2; for tomcat, instead of /etc/tomcat => /etc/configs/tomcat :)
so, in this case there is just one PVC that maps inside the container to /etc/configs

@cbosdo
Contributor

cbosdo commented Jan 30, 2025

so, instead of having 3 mount points (PVCs) /etc/apache2; /etc/tomcat; /etc/postfix => can customize the package install: for instance, instead of apache2 using /etc/apache2, use /etc/configs/apache2; tomcat, instead of /etc/tomcat => /etc/configs/tomcat :) so, in this case there is just one PVC that maps inside the container to /etc/configs

We could do that with symlinks in /etc, indeed. I'm not sure that's worth the effort.
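For concreteness, the symlink variant of that suggestion could look like the following sketch. Everything here is hypothetical; /tmp/symlink-demo stands in for the container root, and nothing in it is the actual Uyuni layout:

```shell
#!/bin/sh
# One PVC mounted at /etc/configs; the usual /etc paths become symlinks
# into it, so a single volume persists all three service configs.
set -e
root=/tmp/symlink-demo
rm -rf "$root"
for svc in apache2 tomcat postfix; do
    mkdir -p "$root/etc/configs/$svc"
    # relative link: /etc/<svc> -> configs/<svc>
    ln -sfn "configs/$svc" "$root/etc/$svc"
done
readlink "$root/etc/tomcat"   # prints configs/tomcat
```

The trade-off cbosdo points at: the packages would have to tolerate their config directories being symlinks, which is why it may not be worth the effort over separate small PVCs.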

@cbosdo
Contributor

cbosdo commented Jan 30, 2025

Back online @cbosdo

Here it is ::

Do you still have the error from #9461 (comment) after this? It seems to me that the volumes are properly populated.
