Conversation

Contributor

@haseebsyed12 haseebsyed12 commented Aug 29, 2025

1. Check all pods are running

kubectl get pods -n openstack | grep mariadb # Should show the 3 Galera replicas (plus the metrics pod) in Running state

NAME                                                              READY   STATUS      RESTARTS      AGE
mariadb-0                                                         2/2     Running     0             34m
mariadb-1                                                         2/2     Running     0             34m
mariadb-2                                                         2/2     Running     0             34m
mariadb-metrics-556d66666-chzfs                                   1/1     Running     0             33m
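
If you prefer a single pass/fail check over eyeballing the list, a readiness wait works too (a sketch; it assumes the MariaDB pods carry the same app.kubernetes.io/instance=mariadb label used for the PVCs below):

kubectl wait --for=condition=Ready pod -l app.kubernetes.io/instance=mariadb -n openstack --timeout=120s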

2. Check storage usage on all 3 replicas: mariadb-0, mariadb-1, and mariadb-2

kubectl exec -n openstack mariadb-0 -- df -h

Filesystem                         Size  Used Avail Use% Mounted on
overlay                            148G   79G   63G  56% /
tmpfs                               64M     0   64M   0% /dev
tmpfs                              126G   20K  126G   1% /etc/pki
/dev/mapper/ubuntu--vg-ubuntu--lv  148G   79G   63G  56% /etc/hosts
shm                                 64M     0   64M   0% /dev/shm
tmpfs                              126G   12K  126G   1% /etc/mysql/conf.d
/dev/rbd6                          9.8G  465M  9.3G   5% /var/lib/mysql
/dev/rbd5                           89M   15K   87M   1% /etc/mysql/mariadb.conf.d
tmpfs                              126G   12K  126G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                               63G     0   63G   0% /proc/acpi
tmpfs                               63G     0   63G   0% /proc/scsi
tmpfs                               63G     0   63G   0% /sys/firmware
tmpfs                               63G     0   63G   0% /sys/devices/virtual/powercap
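
To compare the data volume across all three replicas in one pass, the same exec pattern can be looped (a small sketch; it relies on kubectl exec picking the same default container as the command above):

for i in 0 1 2; do echo "== mariadb-$i =="; kubectl exec -n openstack mariadb-$i -- df -h /var/lib/mysql; done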

3. Check the persistent volume claims backing the cluster

kubectl get pvc -l app.kubernetes.io/instance=mariadb -n openstack

NAME                            STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS            VOLUMEATTRIBUTESCLASS   AGE
galera-mariadb-0                Bound    pvc-eedb7e21-d396-4ccf-9f62-51b7e4c68b85   10Gi       RWO            ceph-block-single       <unset>                 50m
galera-mariadb-1                Bound    pvc-7707e245-a4c9-4515-9160-b77849e7c6f4   10Gi       RWO            ceph-block-single       <unset>                 50m
galera-mariadb-2                Bound    pvc-cf72590c-ccd3-4674-a232-1ea03d0a7f97   10Gi       RWO            ceph-block-single       <unset>                 50m
storage-mariadb-0               Bound    pvc-5ed5fd20-edec-4cbc-81ce-08749ad4541c   10Gi       RWO            ceph-block-single       <unset>                 50m
storage-mariadb-1               Bound    pvc-69ebece2-b144-4294-999c-21c51c669069   10Gi       RWO            ceph-block-single       <unset>                 50m
storage-mariadb-2               Bound    pvc-147963ba-fece-4d33-a696-a9385aa888fd   10Gi       RWO            ceph-block-single       <unset>                 50m
test-mariadb-restore-pvc        Bound    pvc-6954fcff-81f8-4dad-9ba0-90a8119fd4fd   1Gi        RWO            ceph-block-single       <unset>                 113m
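
To double-check that the claims landed on the intended class rather than the old default, the storageClassName can be pulled out directly (a sketch using a custom-columns view):

kubectl get pvc -l app.kubernetes.io/instance=mariadb -n openstack -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName,SIZE:.spec.resources.requests.storage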

4. Check Galera cluster status

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"

Variable_name	Value
wsrep_cluster_size	3

Verify each node is ready to accept queries (repeat against mariadb-1 and mariadb-2)

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "SHOW STATUS LIKE 'wsrep_ready';"

Variable_name	Value
wsrep_ready	ON
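
A couple of other Galera status variables are worth a quick glance while checking cluster health (a sketch; on a healthy node wsrep_local_state_comment should read Synced and wsrep_cluster_status should read Primary):

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "SHOW STATUS WHERE Variable_name IN ('wsrep_local_state_comment','wsrep_cluster_status','wsrep_incoming_addresses');"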

5. Check active DB connections

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "SHOW PROCESSLIST;"

+------+-------------+--------------------+-----------+---------+------+-------------------------+------------------+----------+
| Id   | User        | Host               | db        | Command | Time | State                   | Info             | Progress |
+------+-------------+--------------------+-----------+---------+------+-------------------------+------------------+----------+
|    1 | system user |                    | NULL      | Sleep   | 2177 | wsrep aborter idle      | NULL             |    0.000 |
|    2 | system user |                    | NULL      | Sleep   |    3 | wsrep applier committed | NULL             |    0.000 |
|  200 | nova        | 10.64.50.243:33506 | nova      | Sleep   |    3 |                         | NULL             |    0.000 |
|  201 | nova        | 10.64.50.243:33520 | nova      | Sleep   |    0 |                         | NULL             |    0.000 |
|  204 | cinder      | 10.64.49.87:50416  | cinder    | Sleep   |    8 |                         | NULL             |    0.000 |
|  206 | ironic      | 10.64.49.168:36202 | ironic    | Sleep   |    0 |                         | NULL             |    0.000 |
|  207 | keystone    | 10.64.50.115:55886 | keystone  | Sleep   |  214 |                         | NULL             |    0.000 |
|  208 | keystone    | 10.64.48.181:40190 | keystone  | Sleep   |  214 |                         | NULL             |    0.000 |
|  218 | ironic      | 10.64.49.168:38268 | ironic    | Sleep   |    0 |                         | NULL             |    0.000 |
|  225 | octavia     | 10.64.50.136:48862 | octavia   | Sleep   |   16 |                         | NULL             |    0.000 |
|  276 | neutron     | 10.64.49.159:44110 | neutron   | Sleep   |   91 |                         | NULL             |    0.000 |
|  395 | ironic      | 10.64.49.168:46984 | ironic    | Sleep   |    0 |                         | NULL             |    0.000 |
|  396 | ironic      | 10.64.49.168:46988 | ironic    | Sleep   |    0 |                         | NULL             |    0.000 |
|  402 | ironic      | 10.64.48.155:56070 | ironic    | Sleep   |   63 |                         | NULL             |    0.000 |
|  403 | placement   | 10.64.49.45:59992  | placement | Sleep   |   24 |                         | NULL             |    0.000 |
|  433 | ironic      | 10.64.48.155:36186 | ironic    | Sleep   |   65 |                         | NULL             |    0.000 |
|  464 | nova        | 10.64.50.239:33882 | nova_api  | Sleep   |  197 |                         | NULL             |    0.000 |
|  465 | cinder      | 10.64.50.164:42640 | cinder    | Sleep   |  211 |                         | NULL             |    0.000 |
|  466 | ironic      | 10.64.48.155:38832 | ironic    | Sleep   |  928 |                         | NULL             |    0.000 |
|  467 | octavia     | 10.64.48.216:45830 | octavia   | Sleep   |  453 |                         | NULL             |    0.000 |
|  468 | neutron     | 10.64.49.159:44300 | neutron   | Sleep   |  213 |                         | NULL             |    0.000 |
|  469 | neutron     | 10.64.49.159:44306 | neutron   | Sleep   |  213 |                         | NULL             |    0.000 |
|  470 | nova        | 10.64.49.246:57952 | nova_api  | Sleep   |  211 |                         | NULL             |    0.000 |
|  478 | nova        | 10.64.49.246:59036 | nova      | Sleep   |  212 |                         | NULL             |    0.000 |
|  498 | ironic      | 10.64.50.241:54270 | ironic    | Sleep   | 1033 |                         | NULL             |    0.000 |
|  499 | ironic      | 10.64.50.241:54276 | ironic    | Sleep   | 1030 |                         | NULL             |    0.000 |
|  504 | ironic      | 10.64.48.155:59606 | ironic    | Sleep   | 1030 |                         | NULL             |    0.000 |
|  505 | ironic      | 10.64.48.155:59612 | ironic    | Sleep   | 1035 |                         | NULL             |    0.000 |
|  534 | neutron     | 10.64.50.98:58330  | neutron   | Sleep   |  213 |                         | NULL             |    0.000 |
|  535 | neutron     | 10.64.50.98:58340  | neutron   | Sleep   |  213 |                         | NULL             |    0.000 |
|  536 | neutron     | 10.64.50.98:58350  | neutron   | Sleep   |  213 |                         | NULL             |    0.000 |
|  543 | neutron     | 10.64.50.98:49508  | neutron   | Sleep   | 1255 |                         | NULL             |    0.000 |
|  569 | neutron     | 10.64.50.98:49652  | neutron   | Sleep   |   12 |                         | NULL             |    0.000 |
|  570 | neutron     | 10.64.50.98:49656  | neutron   | Sleep   |   12 |                         | NULL             |    0.000 |
|  801 | ironic      | 10.64.50.241:60010 | ironic    | Sleep   |  692 |                         | NULL             |    0.000 |
| 1078 | root        | localhost          | NULL      | Query   |    0 | starting                | SHOW PROCESSLIST |    0.000 |
+------+-------------+--------------------+-----------+---------+------+-------------------------+------------------+----------+
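
If the full process list is too noisy, a per-user summary gives the same signal (a sketch using the standard information_schema.PROCESSLIST table):

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "SELECT USER, COUNT(*) AS connections FROM information_schema.PROCESSLIST GROUP BY USER ORDER BY connections DESC;"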

6. Finally, test with a few OpenStack CLI commands

openstack image list
openstack network list
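
Beyond read-only CLI calls, a quick write/read round trip through different replicas confirms Galera is actually replicating (a sketch only; it creates and drops a throwaway ha_smoke database, so run it only where that is acceptable):

kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "CREATE DATABASE IF NOT EXISTS ha_smoke; CREATE TABLE IF NOT EXISTS ha_smoke.t (id INT PRIMARY KEY, note VARCHAR(32)); REPLACE INTO ha_smoke.t VALUES (1, 'written on mariadb-0');"
kubectl exec -n openstack mariadb-1 -- mariadb -u root -p -e "SELECT * FROM ha_smoke.t;"   # should return the row written on mariadb-0
kubectl exec -n openstack mariadb-0 -- mariadb -u root -p -e "DROP DATABASE ha_smoke;"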

@haseebsyed12 haseebsyed12 requested a review from a team August 29, 2025 01:57
@haseebsyed12 haseebsyed12 changed the title from "Update MariaDB Operator and enable HA using Galera" to "feat: Update MariaDB Operator and enable HA using Galera" on Aug 29, 2025
@haseebsyed12 haseebsyed12 marked this pull request as ready for review August 29, 2025 02:06
@haseebsyed12 haseebsyed12 force-pushed the mariadb-operator-gradual-upgrade branch 2 times, most recently from 9a4e032 to ed5112c on August 29, 2025 12:13
@haseebsyed12 haseebsyed12 requested a review from skrobul August 29, 2025 14:54
@haseebsyed12 haseebsyed12 force-pushed the mariadb-operator-gradual-upgrade branch 3 times, most recently from 280c9e1 to bc0c5ae on September 1, 2025 08:20
Collaborator

@skrobul skrobul left a comment

ok, now that the storageClass has been created, can you please test if it works correctly?

It will probably require the instances to be recreated one at a time.
It currently uses what was the default before:

❯ kubectl get pvc -l app.kubernetes.io/instance=mariadb
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            VOLUMEATTRIBUTESCLASS   AGE
galera-mariadb-0    Bound    pvc-15a91885-1428-4950-ba3e-58c8d8b5bf77   100Mi      RWO            ceph-block-replicated   <unset>                 3d10h
galera-mariadb-1    Bound    pvc-fcf1a471-5635-438c-962f-e150f667af48   100Mi      RWO            ceph-block-replicated   <unset>                 3d10h
galera-mariadb-2    Bound    pvc-78fb665f-540b-4e25-b639-1b8f83999ab7   100Mi      RWO            ceph-block-replicated   <unset>                 3d10h
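
One possible way to roll the replicas over one at a time (a sketch only; it assumes the operator re-provisions the PVC from the updated template and rejoins the pod to the cluster once both are deleted, and that the remaining two replicas keep quorum in the meantime):

kubectl delete pvc galera-mariadb-2 --wait=false
kubectl delete pod mariadb-2
# wait until mariadb-2 is Running again and wsrep_cluster_size is back to 3, then repeat for mariadb-1 and mariadb-0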

@haseebsyed12 haseebsyed12 force-pushed the mariadb-operator-gradual-upgrade branch from c13481b to 86ae447 on September 1, 2025 16:34
@haseebsyed12 haseebsyed12 force-pushed the mariadb-operator-gradual-upgrade branch 2 times, most recently from 38788b5 to 947e626 on September 2, 2025 01:32
@haseebsyed12 haseebsyed12 force-pushed the mariadb-operator-gradual-upgrade branch from 0336b1d to d999f7d on September 2, 2025 04:46
@haseebsyed12 haseebsyed12 requested a review from skrobul September 2, 2025 04:47
Collaborator

@skrobul skrobul left a comment

What happens when this gets deployed in staging/production?
Are there any manual steps needed for data migration?

  - ReadWriteOnce
resources:
  requests:
    storage: 10Gi
Collaborator

This PVC is used only to store text configuration files - 10Gi is a little bit too much - can we make it 1Gi please?

Contributor Author

sure !!

@haseebsyed12 haseebsyed12 added this pull request to the merge queue Sep 2, 2025
Merged via the queue into main with commit 5ac1672 Sep 2, 2025
17 checks passed
@haseebsyed12 haseebsyed12 deleted the mariadb-operator-gradual-upgrade branch September 2, 2025 07:25