Data Persistence #3

Open · wants to merge 1 commit into master
17 changes: 12 additions & 5 deletions Dockerfile
@@ -3,12 +3,20 @@ MAINTAINER [email protected]

RUN apt-get update && apt-get upgrade -y \
&& rm -rf /var/lib/apt/lists/*

COPY ["scripts/", "/docker-entrypoint-initdb.d/"]
COPY ["bin/initialize.sh", "bin/preflight.sh", "bin/persistence-cleanup.sh", "/usr/local/bin/"]
RUN chmod 755 /usr/local/bin/persistence-cleanup.sh; \
chmod 755 /usr/local/bin/initialize.sh; \
chmod 755 /usr/local/bin/preflight.sh; \
sed -i '/bin\/bash/a /usr/local/bin/initialize.sh' /usr/local/bin/docker-entrypoint.sh; \
sed -i '/exec "$@"/i /usr/local/bin/preflight.sh' /usr/local/bin/docker-entrypoint.sh

# we need to touch and chown config files, since we can't write as the mysql user
RUN touch /etc/mysql/conf.d/galera.cnf \
&& touch /etc/mysql/conf.d/cust.cnf \
&& chown mysql.mysql /etc/mysql/conf.d/galera.cnf \
&& chown mysql.mysql /etc/mysql/conf.d/cust.cnf \
&& chown mysql.mysql /docker-entrypoint-initdb.d/*.sql

# we expose all cluster-related ports
@@ -22,9 +30,8 @@ EXPOSE 3306 4444 4567 4568
ENV GALERA_USER=galera \
GALERA_PASS=galerapass \
MAXSCALE_USER=maxscale \
    MAXSCALE_PASS=maxscalepass \
CLUSTER_NAME=docker_cluster \
MYSQL_ALLOW_EMPTY_PASSWORD=1

CMD ["mysqld"]
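The two `sed` edits in the Dockerfile hook the new scripts into the stock entrypoint: `initialize.sh` is appended after the shebang line, and `preflight.sh` is inserted before the final `exec`. A minimal sketch of that effect, using a throwaway file as a stand-in for `/usr/local/bin/docker-entrypoint.sh`:

```shell
# Stand-in for the stock docker-entrypoint.sh.
entrypoint=$(mktemp)
cat > "$entrypoint" <<'EOF'
#!/bin/bash
set -e
exec "$@"
EOF

# Append initialize.sh after the shebang, insert preflight.sh before exec.
sed -i '/bin\/bash/a /usr/local/bin/initialize.sh' "$entrypoint"
sed -i '/exec "$@"/i /usr/local/bin/preflight.sh' "$entrypoint"

result=$(cat "$entrypoint")
echo "$result"
rm -f "$entrypoint"
```

The resulting file runs `initialize.sh` first (line 2) and `preflight.sh` immediately before `exec "$@"`, which is what the image relies on at container start.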
3 changes: 2 additions & 1 deletion README.md
@@ -112,4 +112,5 @@ The result should report the cluster up and running:
10.0.0.5 | 10.0.0.5 | 3306 | 0 | Master, Synced, Running
-------------------+-----------------+-------+-------------+--------------------


### Data persistence
If you need data persistence, mount a volume to /data in the container (using --mount). A subfolder of /data will be created for each container (named by its IP address) and the MySQL datadir will be redirected there. Ensure /data is owned by 999:999.
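A hypothetical invocation, assuming a swarm service named `galera` and an image named `galera-mariadb` (both placeholder names); `DB_SERVICE_NAME` must match the service name so the init scripts can resolve `tasks.<service>`:

```
# Placeholder names throughout; adapt to your stack.
docker service create \
  --name galera \
  --env DB_SERVICE_NAME=galera \
  --mount type=volume,source=galera-data,destination=/data \
  galera-mariadb

# One way to give the volume root the required ownership (uid/gid 999):
docker run --rm --mount type=volume,source=galera-data,destination=/data \
  alpine chown 999:999 /data
```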
23 changes: 23 additions & 0 deletions bin/initialize.sh
@@ -0,0 +1,23 @@
#!/bin/bash
echo "[mysqld]" > /etc/mysql/conf.d/cust.cnf

# If a /data folder exists, assume the mariadb datadir should move there (data persistence).
if [ -d "/data" ]; then

    # Find this container's IP address. There may be several interfaces, so
    # cross-reference each against the swarm task list for $DB_SERVICE_NAME.
    for interface in $(ip addr | grep global | awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); print substr($0,RSTART,RLENGTH)}'); do
        for swarm_service in $(getent hosts "tasks.$DB_SERVICE_NAME" | awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); print substr($0,RSTART,RLENGTH)}'); do
            if [ "$interface" == "$swarm_service" ]; then
                THIS_DB_SERVICE_IP=$interface
            fi
        done
    done

    # Point the datadir at /data/<container IP>.
    mkdir -p "/data/$THIS_DB_SERVICE_IP"
    echo "datadir = /data/$THIS_DB_SERVICE_IP" >> /etc/mysql/conf.d/cust.cnf
    #echo "socket = /data/$THIS_DB_SERVICE_IP/mysql.sock" >> /etc/mysql/conf.d/cust.cnf

    # Cleanup - remove data folders for nodes no longer in the cluster. Don't wait for this.
    /usr/local/bin/persistence-cleanup.sh &
fi
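The IP-extraction pipeline above pulls every IPv4 address out of `ip addr`-style output. A self-contained sketch with canned sample text (the addresses are illustrative) shows the logic without needing a live interface:

```shell
# Two lines in the shape `ip addr` prints for global addresses.
sample='inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
inet 172.18.0.3/16 scope global eth1'

# Same grep/awk pipeline as initialize.sh: keep "global" lines,
# extract the first dotted quad from each.
ips=$(printf '%s\n' "$sample" | grep global | \
  awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); print substr($0,RSTART,RLENGTH)}')
echo "$ips"
```

`match()` stops at the `/` of the CIDR suffix, so only the address itself is printed, one per line.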
40 changes: 40 additions & 0 deletions bin/persistence-cleanup.sh
@@ -0,0 +1,40 @@
#!/bin/bash

# Wait until the service is up and running on this host (give up after ~100s).
healthcheck() {
    /usr/local/bin/docker-healthcheck 2> /dev/null
}
count=0
until healthcheck; do
    sleep 1
    count=$((count + 1))
    if [ "$count" -ge 100 ]; then
        break
    fi
done

if /usr/local/bin/docker-healthcheck 2> /dev/null; then
    # If a /data/<ip> directory doesn't match an active swarm cluster IP
    # address, it should be cleaned up. Only one such directory is removed per run.

    # Collect the IP-named directories under /data and cross-reference each
    # against the swarm task list for $DB_SERVICE_NAME.
    for dir in $(find /data -type d | awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); print substr($0,RSTART,RLENGTH)}'); do
        ip_match=0
        for swarm_service in $(getent hosts "tasks.$DB_SERVICE_NAME" | awk '{match($0,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); print substr($0,RSTART,RLENGTH)}'); do
            if [ "$dir" == "$swarm_service" ]; then
                ip_match=1
            fi
        done

        # Delete this stale folder structure.
        if [ -n "$dir" ] && [ "$ip_match" != 1 ]; then
            echo "Removing stale persistence dir /data/$dir"
            rm -rf "/data/$dir"
            exit
        fi
    done
fi
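The stale-directory detection boils down to: list IP-named subdirectories of the data root, and flag any whose name is not in the active-member list. A sketch using a temp directory in place of /data and a hard-coded member list in place of `getent` (all names here are illustrative):

```shell
# Fake data root with one live and one stale node directory.
data=$(mktemp -d)
mkdir -p "$data/10.0.0.5" "$data/10.0.0.9"
active="10.0.0.5"    # stands in for the getent/awk output

stale=""
for dir in "$data"/*/; do
    ip=$(basename "$dir")
    case " $active " in
        *" $ip "*) ;;                 # still in the cluster, keep it
        *) stale="$stale $ip" ;;      # no active member owns this dir
    esac
done
echo "stale:$stale"
rm -rf "$data"
```

Here `10.0.0.9` is reported stale because no active member claims it, which is exactly the directory `persistence-cleanup.sh` would remove.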
9 changes: 9 additions & 0 deletions bin/preflight.sh
@@ -0,0 +1,9 @@
#!/bin/bash
# Check our configs.
# If a new container is launched with an old data volume, the full docker-entrypoint.sh
# doesn't run, so galera.cnf is not created in the new container.

# Check for galera.cnf and regenerate it if missing or empty.
if [ ! -s /etc/mysql/conf.d/galera.cnf ]; then
    /docker-entrypoint-initdb.d/init_cluster_conf.sh
fi
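The guard relies on `[ ! -s file ]` being true for both a missing file and an empty one, so a container that skipped the entrypoint's init phase still regenerates its config. A tiny sketch of that behavior, with a temp file standing in for galera.cnf and a function standing in for `init_cluster_conf.sh`:

```shell
conf=$(mktemp)               # mktemp creates an *empty* file
regenerated=0
regen() { regenerated=1; }   # stand-in for init_cluster_conf.sh

# Empty file fails -s, so the config gets regenerated.
if [ ! -s "$conf" ]; then
    regen
fi
echo "$regenerated"          # prints 1
rm -f "$conf"
```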
2 changes: 1 addition & 1 deletion scripts/init_cluster_conf.sh
@@ -48,4 +48,4 @@ default-storage-engine = InnoDB
innodb-doublewrite = 1
innodb-autoinc-lock-mode = 2
innodb-flush-log-at-trx-commit = 2
EOF