This repository was archived by the owner on Dec 12, 2023. It is now read-only.

Commit c17d44a

Created voripos-domain-data

This has been split from the voripos repo, and the README updated to include info on Homebrew.

10 files changed: +235 −0 lines

.dockerignore (+2)

```
*
!docker-build-resources/
```

.env (+1)

```
VORIPOS_DATA_DIR="~/Library/Containers/com.vori.VoriPOS/Data/Library/Application Support"
```

.gitignore (+3)

```
.idea/
data/
*.tar.gz
```

Dockerfile (+31)

```dockerfile
FROM alpine:latest AS fswatch

RUN apk add --no-cache autoconf alpine-sdk

RUN rm /usr/include/sys/inotify.h
RUN wget https://github.com/emcrisostomo/fswatch/releases/download/1.17.1/fswatch-1.17.1.tar.gz \
  && tar -xzvf fswatch-1.17.1.tar.gz \
  && cd fswatch-1.17.1 \
  && ./configure \
  && make \
  && make install \
  && rm -rf /fswatch-1.17.1

FROM alpine:latest

# Install fswatch
COPY --from=fswatch /usr/local/bin/fswatch /usr/local/bin/fswatch
COPY --from=fswatch /usr/local/lib/libfswatch.so* /usr/local/lib/

# LiteFS setup
RUN apk add --no-cache autoconf alpine-sdk bash fuse3 sqlite ca-certificates
COPY --from=flyio/litefs:0.5 /usr/local/bin/litefs /usr/local/bin/litefs
COPY docker-build-resources/etc/litefs.yml /etc/litefs.yml

ENV LITEFS_DB_DIRECTORY=/litefs
ENV LITEFS_INTERNAL_DATA_DIRECTORY=/var/lib/litefs

WORKDIR /
COPY docker-build-resources/sync.sh /sync.sh

ENTRYPOINT litefs mount
```

README.md (+62)

# VoriPOS Domain Sync

Domain data (e.g., products, tax rates) is data that is mutated on Vori's servers (typically by Dashboard users). This data is consumed by the POS to facilitate shopper checkout. We use [LiteFS Cloud](https://fly.io/docs/litefs/) to automatically replicate a SQLite database generated by Vori.

LiteFS requires a virtual filesystem that does not run on macOS, so we run it in a Docker container. This complicates things, because the data on the virtual filesystem inside the container must be made visible to the host, and we cannot simply expose a host volume: host volumes are not compatible with the virtual filesystem. Our somewhat-hacky workaround is to have `fswatch` poll the directory where LiteFS tracks internal state, and copy the database to a shared host volume. This is not ideal, but it works well enough for our purposes.
## Installation

This service is distributed via Homebrew (the formula is `voripos-domain-data`):

```shell
brew tap voriteam/voripos
brew install voripos-domain-data
brew services start voripos-domain-data
```
## Local development

A LiteFS Cloud token is required to set up syncing. The token can be pulled from Fly.io, or ask in [#engineering](https://voriworkspace.slack.com/archives/CS49ASVEU).

Add the token to the `.env` file, and start Docker Compose with:

```shell
docker compose up
```
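For reference, the resulting `.env` file might look like the following sketch (the `VORIPOS_DATA_DIR` line is already present; the token value below is a placeholder, not a real token):

```shell
# .env — read by Docker Compose and passed into the litefs container
VORIPOS_DATA_DIR="~/Library/Containers/com.vori.VoriPOS/Data/Library/Application Support"
LITEFS_CLOUD_TOKEN=paste-your-litefs-cloud-token-here
```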
This will connect to LiteFS Cloud and download the database to `~/Library/Containers/com.vori.VoriPOS/Data/Library/Application Support/Domain.sqlite3`. When the database is synced, a query is run to output the DB metadata and row counts. If you don't see this output after a few seconds, something may be wrong with the configuration.

You may see other database names when connecting to the `vori-demo` cluster. These can be ignored: LiteFS Cloud does not currently support complete database deletion, and we do not copy those databases to the host volume.

Need to start from scratch? Kill the containers and restart:

```shell
docker compose down
```
## Distribution

The Docker image is pulled by the POS machines. If you change the image, push it!

You may need to set up a builder first:

```shell
docker buildx create --name mybuilder --bootstrap --use
```

This command builds _and pushes_ the images for both dev and production:

```shell
docker buildx build --platform linux/amd64,linux/arm64 -t us-docker.pkg.dev/vori-dev/pos/domain-data:latest -t us-docker.pkg.dev/vori-1bdf0/pos/domain-data:latest --push .
```
### Homebrew

The POS machines run a service installed via Homebrew. Create a release on GitHub, and follow the instructions at https://github.com/voriteam/homebrew-voripos to update the tap with the latest version.

docker-build-resources/etc/litefs.yml (+22)

```yaml
# This directory is where your application will access the database.
fuse:
  dir: "${LITEFS_DB_DIRECTORY}"

# This directory is where LiteFS will store internal data.
# You must place this directory on a persistent volume.
data:
  dir: "${LITEFS_INTERNAL_DATA_DIRECTORY}"

lease:
  type: "static"

  # Required. The URL for the primary node's LiteFS API.
  # Note: replace `primary` with the appropriate hostname for your primary node!
  advertise-url: "http://litefs:20202"

  # Specifies whether the node can become the primary. If using
  # "static" leasing, this should be set to true on the primary
  # and false on the replicas.
  candidate: $IS_PRIMARY

exec: "sh sync.sh"
```

docker-build-resources/sync.sh (+69)

```bash
#!/bin/bash

set -e
set +v
set +x

Color_Off='\033[0m' # Text Reset

# Regular Colors
Red='\033[0;31m'    # Red
Green='\033[0;32m'  # Green
Yellow='\033[0;33m' # Yellow


echo "Started sync.sh"

# NOTE (clintonb): This is a hack, but it works!
# I tried lsyncd + rsync, but the changes were never properly reflected on the
# host without reloading the entire database with `.load`. Also, lsyncd uses inotify
# events, which don't work with LiteFS + FUSE. This solution works because we
# execute `.restore` to properly update the DB.
sync_database() {
  name="$1"
  path="$LITEFS_DB_DIRECTORY/$name"

  echo "Attempting to sync $path..."

  if test -f "$path"; then
    sourcePath="$LITEFS_DB_DIRECTORY/$name"
    destPath="/host-data/$name"

    stat "$sourcePath"

    if test -s "$sourcePath"; then
      # NOTE: We execute the restore in the container since it is more performant than restoring on a host volume.
      echo -e "${Yellow}Copying+restoring $sourcePath to $destPath...${Color_Off}"
      time sqlite3 "$destPath" ".restore $sourcePath"

      echo -e "${Green}Successfully synced ${name} to host${Color_Off}"
      sqlite3 "$destPath" ".mode table" "SELECT * FROM metadata;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' departments' FROM departments;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' tax rates' FROM tax_rates;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' products' FROM products;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' product barcodes' FROM product_barcodes;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' promotions' FROM promotions;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' offers' FROM offers;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' offer benefits' FROM offer_benefits;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' offer conditions' FROM offer_conditions;"
      sqlite3 "$destPath" "SELECT COUNT(*) || ' product ranges' FROM product_ranges;"
      sqlite3 "$destPath" ".mode table" "ANALYZE; SELECT * FROM sqlite_stat1;"
    else
      echo -e "${Red}${sourcePath} has not been fully replicated from LiteFS Cloud${Color_Off}"
    fi
  else
    echo -e "${Red}${path} does not yet exist${Color_Off}"
  fi
}

sync_databases() {
  sync_database "$DOMAIN_DB_NAME"
}

sync_databases

# NOTE: We watch the internal data directory because the DB directory uses FUSE,
# and cannot be easily monitored with fswatch.
watched_path="$LITEFS_INTERNAL_DATA_DIRECTORY/dbs/$DOMAIN_DB_NAME"
echo "Running fswatch for $watched_path"
fswatch -or "$watched_path" | while read -r f; do echo "Change detected in $f files" && sync_databases; done
```
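The `.restore` trick the script relies on can be exercised in isolation with throwaway databases (the paths below are scratch files, not the real LiteFS paths):

```shell
# sqlite3's ".restore" rewrites the destination database from the source in a
# single operation, unlike a plain file copy, which can expose a half-written DB.
src="$(mktemp -d)/source.sqlite3"
dest="$(mktemp -d)/dest.sqlite3"
sqlite3 "$src" "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);"
sqlite3 "$src" "INSERT INTO products (name) VALUES ('apple'), ('banana');"
sqlite3 "$dest" ".restore $src"
count=$(sqlite3 "$dest" "SELECT COUNT(*) FROM products;")
echo "$count"   # prints 2
```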

docker-compose.yml (+17)

```yaml
version: "3.3"
services:
  litefs:
    privileged: true
    restart: always
    image: us-docker.pkg.dev/vori-1bdf0/pos/domain-data:latest
    build:
      context: .
    environment:
      # NOTE (clintonb): This is a total hack since we are running in a scenario where we do not have a true primary.
      # The generator job will take over as primary and write data. There is zero expectation of data being written
      # from this Docker container, and any writes will be overwritten by the generator.
      IS_PRIMARY: true
      LITEFS_CLOUD_TOKEN: ${LITEFS_CLOUD_TOKEN}
      DOMAIN_DB_NAME: Domain.sqlite3
    volumes:
      - ${VORIPOS_DATA_DIR}:/host-data
```

voripos-domain-data.rb (+17)

```ruby
class VoriposDomainData < Formula
  url "file:///Users/clintonb/workspace/vori/vori-pos/data-synchronization/domain-data/domain-data.tar.gz"
  version "0.0.1"
  sha256 "09066ceae4e8f4241b10690037864def81439017db80812becc99ae5af6355ad"

  def install
    # Move everything under #{libexec}/
    libexec.install Dir["*"]

    # Then write executables under #{bin}/
    bin.write_exec_script libexec/"voripos-domain-sync.sh"
  end

  service do
    run opt_bin/"voripos-domain-sync.sh"
    keep_alive true
  end
end
```

voripos-domain-sync.sh (+11)

```bash
#!/bin/bash

# This is needed to locate the Docker executable.
export PATH="/usr/local/bin:$PATH"

# NOTE: Bash must be given Full Disk Access in order to read the user defaults.
# See https://www.kith.org/jed/2022/02/15/launchctl-scheduling-shell-scripts-on-macos-and-full-disk-access/.
export LITEFS_CLOUD_TOKEN=$(defaults read com.vori.VoriPOS litefsCloudToken)
export VORIPOS_DATA_DIR="~/Library/Containers/com.vori.VoriPOS/Data/Library/Application Support"

docker compose -f "$(dirname -- "$0")/docker-compose.yml" down
docker compose -f "$(dirname -- "$0")/docker-compose.yml" up
```
