---
title: 'Advanced Usage'
description: 'Configure the Dreadnode Platform for remote deployments and custom environments'
public: true
---

The `dreadnode` Platform can be configured for advanced deployment scenarios such as remote databases, proxy hosts, and external ClickHouse clusters.
These options are managed via the environment files (`.dreadnode-api.env` and `.dreadnode-ui.env`).

<Warning>
Modifying these files will impact your local instance. Always back up your configurations before making changes.
</Warning>

---

### Using a Proxy Host (Remote UI Access)

If you are running the UI on a remote host (not `localhost`), configure the **proxy host** (the domain/IP that external users will access):

```bash
# .dreadnode-ui.env
PROXY_HOST=platform.example.com
ALLOWED_HOSTS="platform.example.com"
```

* `PROXY_HOST` defines the hostname where users will access the UI.
* `ALLOWED_HOSTS` must include the same value; it is also passed to the Content Security Policy.
* Update DNS or reverse proxy settings (e.g., Nginx, Caddy) to point traffic to the correct container (see the sketch below).
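
For example, Caddy can handle both the reverse proxy and TLS termination in one step. This is a minimal sketch, not a required setup: the upstream address `localhost:3000` is an assumption, so substitute the host and port your UI container actually exposes.

```bash
# Hypothetical minimal reverse proxy: Caddy obtains a TLS certificate for
# the domain and forwards incoming traffic to the local UI container
caddy reverse-proxy --from platform.example.com --to localhost:3000
```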

---

### Configuring a Remote Database

By default, the API service connects to a local Postgres database running in Docker.
To use a **remote database**, update the following variables in `.dreadnode-api.env`:

```bash
# .dreadnode-api.env
DATABASE_USER=myuser
DATABASE_PASSWORD=secure-password
DATABASE_NAME=platform
DATABASE_PORT=5432
DATABASE_HOST=db.example.com
```

* `DATABASE_HOST` should be the hostname or IP of your Postgres server.
* Ensure that the Postgres server accepts external connections, is reachable from your Docker containers, and that your firewall/security groups allow access.
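
Before restarting the platform, you can verify connectivity with a throwaway `psql` client container. This is a sketch that assumes the variables from `.dreadnode-api.env` are exported in your shell:

```bash
# Prints a single row containing "1" if the remote Postgres is reachable
# and the credentials are valid
docker run --rm -e PGPASSWORD="$DATABASE_PASSWORD" postgres:16 \
  psql -h "$DATABASE_HOST" -p "${DATABASE_PORT:-5432}" \
    -U "$DATABASE_USER" -d "$DATABASE_NAME" -c 'SELECT 1;'
```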

<Warning>
After you change the database configuration, the platform will start with a blank schema unless you migrate existing data. Make sure you back up your local database if you need persistence.
</Warning>

---

### Using a Remote ClickHouse Cluster

By default, ClickHouse runs locally in Docker. To use a **remote ClickHouse cluster**, adjust `.dreadnode-api.env`:

```bash
# .dreadnode-api.env
STRIKES_CLICKHOUSE_HOST=clickhouse.example.com
STRIKES_CLICKHOUSE_PORT=8443
STRIKES_CLICKHOUSE_USER=clickuser
STRIKES_CLICKHOUSE_PASSWORD=secure-password
STRIKES_CLICKHOUSE_DATABASE=platform
```

* `STRIKES_CLICKHOUSE_HOST` should be your remote server hostname.
* Ensure the cluster is reachable from the host running the platform (a quick check follows below).
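
ClickHouse exposes a built-in `/ping` endpoint on its HTTP(S) interface, so a quick reachability check is easy. This sketch assumes the HTTPS interface is enabled on the port configured above:

```bash
# A healthy, reachable server answers "Ok."
curl -sS "https://clickhouse.example.com:8443/ping"
```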

---

## Back Up & Restore Docker Volumes (Best Practices)

Docker volumes persist on disk, but you should still have a reliable backup strategy, such as:

* **Logical backups (application-aware)** — recommended for databases (e.g., `pg_dump` for Postgres, `BACKUP` for ClickHouse).
* **Volume snapshots (filesystem-level)** — copies of the Docker volumes themselves, useful for whole-system disaster recovery.

Use logical backups for consistency and faster point-in-time recovery; use volume snapshots for belt-and-suspenders disaster recovery.

### Identify Your Volumes

```bash
# Show compose resources (project must be in this directory)
docker compose ls
docker compose config | grep -A2 volumes:
docker volume ls | grep <your-project-prefix>
```

> Compose typically prefixes volumes using the project name (e.g., `dreadnode_api-data`, `dreadnode_db-data`).
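
For the volume-snapshot side of the strategy, one simple file-level approach is to stop the stack and archive the volume contents from a helper container. This is a sketch: the volume name `dreadnode_db-data` is an example, so substitute the names reported by `docker volume ls`.

```bash
# Stop services so files on the volume are quiescent, then archive the
# volume contents into a timestamped tarball in the current directory
dreadnode platform stop
docker run --rm \
  -v dreadnode_db-data:/data:ro \
  -v "$PWD":/backup \
  alpine tar czf /backup/db-data_$(date +%Y%m%d).tar.gz -C /data .
dreadnode platform start
```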

---

### Logical Backups

#### Postgres (pg_dump)

Back up using `pg_dump` from the running DB container or a throwaway client:

```bash
# From host, run a temporary Postgres client container to dump the remote/local DB;
# -Fc writes a custom-format dump to stdout, which is redirected to a file
docker run --rm \
  -e PGPASSWORD=$DATABASE_PASSWORD \
  postgres:16 \
  pg_dump -h $DATABASE_HOST -p ${DATABASE_PORT:-5432} \
    -U $DATABASE_USER -d $DATABASE_NAME \
    -Fc > postgres_$(date +%Y%m%d_%H%M%S).dump
```

Restore:

```bash
# Create target DB first if needed:
# createdb -h $DATABASE_HOST -p ${DATABASE_PORT:-5432} -U $DATABASE_USER $DATABASE_NAME

cat postgres_YYYYMMDD_HHMMSS.dump | docker run -i --rm \
  -e PGPASSWORD=$DATABASE_PASSWORD \
  postgres:16 \
  pg_restore -h $DATABASE_HOST -p ${DATABASE_PORT:-5432} \
    -U $DATABASE_USER -d $DATABASE_NAME --clean --if-exists
```

<Info>
Use the same major version of `postgres` in your dump/restore container as your server for best compatibility.
</Info>
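
To confirm the restore worked, the same throwaway client can list the restored tables (a sketch, reusing the exported variables from above):

```bash
# \dt lists the tables in the restored database
docker run --rm -e PGPASSWORD="$DATABASE_PASSWORD" postgres:16 \
  psql -h "$DATABASE_HOST" -p "${DATABASE_PORT:-5432}" \
    -U "$DATABASE_USER" -d "$DATABASE_NAME" -c '\dt'
```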

#### ClickHouse (native BACKUP)

Recent ClickHouse releases support native backups to S3/MinIO (the `BACKUP` command requires ClickHouse 22.8 or later):

```sql
-- From a clickhouse-client shell or HTTP API; the S3() target takes an
-- HTTPS endpoint URL, an access key ID, and a secret key
BACKUP DATABASE platform TO S3(
    'https://my-clickhouse-backups.s3.amazonaws.com/platform/{timestamp}',
    'YOUR_KEY_ID', 'YOUR_SECRET'
);
```

Restore:

```sql
RESTORE DATABASE platform FROM S3(
    'https://my-clickhouse-backups.s3.amazonaws.com/platform/<backup-timestamp>',
    'YOUR_KEY_ID', 'YOUR_SECRET'
);
```
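
A quick post-restore check can go over the same HTTP(S) interface, since ClickHouse accepts queries as a URL parameter with basic auth (a sketch using the example credentials from above):

```bash
# Lists the tables in the restored database
curl -sS -u "clickuser:secure-password" \
  "https://clickhouse.example.com:8443/?query=SHOW%20TABLES%20FROM%20platform"
```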

<Warning>
For consistent backups, prefer **logical** methods (pg_dump / ClickHouse BACKUP) rather than copying live database files.
</Warning>

---

### Example: Hybrid Deployment

For a resilient hybrid deployment, you might:

* Run **API & UI** services in Docker on a cloud VM.
* Point the **database** to a managed Postgres (e.g., AWS RDS).
* Use a **remote ClickHouse cluster** (e.g., ClickHouse Cloud on AWS).
* Store artifacts in **AWS S3** (with a **CloudFront** CDN set as `S3_AWS_EXTERNAL_ENDPOINT_URL`).
* Expose the UI via a **proxy host** with TLS termination.
* Schedule nightly **pg_dump** & ClickHouse **BACKUP** runs to S3, weekly volume snapshots, and monthly test restores (see the scheduling sketch below).
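
A minimal scheduling sketch for that cadence follows. Both wrapper scripts are hypothetical, not shipped with the platform: `backup.sh` would wrap the `pg_dump` and ClickHouse `BACKUP` commands shown earlier, and `snapshot-volumes.sh` the volume archiving step.

```bash
# Hypothetical crontab entries: nightly logical backups at 02:00,
# weekly volume archives on Sundays at 03:00
0 2 * * * /opt/dreadnode/backup.sh >> /var/log/dreadnode-backup.log 2>&1
0 3 * * 0 /opt/dreadnode/snapshot-volumes.sh >> /var/log/dreadnode-backup.log 2>&1
```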

---

<Info>
Whenever you make changes to `.dreadnode-api.env` or `.dreadnode-ui.env`, restart the platform with:

```bash
dreadnode platform stop
dreadnode platform start
```

</Info>