@@ -22,7 +22,7 @@ following setup:

* All docker containers are started via a docker-compose.yml file. Each
of those gets their own subdirectory with the docker-compose.yml
file alongside any additional configuration and data volumes needed.
- * I run caddy with the docker-proxy and caddy-tls-redis plugins in a
+ * I run caddy with the docker-proxy and caddy-storage-redis plugins in a
container as front end proxy.
* Individual containers for the services use caddy docker proxy label
fragments for configuration in the individual docker-compose.yml
@@ -46,9 +46,6 @@ override.conf like this:

```
[Unit]
After=tailscaled.service
-
- [Service]
- Environment="GOOGLE_APPLICATION_CREDENTIALS=/home/adminuser/.serviceaccts/hosting-XXXXXX-XXXXXXXXXXXX.json"
```
The After= section makes sure that docker starts after tailscale is
@@ -85,31 +82,51 @@ ExecStart=/usr/bin/sh -c "/usr/bin/tailscale up; echo tailscale-up"

Experimenting with systemd-resolved might also reduce the number of
overwrites to the resolv.conf file.
- GOOGLE_APPLICATION_CREDENTIALS injects the credentials of
- a service account that has log and error reporting permissions on a
- Google Cloud project. I modify the docker daemon config in
- /etc/docker/dameon.json like this:
+ ## Logging to Google Cloud Logging (Stackdriver)
+
+ The Google Cloud configuration is optional if you prefer to use journalctl
+ on the individual hosts.
+
+ I used to use the gcplogs log driver built into docker, but I am
+ switching all my projects to structured JSON-based logging and was looking
+ for a way to feed that directly into Google Cloud Logging. The docker
+ gcplogs driver does not do this, but I found the project
+ [ngcplogs](https://github.com/nanoandrew4/ngcplogs)
+ that modifies the gcplogs driver to extract the structured log info.
+
+ This driver is a docker plugin and is installed like this:
+
+ ```
+ docker plugin install nanoandrew4/ngcplogs:linux-arm64-v1.3.0
+ ```
+
+ The driver is configured as usual in /etc/docker/daemon.json
+ like this:
```
{
- "log-driver": "gcplogs",
+ "log-driver": "nanoandrew4/ngcplogs:linux-arm64-v1.3.0",
"log-opts": {
+ "exclude-timestamp": "true",
+ "credentials-json": "your_json_escaped_credentials.json_file_content",
"gcp-project": "hosting-XXXXXX",
"gcp-meta-name": "myservername"
}
}
```
- The Google Cloud configuration is optional if you like to use journalctl
- on the individual hosts.
+ The escaped json string for the Google service account with log writing
+ permissions can be generated with the json-escape.go program like this:
+
+ ```
+ go run json-escape.go </path/to/my-service-acct.json
+ ```
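The json-escape.go program itself is not part of this diff; a minimal sketch of what such a tool might look like, assuming it simply reads the service account file from stdin and prints it as a single JSON string literal:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// escapeJSON renders raw file content as one JSON string literal,
// escaping quotes and newlines so the result can be pasted as the
// value of "credentials-json" in daemon.json.
func escapeJSON(data []byte) (string, error) {
	out, err := json.Marshal(string(data))
	if err != nil {
		return "", err
	}
	return string(out), nil
}

func main() {
	data, err := io.ReadAll(os.Stdin)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	escaped, err := escapeJSON(data)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(escaped)
}
```

json.Marshal of a Go string already produces exactly the quoting and escaping JSON requires, so no hand-rolled escaping is needed.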
## Caddy
The root directory of this repo contains the Dockerfile and a
build-docker.sh script to build the container that runs caddy with the
- docker-proxy, tls-redis and caddy-dns/cloudflare plugins. I do build both
- AMD64 and ARM64 versions of each of my containers as my linux systems
- use both of these architectures.
+ docker-proxy, caddy-storage-redis and caddy-dns/cloudflare plugins. I
+ build both AMD64 and ARM64 versions of each of my containers as my linux
+ systems use both of these architectures.
The caddy subdirectory showcases a typical caddy configuration. I do run
caddy in its container with ports forwarded for port 80 and 443 TCP and