
Help: Name resolution from Gluetun or stack sharing container to other containers on network does not work #281

networkprogrammer opened this issue Oct 30, 2020 · 109 comments


networkprogrammer commented Oct 30, 2020

TL;DR: Unable to resolve containers on the same user-defined network using the built-in Docker DNS.

  1. Is this urgent?

    • Yes
    • No
  2. What VPN service provider are you using?

    • PIA
  3. What's the version of the program?

    You are running on the bleeding edge of latest!

  4. What are you using to run the container?

    • Docker Compose
  5. Extra information

Logs:

Working example from container: alpine

$ docker  exec -it alpine /bin/sh
/ # host jackett
jackett has address 172.18.0.2
/ # host gluetun
gluetun has address 172.18.0.5
/ # 

Failing example from container: gluetun (DNS fails)

$ docker  exec -it gluetun /bin/sh
/ # host sonarr
Host sonarr not found: 3(NXDOMAIN)
/ # host jackett
Host jackett not found: 3(NXDOMAIN)
/ # host google.com
google.com has address 172.217.14.238

Configuration file:

version: "3.7"
services:
  gluetun:
    image: qmcgaw/private-internet-access
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      - frontend
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
    # command:
    volumes:
      - /configs/vpn:/gluetun
    environment:
      # More variables are available, see the readme table
      - VPNSP=private internet access

      # Timezone for accurate logs times
      - TZ=America/Los_Angeles

      # All VPN providers
      - USER=username

      # All VPN providers but Mullvad
      - PASSWORD=pwd

      # All VPN providers but Mullvad
      - REGION=CA Vancouver
      
      - PORT_FORWARDING=on
      - PORT_FORWARDING_STATUS_FILE="/gluetun/forwarded_port"
      - PIA_ENCRYPTION=normal
      - GID=1000
      - UID=1000
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24
    restart: always
    
  qbittorrent:
    image: linuxserver/qbittorrent
    container_name: qbittorrent
    network_mode: "service:gluetun"
    volumes:
      - /configs/qbt:/config
      - /media:/media
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped

  jackett:
    image: linuxserver/jackett
    container_name: jackett
    networks:
      - frontend
    ports:
      - 9117:9117/tcp #Jackett
    volumes:
      - /configs/jackett:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK_SET=000
    restart: unless-stopped

  alpine:
    image: alpine
    networks:
      - frontend
    container_name: alpine
    command: tail -f /dev/null

networks:
  frontend:
    name: custom_net
    ipam:
      config:
        - subnet: "172.18.0.0/16"     

Host OS: Ubuntu 20.04 LTS

Hello,
I am trying to set up my containers so that I can reach them by name. My setup consists of Gluetun on the "frontend" network; qBittorrent shares Gluetun's network stack. Two additional containers exist: Jackett and alpine. As you can see from the logs, from the alpine (test) container I am able to resolve the names of the jackett and gluetun containers.

I am, however, unable to do this the other way around, i.e. resolve the names of jackett or alpine from the gluetun container. I am sure this has something to do with the DNS over TLS (DOT) setup, but I have tried various things to no avail.

192.168.1.0/24 is my local LAN. I left it in there so that traffic can talk to local LAN services.
Any assistance would be appreciated.

@networkprogrammer networkprogrammer changed the title Help: Name resolution from Gluetun or stack sharing container to other containers on network Help: Name resolution from Gluetun or stack sharing container to other containers on network does not work Oct 30, 2020

qdm12 commented Oct 31, 2020

Hello there! I'll dig more/test it myself tomorrow, but does it work when DOT=off? And indeed, it's most likely due to the DNS over TLS interfering.

@networkprogrammer
Author

@qdm12 , yes, I have tried it with DOT=off and setting DNS_PLAINTEXT_ADDRESS=<local_lan_IP> to no avail. Thanks.


networkprogrammer commented Oct 31, 2020

So I did some more digging. On the alpine container, which does not share the network stack with Gluetun, I checked its /etc/resolv.conf. It points to Docker's embedded DNS server, 127.0.0.11:

$ docker exec -it alpine /bin/sh
/ # cat /etc/resolv.conf 
search local
nameserver 127.0.0.11
options ndots:0
/ # host jackett
jackett has address 172.18.0.3
/ # exit

I then ran the same test on the container that shares the network stack with gluetun: querying the embedded DNS server at 127.0.0.11 directly works, while the default resolution (through DNS over TLS) fails. So the DNS server change is what causes this change in behavior.

$ docker exec -it alpine_vpn /bin/sh
/ # host jackett  127.0.0.11
Using domain server:
Name: 127.0.0.11
Address: 127.0.0.11#53
Aliases: 

jackett has address 172.18.0.3

#searching using DOT
/ # host jackett  
Host jackett not found: 3(NXDOMAIN)

I am not familiar enough with Go or the way the gluetun code works to help with code changes. Is there any configuration in Unbound to send non-FQDN queries to the built-in DNS server and everything else over DOT?

Thanks.


qdm12 commented Nov 1, 2020

Hello @networkprogrammer, sorry, I ran short on time. Anyway, thanks for digging.

The problem I see is that if you use the Docker network DNS resolver, it will be used for resolving everything instead of Unbound (i.e. nslookup google.com 127.0.0.11). Under the hood, the program still uses Unbound, so any Unbound configuration option you can find (from here) can be added. I quickly searched through them, but I'm not sure there is a way to split DNS traffic, i.e. depending on the hostname being resolved.

Let me know if you find anything, I'll be happy to add it to the Go code so you can use it through an env variable.


networkprogrammer commented Nov 1, 2020

Hi @qdm12 ,
So I feel like I am very close, but it seems there are many moving parts. From this issue, it seems we can use the dns option to have queries forwarded from the embedded DNS server to Unbound.

services:
  gluetun:
    image: myvpn
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    ports:
      - 8000:8000/tcp # Built-in HTTP control server
      - 8080:8080/tcp #Qbittorrent
      - 53:53/udp
      - 53:53/tcp

I had to disable systemd-resolved on Ubuntu. This is needed because port 53 is used by the systemd resolver; I cannot map it to gluetun without disabling the resolver.

So /etc/resolv.conf for gluetun points to the embedded DNS server 127.0.0.11. Then, based on the dns option, it queries the embedded server first, and that server in turn points back to Unbound for Internet queries. I validated that from gluetun I can query the embedded DNS server for service names and can hit Unbound at 127.0.0.1 for Internet names.

However, I am failing to get Unbound to respond to queries from anything outside localhost.

14:32:28.483746 IP 172.18.0.1.58826 > 172.18.0.100.53: 12465+ A? sonarr.local. (30)
14:32:28.483775 IP 172.18.0.1.58826 > 172.18.0.100.53: 12465+ A? sonarr.local. (30)
14:32:28.483868 IP 172.18.0.100.53 > 172.18.0.1.58826: 12465 Refused- [0q] 0/0/0 (12)
14:32:28.483868 IP 172.18.0.100.53 > 172.18.0.1.58826: 12465 Refused- [0q] 0/0/0 (12)

With my limited Go knowledge, and without digging too deep into Gluetun, I keep failing when building the image manually. Unbound does not listen to outside queries by default; adding the access-control directive will allow it to respond to queries from outside.

Step 17/31 : RUN go test ./...
 ---> Running in 8dcc5058e905
?       github.com/qdm12/gluetun        [no test files]
?       github.com/qdm12/gluetun/internal/alpine        [no test files]
?       github.com/qdm12/gluetun/internal/cli   [no test files]
ok      github.com/qdm12/gluetun/internal/constants     0.010s
--- FAIL: Test_generateUnboundConf (0.00s)
    conf_test.go:93: 
                Error Trace:    conf_test.go:93
                Error:          Not equal: 
                                expected: "\nserver:\n  cache-max-ttl: 9000\n  cache-min-ttl: 3600\n  do-ip4: yes\n  do-ip6: yes\n  harden-algo-downgrade: yes\n  harden-below-nxdomain: yes\n  harden-referral-path: yes\n  hide-identity: yes\n  hide-version: yes\n  interface: 0.0.0.0\n  key-cache-size: 16m\n  key-cache-slabs: 4\n  msg-cache-size: 4m\n  msg-cache-slabs: 4\n  num-threads: 1\n  port: 53\n  prefetch-key: yes\n  prefetch: yes\n  root-hints: \"/etc/unbound/root.hints\"\n  rrset-cache-size: 4m\n  rrset-cache-slabs: 4\n  rrset-roundrobin: yes\n  tls-cert-bundle: \"/etc/ssl/certs/ca-certificates.crt\"\n  trust-anchor-file: \"/etc/unbound/root.key\"\n  use-syslog: no\n  username: \"nonrootuser\"\n  val-log-level: 3\n  verbosity: 2\n  local-zone: \"b\" static\n  local-zone: \"c\" static\n  private-address: 9.9.9.9\n  private-address: c\n  private-address: d\nforward-zone:\n  forward-no-cache: no\n  forward-tls-upstream: yes\n  name: \".\"\n  forward-addr: 1.1.1.1@853#cloudflare-dns.com\n  forward-addr: 1.0.0.1@853#cloudflare-dns.com\n  forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com\n  forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com\n  forward-addr: 9.9.9.9@853#dns.quad9.net\n  forward-addr: 149.112.112.112@853#dns.quad9.net\n  forward-addr: 2620:fe::fe@853#dns.quad9.net\n  forward-addr: 2620:fe::9@853#dns.quad9.net"
                                actual  : "\nserver:\n  access-control: 172.18.0.0/16\n  cache-max-ttl: 9000\n  cache-min-ttl: 3600\n  do-ip4: yes\n  do-ip6: yes\n  harden-algo-downgrade: yes\n  harden-below-nxdomain: yes\n  harden-referral-path: yes\n  hide-identity: yes\n  hide-version: yes\n  interface: 0.0.0.0\n  key-cache-size: 16m\n  key-cache-slabs: 4\n  msg-cache-size: 4m\n  msg-cache-slabs: 4\n  num-threads: 1\n  port: 53\n  prefetch-key: yes\n  prefetch: yes\n  root-hints: \"/etc/unbound/root.hints\"\n  rrset-cache-size: 4m\n  rrset-cache-slabs: 4\n  rrset-roundrobin: yes\n  tls-cert-bundle: \"/etc/ssl/certs/ca-certificates.crt\"\n  trust-anchor-file: \"/etc/unbound/root.key\"\n  use-syslog: no\n  username: \"nonrootuser\"\n  val-log-level: 3\n  verbosity: 2\n  local-zone: \"b\" static\n  local-zone: \"c\" static\n  private-address: 9.9.9.9\n  private-address: c\n  private-address: d\nforward-zone:\n  forward-no-cache: no\n  forward-tls-upstream: yes\n  name: \".\"\n  forward-addr: 1.1.1.1@853#cloudflare-dns.com\n  forward-addr: 1.0.0.1@853#cloudflare-dns.com\n  forward-addr: 2606:4700:4700::1111@853#cloudflare-dns.com\n  forward-addr: 2606:4700:4700::1001@853#cloudflare-dns.com\n  forward-addr: 9.9.9.9@853#dns.quad9.net\n  forward-addr: 149.112.112.112@853#dns.quad9.net\n  forward-addr: 2620:fe::fe@853#dns.quad9.net\n  forward-addr: 2620:fe::9@853#dns.quad9.net"
                            
                                Diff:
                                --- Expected
                                +++ Actual
                                @@ -2,2 +2,3 @@
                                 server:
                                +  access-control: 172.18.0.0/16
                                   cache-max-ttl: 9000
                Test:           Test_generateUnboundConf
FAIL
FAIL    github.com/qdm12/gluetun/internal/dns   0.008s

Any guidance on how to get past the test step in the image build? I also initially added multiple access-control directives, one for 127.0.0.1/8 and one for the LAN, but the tests complained about duplicate keys.

Thanks again.


qdm12 commented Nov 1, 2020

If you feel like fiddling a bit with Go and gluetun:

  1. See https://github.com/qdm12/gluetun/wiki/Developement-setup#using-vscode-and-docker so you can easily have everything setup and throw it away too
  2. Modify https://github.com/qdm12/gluetun/blob/master/internal/dns/conf_test.go#L46 to match the actual configuration you get from running the test (you can click on run test above the Go test function in VSCode).

I'm afk right now, but I'll add you as a maintainer so you can easily make a branch/PR and I can help fix it up.


qdm12 commented Nov 1, 2020

I'm still trying to find a zero-config change solution though.

For now, the following should work, right?

  1. Specify the DNS with dns: 172.18.0.100 (it should work without having to publish port 53 and conflict with the host)
  2. Leave the /etc/resolv.conf of the container untouched so it relies on Docker to route the DNS queries back to Unbound

Although that requires adding a dns entry to your Docker configuration. I can always add an env variable to enable this different behavior, but it's not ideal.

Maybe an alternative would be to tell Unbound to use the Docker network DNS only for private addresses, but I'm not sure that's possible. If it is, the Go program could detect the original DNS address (before overriding it) and set it in the Unbound configuration. That may solve #188 as well. I'll dig into the Unbound configuration options.


networkprogrammer commented Nov 1, 2020

Hey @qdm12,
I think I got it. I have very limited Git and Go knowledge, but setting up the Dev Container helped a lot.
Here are the changes that helped me get this working.

$ git diff
diff --git a/internal/dns/conf.go b/internal/dns/conf.go
index 2156bc8..8c730e0 100644
--- a/internal/dns/conf.go
+++ b/internal/dns/conf.go
@@ -63,10 +63,11 @@ func generateUnboundConf(ctx context.Context, settings settings.DNS,
                "harden-below-nxdomain": "yes",
                "harden-referral-path":  "yes",
                "harden-algo-downgrade": "yes",
+               "access-control":        "172.18.0.0/16 allow",
                // Network
                "do-ip4":    "yes",
                "do-ip6":    doIPv6,
-               "interface": "127.0.0.1",
+               "interface": "0.0.0.0",
                "port":      "53",
                // Other
                "username": "\"nonrootuser\"",
diff --git a/internal/dns/conf_test.go b/internal/dns/conf_test.go
index a166300..db955fe 100644
--- a/internal/dns/conf_test.go
+++ b/internal/dns/conf_test.go
@@ -45,6 +45,7 @@ func Test_generateUnboundConf(t *testing.T) {
        require.Len(t, warnings, 0)
        expected := `
 server:
+  access-control: 172.18.0.0/16 allow
   cache-max-ttl: 9000
   cache-min-ttl: 3600
   do-ip4: yes
@@ -54,7 +55,7 @@ server:
   harden-referral-path: yes
   hide-identity: yes
   hide-version: yes
-  interface: 127.0.0.1
+  interface: 0.0.0.0
   key-cache-size: 16m
   key-cache-slabs: 4
   msg-cache-size: 4m

Then I built my Docker image:
docker build -t myvpn .

After that, I used the image in my compose file. I'm skipping all the irrelevant parts of the service definition.

version: "3.7"
services:
  gluetun:
    image: myvpn
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    environment:
      - DNS_KEEP_NAMESERVER=on

So what that gives me is the ability to not only query the local services, but also use DOT.
This is what my /etc/resolv.conf now looks like.

/ # cat /etc/resolv.conf 
search local
nameserver 127.0.0.11
options ndots:0
nameserver 1.1.1.1
nameserver 127.0.0.1

So the big thing here is to allow queries from the subnet tied to the main/default interface. In my case, I statically assigned the network in the conf.go files. Ideally this would be done dynamically if possible, maybe by getting the IP/netmask from the Docker container at runtime and updating Unbound?

I also noticed that, depending on where I placed the access-control directive, the build test failed.

Thanks,
Let me know if this is helpful in any way. I tried looking at the code itself, but it looked nothing like the Python I am familiar with.


qdm12 commented Nov 2, 2020

No problem, thanks a ton for stretching this out in all directions! I can definitely test it myself too, so it should be easy to integrate nicely. Allow me 1 to 2 days to get to it; I'm a bit over-busy currently, unfortunately, but I can't wait to fix this up! Plus, this should be how it behaves natively, imo.

@networkprogrammer
Author

Thank you for looking into this.


qdm12 commented Nov 3, 2020

I'm still testing things out, I would ideally like it to work without having to specify the DNS at the Docker configuration level.

Plus, since Unbound blocks e.g. malicious hostnames, I cannot just add the local DNS below Unbound, as this would resolve blocked hostnames.

Maybe I'm asking for too much 😅 I'll let you know what I find.


networkprogrammer commented Nov 3, 2020

So the way I am thinking of solving this for myself is to just allow Unbound to listen on the default interface and localhost. This is the key to getting this working. Ideally this would be done programmatically at runtime.

The rest of the config is already provided by Gluetun's env variables or docker-compose directives.

    networks:
      frontend:
        ipv4_address: 172.18.0.100
    dns: 172.18.0.100
    environment:
      - DNS_KEEP_NAMESERVER=on

For those who are OK with the way things are, nothing needs to change.
If users need local services plus name resolution via Unbound, set gluetun to a static IP and assign the dns directive manually with that same IP. No code change in gluetun is needed for this part.

To add this as a feature, we could provide users with some sort of env-variable-based switch. That would be a code change.

That is all we need to get local and Internet names resolving.

The additional feature enabled by this config is that other containers and hosts on the network (not just the Docker network) can now use gluetun as a DOT resolution host. So Gluetun also becomes a local DNS server that provides DOT over VPN :)

Let me know if this helps.


qdm12 commented Nov 4, 2020

I have a (convoluted) solution in mind which relies 'less' on the OS:

  1. Detect the Docker DNS address at start, i.e. 127.0.0.11
  2. Run a DNS UDP proxy (coded from scratch in Go) listening on port 53 so that it can hook into the queries and:
    • resolve local hostnames (no dot .) using 127.0.0.11 (and also check the returned address is private)
    • otherwise proxy the query to unbound listening on port 1053 for example

I'm still playing around with /etc/resolv.conf and its options, as well as searching through Unbound's configuration options, for now. Otherwise, the solution above solves the problems and could be a first step towards moving away from Unbound (#137)

@networkprogrammer Thanks for the suggestions! Let me change that interface Unbound is listening on to the default interface, having a DNS over TLS server through the tunnel is definitely interesting 😄

@networkprogrammer
Author

2. Run a DNS UDP proxy (coded from scratch in Go) listening on port 53 so that it can hook into the queries and:

Very nice work! Let me run a test and I will let you know.

@networkprogrammer
Author

So I tested, and everything seems to work as expected. To get this to work, I had to set the DNS_KEEP_NAMESERVER=on environment variable in the Gluetun service definition.

I'm ok with closing this issue.

Thank you again, for the resolution and also getting this awesome project going!

@networkprogrammer
Author

@qdm12, btw, where is the code you wrote for the DNS server? I am interested in learning Go, so I wanted to see what the code looks like.


qdm12 commented Nov 5, 2020

I'm ok with closing this issue.

Let me finish (and start haha) that DNS proxy to solve the issue properly. It's good we have workarounds for now, but I would definitely like to fix it properly.

btw where is the code you did for the DNS server

Nowhere yet! I'll get to it in the coming days; I'll tag you and comment here once I have the start of a branch going, if you want to review the pull request / ask questions 😉 Although it will likely just be a UDP proxy inspecting DNS queries and routing them accordingly (I have done a UDP proxy before, but never fiddled with DNS either).


qdm12 commented Nov 5, 2020

This is blocked by #289 I think. Do you guys manage to reach other containers from Gluetun in the same Docker network using their IP addresses?

@networkprogrammer
Author

I did a quick test. My setup involves Gluetun and qBittorrent (qbt) sharing the network stack. All other containers are on the same network but do not share network stacks.

From sonarr/radarr etc. I can connect to qbt as expected.
From qbt I could not connect to jackett.

So I got on the Gluetun container and, as a quick test, flushed iptables and set the default policy to accept. This let qbt talk to Jackett. So iptables is stopping communications.

@networkprogrammer
Author

So Gluetun/qbt -> other containers is not working; iptables is blocking.
Other containers -> qbt is working.

@networkprogrammer
Author

OK, so the problem is with the OUTPUT chain (policy DROP). I understand that we want this to block traffic if there is no VPN, and we should keep it that way.
I added the line iptables -A OUTPUT -d 172.18.0.0/16 -j ACCEPT, since 172.18.0.0/16 is my local Docker network. That fixed my issue.

Now Gluetun/qbt can talk to other containers on the network. So we need to allow traffic to the local network.
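A manual iptables rule like the one above is lost on container restart. The FIREWALL_OUTBOUND_SUBNETS variable already used earlier in this thread for the LAN might achieve the same effect declaratively, assuming it accepts a comma-separated list of subnets (a sketch, not a verified configuration):

```yaml
# Sketch: allow outbound traffic to both the local LAN and the Docker network.
services:
  gluetun:
    environment:
      - FIREWALL_OUTBOUND_SUBNETS=192.168.1.0/24,172.18.0.0/16
```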


qdm12 commented Nov 6, 2020

Nice thanks! That did the trick. I'll get to the DNS proxy this weekend, will let you know.


qdm12 commented Nov 7, 2020

I asked on Reddit's HomeNetworking subreddit here to see if there is a way to do this natively. Let's wait and see if a solution comes up in the next few hours/days before adding (yet another) server to Gluetun haha (we have 5 so far: HTTP proxy, Shadowsocks, control server, Unbound and the healthcheck server).

@networkprogrammer
Author

So I pulled the latest image and see that DNS has stopped working. Something must have changed.

2020-11-07T13:46:36.191-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:46:41.193-0700    ERROR   port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:46:41.193-0700    INFO    port forwarding: Trying again in 10s
2020-11-07T13:46:51.192-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:46:51.192-0700    INFO    dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:01.195-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:47:16.198-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:16.198-0700    INFO    dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:21.199-0700    ERROR   port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:21.199-0700    INFO    port forwarding: Trying again in 10s
2020-11-07T13:47:26.199-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:47:41.201-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:47:41.201-0700    INFO    dns over tls: attempting restart in 10 seconds
2020-11-07T13:47:51.204-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:01.207-0700    ERROR   port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:01.207-0700    INFO    port forwarding: Trying again in 10s
2020-11-07T13:48:06.210-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:06.210-0700    INFO    dns over tls: attempting restart in 10 seconds
2020-11-07T13:48:16.213-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:31.219-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:31.219-0700    INFO    dns over tls: attempting restart in 10 seconds
2020-11-07T13:48:41.226-0700    INFO    dns configurator: downloading root hints from https://raw.githubusercontent.com/qdm12/files/master/named.root.updated
2020-11-07T13:48:41.226-0700    ERROR   port forwarding: cannot bind port: Get "https://10.63.112.1:19999/bindPort?payload=eyJ0b2tlbiI6Ild3eFZqVExMSnhhYlAwVkVGdzgraFEzMnVsNXZQZjVmOWwwNG1kUGpjNkJidDVUTzlia0JHYkdieXFTRW84UmtUaFBOSXhxSjZ2ZDdKdmV5bzFkVUFDMUNubGk0Z0VNVzhHeDJSRWlPdnF1QjVobThNMFd4VmEyMXdnTT0iLCJwb3J0Ijo1Mjk2MywiZXhwaXJlc19hdCI6IjIwMjEtMDEtMDNUMDk6NDQ6MTAuMjg0MDM3MTU5WiJ9&signature=slra9KxY4fyEBgWJKYxGT3841HdSgNdCDDQQB%2BdFRzcNyC4GHY%2FYElap8kxvFQ5CkYXjaMaROdkURatI28L8Dg%3D%3D": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:41.226-0700    INFO    port forwarding: Trying again in 10s
2020-11-07T13:48:56.229-0700    WARN    dns over tls: Get "https://raw.githubusercontent.com/qdm12/files/master/named.root.updated": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2020-11-07T13:48:56.230-0700    INFO    dns over tls: attempting restart in 10 seconds

top used to show unbound, so it looks like Unbound stopped running.

  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    1     0 root     S     699m  18%   4   0% /entrypoint
   32     1 nonrootu S     5048   0%   7   0% openvpn --config /etc/openvpn/target.ovpn
   50     0 root     S     1648   0%   6   0% /bin/sh
   55    50 root     R     1576   0%   7   0% top

Did I pull a Docker image that was in development?

Running version unknown built on an unknown date (commit unknown)

@networkprogrammer
Author

I get this when I try running unbound manually:

/ # unbound
[1604782560] unbound[79:0] error: Could not open /etc/unbound/unbound.conf: No such file or directory
[1604782560] unbound[79:0] warning: Continuing with default config settings
[1604782560] unbound[79:0] error: can't bind socket: Address not available for ::1 port 53
[1604782560] unbound[79:0] fatal error: could not open ports


mpsarakis commented Sep 20, 2024

Hello @Maxattax97,

The workaround I have detailed there works:
#281 (comment)

It is not necessarily ideal, because you have to use the FQDNs of the containers you want to resolve from gluetun-attached containers, i.e. instead of "whatever" you need to use "whatever.docker-network-name", but it has been working flawlessly for me for months.

Hope that helps...


andrewvaughan commented Oct 21, 2024

For anyone interested, this is my working workaround (with one small caveat, see at the end):

Basically, I alter gluetun's unbound service configuration with a bind file inside the docker container, for example inside the gluetun docker compose section: - ${DOCKER_BINDS_BASE_DIR}/gluetun/include.conf:/etc/unbound/include.conf:ro

The include.conf file contains this:

forward-zone:                          
  name: "<name of the docker network with containers you want to resolve>"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
server:
  do-not-query-localhost: no
  private-domain: "<name of the docker network with containers you want to resolve>"
  domain-insecure: "<name of the docker network with containers you want to resolve>"

Basically, this instructs gluetun's Unbound to forward queries and resolve container names using the actual Docker DNS server.

The main caveat is the following: you need to use the FQDN for container names, e.g. my-container.<name of the docker network with containers you want to resolve>. Depending on your context/type of app, you may or may not be able to do that in the other containers needing name resolution... In my case I can, so it is 100% working for me, and it has been stable for months regardless of gluetun's updates.

This caveat is mandatory in the sense that you only want container-name resolution to "leak" to the Docker DNS server; all the rest should go through the VPN DNS, and the differentiating factor used is the Docker network name domain.

In case you have multiple Docker networks, I think the method is still valid; you simply need to add the relevant blocks to the include.conf... not tested here.

Hope that helps...

You're the best. Thank you for this, it worked great.

For me, I didn't need a FQDN to access my container once I added the named network; not sure why it was different.
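For the multiple-networks case mentioned in the workaround above, the include.conf could plausibly be extended with one forward-zone per network, along these lines (an untested sketch; the network names "frontend" and "backend" are placeholders):

```
# Hypothetical include.conf for two Docker networks; adjust names to your own.
forward-zone:
  name: "frontend"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
forward-zone:
  name: "backend"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
server:
  do-not-query-localhost: no
  private-domain: "frontend"
  private-domain: "backend"
  domain-insecure: "frontend"
  domain-insecure: "backend"
```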

@mpsarakis


Thanks a lot for your message, I was quite stuck by this so I struggled a bit to find the solution! :)

Did you put the exact same config as me in the include.conf?

By forcing the use of the domain in the FQDN, I was also making sure no DNS leaks could occur and only the "host.docker-network" FQDNs were escaping from the VPN name resolution. Maybe there is a more elegant way to do this, like you may have done? One that would resolve hostnames with no FQDN through the docker DNS...

@mpsarakis
Copy link


FYI, I have just done the test and in my environment hostnames alone do not work:

root@gluetun:/# ping dozzle
ping: bad address 'dozzle'

root@gluetun:/# ping dozzle.bridge-secure-backend
PING dozzle.bridge-secure-backend (172.29.8.45): 56 data bytes
64 bytes from 172.29.8.45: seq=0 ttl=64 time=0.531 ms
64 bytes from 172.29.8.45: seq=1 ttl=64 time=0.122 ms

@EpicOfficer
Copy link

EpicOfficer commented Oct 26, 2024

For anybody still struggling... here is my fully working config for this case. Using this, I can access services such as qbittorrent outside the gluetun network via name, and I can also access containers outside of gluetun from within containers such as qbittorrent via name (I hope that makes sense). Essentially doing it like this has allowed me to configure everything just as I would if I wasn't using gluetun.

The aliases allow external access to qbittorrent etc. from the same network, and I'm not 100% certain, but I believe it was the FIREWALL_OUTBOUND_SUBNETS and DNS_KEEP_NAMESERVER variables that fixed my issue with accessing external containers from gluetun. I hope this helps somebody!

EDIT: As @mpsarakis mentions below, using DNS_KEEP_NAMESERVER: On will likely mean that all DNS queries, including internet queries, could be leaked to your ISP. If this is a concern for you, then you can:
a) Remove this option and find another solution for resolving DNS for containers that are not bound to gluetun with network_mode: service:gluetun, such as the workaround described in #281 (comment)
b) Do like me, and properly configure DoH network-wide so that ALL DNS queries outside your network remain hidden from your ISP. I personally use my own resolvers, with cloudflare DoH as a fallback

---

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    volumes:
      - /dev/net/tun:/dev/net/tun
    networks:
      default:
        aliases:
          - qbittorrent
          - sabnzbd
          - bazarr
          - prowlarr
    environment:
      <<: *other-environment-variables
      FIREWALL_OUTBOUND_SUBNETS: 172.16.0.0/12,192.168.1.0/24
      DNS_KEEP_NAMESERVER: on
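For completeness, the services routed through gluetun (omitted from the snippet above) would presumably be attached with `network_mode: "service:gluetun"`, roughly like this (hypothetical sketch; the image name is assumed, not from the original comment):

```yaml
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent  # assumed image, adjust to taste
    network_mode: "service:gluetun"         # share gluetun's network stack
    depends_on:
      - gluetun
```

The `qbittorrent` alias on the gluetun service is then what lets other containers on the same network reach it by that name.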

@mpsarakis
Copy link

By using DNS_KEEP_NAMESERVER=on, doesn't that mean you keep the DNS configuration untouched, and therefore all name resolution goes through your docker DNS (and not only for container names)? So you are in effect leaking all your DNS queries to your internet connection's DNS instead of having them protected by your VPN connection.

If privacy is not an issue then it should not matter.

This assumes I have understood the effect of this actual parameter...

@EpicOfficer
Copy link

EpicOfficer commented Oct 26, 2024


Potentially, I'm honestly not 100% sure, but yes that could be a concern for some. In my case, this is a non-issue, because all my DNS goes through my own DNS servers which forward all DNS queries on my network through DNS over HTTPS

Also, if you just need to access containers INSIDE gluetun using dns names, you can of course omit this variable without any issues

@mpsarakis
Copy link

Or you can also do as I have done and simply tweak gluetun unbound DNS server from outside the container (using a bind file) and you will have both functionalities: you will be able to resolve your container names AND all the rest will go through the VPN DNS as it should be.
The only constraint is to use the docker network name with the container name (as a FQDN) because that's how unbound knows where to direct the name resolution.
More details here:
#281 (comment)

@EpicOfficer
Copy link


I actually missed this, nice solution! I might give that a try later; although I tend to avoid bind mounts in favour of docker volumes for everything, I'm sure I could make it work with a persistent volume

@EpicOfficer
Copy link

For anyone interested, this is my working workaround (with one small caveat, see at the end):
Basically, I alter gluetun's unbound service configuration with a bind file inside the docker container, for example inside the gluetun docker compose section: - ${DOCKER_BINDS_BASE_DIR}/gluetun/include.conf:/etc/unbound/include.conf:ro
The include.conf file contains this:

forward-zone:                          
  name: "<name of the docker network with containers you want to resolve>"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
server:
  do-not-query-localhost: no
  private-domain: "<name of the docker network with containers you want to resolve>"
  domain-insecure: "<name of the docker network with containers you want to resolve>"

@mpsarakis I think there's a small typo, should the forward-addr be 127.0.0.1?

@mpsarakis
Copy link

No, that's the Docker internal DNS, 127.0.0.11

@EpicOfficer
Copy link


Ah, I wasn't aware of that, thank you for clarifying!

@FlorentLM
Copy link

FlorentLM commented Oct 27, 2024


I tried this and I still can't access containers by name from gluetun or from containers connected to it...

Here is my docker compose:

networks:
  proxy:
    external: true
  backend:
    internal: true

services:
  gluetun:
    image: qmcgaw/gluetun:latest
    container_name: gluetun
    hostname: gluetun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE  # to load kernel-level wireguard
    devices:
      - /dev/net/tun:/dev/net/tun
    networks:
      - proxy
      - backend
    environment:
      - TZ=xxxxxxx/xxxxxxx
      - PUID=1001
      - PGID=1001
      - WIREGUARD_IMPLEMENTATION=kernelspace
      - VPN_SERVICE_PROVIDER=xxxxxxxxxxxxxxxx
      - VPN_TYPE=wireguard
      - VPN_PORT_FORWARDING=on
      - VPN_PORT_FORWARDING_PROVIDER=xxxxxxxxxxxxxxxx
      - SERVER_COUNTRIES=xxxxxxxxxxxxxxxx
      - PORT_FORWARD_ONLY=on
      - WIREGUARD_PRIVATE_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      - SHADOWSOCKS=off
      - HTTPPROXY=off
      - FIREWALL_OUTBOUND_SUBNETS=172.18.0.0/16
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /lib/modules:/lib/modules:ro  # to load kernel level wireguard
      - xxxxxx/gluetun:/gluetun
      - xxxxxx/ip_port:/tmp/gluetun
      - xxxxxx/include.conf:/etc/unbound/include.conf:ro
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy
      - traefik.http.routers.qbittorrent.tls=true
      - traefik.http.routers.qbittorrent.rule=Host(`qbittorrent.xxxxxx.xx`)
      - traefik.http.routers.qbittorrent.entrypoints=websecure
      - traefik.http.routers.qbittorrent.tls.certResolver=letsencrypt
      - traefik.http.routers.qbittorrent.tls.options=modern@file
      - traefik.http.routers.qbittorrent.middlewares=internal@file
      - traefik.http.routers.qbittorrent.service=qbittorrent
      - traefik.http.services.qbittorrent.loadbalancer.server.port=8080
    restart: always
    security_opt:
      - no-new-privileges:true

  qbittorrent:
    image: ghcr.io/hotio/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1001
      - PGID=1001
      - UMASK=002
      - TZ=xxxxxxx/xxxxxxx
    volumes:
      - xxxxxx/qbittorrent:/config
      - xxxxxx/qbittorrent/scripts:/scripts
      - yyyyyyyyyyyy/downloads:/downloads
    depends_on:
      gluetun:
        condition: service_healthy
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true

  port-updater:
    build: https://github.com/tcj-one/qbittorrent-port-forward-file.git
    container_name: port-updater
    user: 1001:1001
    networks:
      - backend
    environment:
      - QBT_ADDR=http://gluetun:8080
      - QBT_USERNAME=xxxxxx
      - QBT_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
      - PORT_FILE=/config/forwarded_port
    volumes:
      - xxxxxx/gluetun/ip_port:/config:ro
    depends_on:
      - qbittorrent
      - gluetun
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true

the include.conf file is:

forward-zone:
  name: "proxy"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
server:
  do-not-query-localhost: no
  private-domain: "proxy"
  domain-insecure: "proxy"

I would like to access an endpoint on a Betanin container, which is defined in another docker compose file and is also attached to the proxy network.

Then:

docker exec -it gluetun /bin/sh:

/ # ping 172.18.0.20
PING 172.18.0.20 (172.18.0.20): 56 data bytes
64 bytes from 172.18.0.20: seq=0 ttl=64 time=0.268 ms

(172.18.0.20 is the IP for the container I want to access)

/ # ping betanin
ping: bad address 'betanin'

and

/ # ping betanin.proxy
ping: bad address 'betanin.proxy'

... What am I missing? :(

@yorah
Copy link

yorah commented Dec 12, 2024

I tried following the solution from @mpsarakis, but I also still can't access containers by name from containers inside gluetun.
I am using the default network for my compose stack which, if I understand correctly, is named "stack_default".

My include.conf is

forward-zone:                          
  name: "stack_default"
  forward-addr: 127.0.0.11
  forward-tls-upstream: no
server:
  do-not-query-localhost: no
  private-domain: "stack_default"
  domain-insecure: "stack_default"

Connecting to gluetun with docker exec -it, and using ping cross.stack_default just gives me ping: bad address.
I also verified that on gluetun, the file include.conf is correctly mapped to /etc/unbound/include.conf.

Any help would be appreciated :)

@mpsarakis
Copy link

mpsarakis commented Dec 12, 2024

@yorah
Based on your message, check the name of the network where your containers that needs to communicate together belong to.

You can see all existing network names with
docker network ls
and then all the containers that belong to one of the network with
docker network inspect <network_name> | grep Name

Hope that helps...

@yorah
Copy link

yorah commented Dec 12, 2024

Thank you for following up!

I checked, and the name of my network is indeed stack_default, and I can see my cross container as being part of it.
In your own setup, do you use a specific network instead of the default one?

@yorah
Copy link

yorah commented Dec 12, 2024

Additional info, just in case it gives you more ideas:

  • nslookup cross.stack_default 127.0.0.11 from inside gluetun works correctly (returns the address 172.18.x.x of the cross container)
  • I also executed unbound-checkconf from inside gluetun, and the conf seems to be ok

Am I doing something wrong? When I do ping, does it not use 127.0.0.11, as set in the include.conf file?

@mpsarakis
Copy link

You're welcome. Yes, I use a custom one (bridge type), but I am not sure that's the reason.

Maybe I have something to say but my explanations are maybe not using the right technical words :)

Is your "cross" container inside the gluetun "stack" ie. using "network_mode: service:gluetun" ?
If yes, since they all share the single gluetun network stack, that's why they cannot communicate together using a different IP address because they share only one IP (gluetun's one). So for example, if you have a service in "cross" container on port 1234, you need to use gluetun:1234 from the actual gluetun container to contact this service, you don't need the unbound trick for this.

Unbound trick is for gluetun stack containers to contact a container outside the actual gluetun "stack". If "cross" is outside, a ping cross.network_name should work from gluetun, if not there is indeed a problem...
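In compose terms, the two situations described above look roughly like this (hypothetical sketch; `stack_default` and the service names are placeholders):

```yaml
networks:
  stack_default: {}

services:
  gluetun:
    image: qmcgaw/gluetun
    networks:
      - stack_default

  inside:                            # shares gluetun's IP; reach its ports as gluetun:<port>
    network_mode: "service:gluetun"

  cross:                             # has its own IP on the network; needs the unbound
    networks:                        # trick, reachable as cross.stack_default
      - stack_default
```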

@yorah
Copy link

yorah commented Dec 12, 2024

My cross container is NOT inside the gluetun stack. I want to contact it from inside gluetun.

I am wondering: how is the include.conf file included in unbound? I just checked the /etc/unbound/unbound.conf file, and there is no include line for it.

@yorah
Copy link

yorah commented Dec 12, 2024

Alright, I found out what the problem was... I was using the image qmcgaw/gluetun:latest.
That latest image has the default unbound.conf file, which is basically empty (everything commented out).

I tried with the qmcgaw/gluetun:v3 image, and its unbound.conf file is completely different: it includes the include.conf file by default.

@qdm12 sorry if this is wrong to ping you, but it seems the latest is broken in that aspect.

Thanks @mpsarakis for helping out!

@mpsarakis
Copy link

Sorry, I see, I am also using qmcgaw/gluetun:v3 so that must be why I also don't have the problem. My unbound.conf is indeed normal and including the include.conf file.

I don't know exactly which version the "latest" tag points to; I "think" at some point I found it was pointing to something older than "v3", and that's why I switched to this tag instead a while ago...
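For reference, unbound pulls extra files in with its standard `include` directive, so the v3 image's unbound.conf presumably contains a line along these lines (path assumed from the bind mount used in the workaround above):

```
include: "/etc/unbound/include.conf"
```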

@yorah
Copy link

yorah commented Dec 29, 2024

The latest image pulled with the v3 tag does not use unbound anymore. This seems to be linked to #1742, but it breaks the workaround proposed in the comments of this issue.

@yorah
Copy link

yorah commented Dec 29, 2024

The latest version to support unbound is v3.39. The workaround with the latest version seems to involve assigning a static IP address to the container you want to access that is outside of the gluetun network (https://github.com/qdm12/gluetun-wiki/blob/main/setup/inter-containers-networking.md).

@qdm12 is my understanding correct?
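The static-IP approach from that wiki page boils down to pinning the address of the container outside gluetun, so that apps behind gluetun can use the IP directly instead of a name, e.g. (hypothetical subnet, address and service name):

```yaml
networks:
  mynet:
    ipam:
      config:
        - subnet: 172.30.0.0/24

services:
  plex:                             # the container outside gluetun you want to reach
    networks:
      mynet:
        ipv4_address: 172.30.0.10   # apps behind gluetun use this IP directly
```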

@mpsarakis
Copy link

@yorah, that's a pity; hopefully something will be integrated into the image to allow communication from the gluetun stack to external containers without fixed IPs...
In the meantime, I will try to find a workaround if possible with the new architecture (without using fixed IPs, as I find this too rigid)

@mpsarakis
Copy link

As far as I understand, gluetun v3.40 now integrates qdm12/[email protected] and this application allows these settings to be used:

| Variable | Default | Description |
| --- | --- | --- |
| `MIDDLEWARE_LOCALDNS_ENABLED` | `on` | Enable or disable the local DNS middleware |
| `MIDDLEWARE_LOCALDNS_RESOLVERS` | Local DNS servers | Comma separated list of local DNS resolvers to use for local names DNS requests |
| `MIDDLEWARE_SUBSTITUTER_SUBSTITUTIONS` | | JSON encoded list of substitutions. For example `[{"name":"github.com","ips":["1.2.3.4"]}]`. You can also specify the `type`, `class` and `ttl`, where they default respectively to `A`/`AAAA`, `IN` and `300`. |

So based on this and the previous workaround logic, we could have something like the following defined in gluetun using env variables:

MIDDLEWARE_LOCALDNS_ENABLED='true'
MIDDLEWARE_LOCALDNS_RESOLVERS='127.0.0.11'
MIDDLEWARE_SUBSTITUTER_SUBSTITUTIONS='[{"name":"<docker-network-name>","ips":["127.0.0.11"]}]'

I don't know exactly whether there is a mandatory relationship between MIDDLEWARE_SUBSTITUTER_SUBSTITUTIONS and the other two variables, i.e. whether we can use MIDDLEWARE_SUBSTITUTER_SUBSTITUTIONS alone by itself or not...

I have tested both ways and they don't work...

Maybe @qdm12 could kindly give us some additional info?

@mpsarakis
Copy link

mpsarakis commented Dec 30, 2024

I did a bit of research, and using qdm12/[email protected] standalone (i.e. in its own container) works perfectly to resolve container names; all MIDDLEWARE_* env variables seem to be handled correctly by the container.

However, what I have noticed in the latest gluetun v3.40 container which is supposed to contain qdm12/[email protected] components:

  • MIDDLEWARE_* env variables are ignored/not handled (I also cannot find them anywhere in the source code)
  • external container names cannot be resolved (as before)

So it seems that some features of qdm12/[email protected] are missing and/or only partially implemented in the gluetun v3.40 release, for reasons I am unfortunately not able to understand; if they were there, I think it would fix the issue we have...

@qdm12 is apparently unavailable/slowed down for several understandable reasons so for the time being I will stick to release v3.39 which still uses unbound and therefore allows my fix to work. We'll see in the future what happens...

CMarcJoubert pushed a commit to FideresDev/gluetun that referenced this issue Jan 14, 2025
@enoch85
Copy link

enoch85 commented Feb 23, 2025

Came here looking for answers after struggling to connect Overseerr (using gluetun) to Plex (not on gluetun) without success. After some trial and error I finally succeeded with these settings:

   environment:
      - FIREWALL_OUTBOUND_SUBNETS=10.1.99.0/24
      # DNS
      - DNS_ADDRESS=10.1.99.1
      - DOT=off
      - DNS_UPDATE_PERIOD=24h
      - UPDATER_PERIOD=24h
      # Other stuff
      - HEALTH_TARGET_ADDRESS=quad9.net:443
      - HEALTH_VPN_DURATION_INITIAL=120s

Before this change I also followed this guide and put Plex on the same network as Gluetun: https://github.com/qdm12/gluetun-wiki/blob/main/setup/inter-containers-networking.md

I run Unbound on my firewall with DoT already, so this is perfect for me.

I hope it helps someone!

@gdsoumya
Copy link

gdsoumya commented Mar 9, 2025

If anyone's interested in running a custom version of this with the feature to set LOCALDNS_RESOLVERS, you can take a look at my fork, specifically this branch: gdsoumya#1. Build a custom image with the Dockerfile and use it with the following env.

Env to use :

      - DOT_PRIVATE_ADDRESS=127.0.0.1/8,10.0.0.0/8,192.168.0.0/16,169.254.0.0/16,::1/128,fc00::/7,fe80::/10,::ffff:7f00:1/104,::ffff:a00:0/104,::ffff:a9fe:0/112,::ffff:ac10:0/108,::ffff:c0a8:0/112
      - LOCALDNS_RESOLVERS=127.0.0.11:53

DOT_PRIVATE_ADDRESS needs to be overwritten to exclude the docker-compose-resolved IP subnet range, or else the filter middleware rejects the DNS queries. In my case I had to remove the 172.16.0.0/12 range.

It has a few more custom changes, like a barebones UI to interact with the control server (still WIP), changes to the control server API routes (appends /api), and some other changes like an on-demand VPN start/stop feature, which you may ignore if not required.

@trohnjavolta
Copy link


I just tried this and can confirm it works for me! Thx a lot!
