
Update dnstest.sh #51

Open · wants to merge 2 commits into master

Conversation

gitthangbaby

Support for ports. Example:
127.0.0.1:5353#local dnscrypt

Since we want to test our local DNS forwarders, we can translate the usual host:port notation into dig's "-p PORT" option.
Also dropping bc; no need for it, small appliances don't have it.
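The translation described above could look roughly like this (a sketch; `pip` and `pname` follow the script's existing naming, the rest is illustrative):

```shell
#!/usr/bin/env bash
# Split a provider entry like "127.0.0.1:5353#local dnscrypt" into the
# server address and a "-p PORT" option that dig understands.
p="127.0.0.1:5353#local dnscrypt"
pip=${p%%#*}                    # everything before '#'  -> 127.0.0.1:5353
pname=${p##*#}                  # everything after '#'   -> local dnscrypt

if [[ "$pip" =~ [:] ]]; then    # bash-only regex test; plain sh lacks [[ =~ ]]
    port=${pip##*:}             # -> 5353
    host=${pip%%:*}             # -> 127.0.0.1
    digargs="-p $port"
else
    host=$pip
    digargs=""
fi

echo "$host $digargs"           # -> 127.0.0.1 -p 5353
```

Note that `$digargs` must later be expanded unquoted when passed to dig, so the shell splits it into separate `-p` and `5353` arguments.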

@thomasmerz
Contributor

@gitthangbaby, this doesn't work for me; I added your relevant changed line to my shellcheck'ed version:

🦎🖥  ✔ ~/temp/PRs/dnsperftest [test_shellcheck|✚ 1]
22:39 $ ./dnstest.sh
                     test1   test2   test3   test4   test5   test6   test7   test8   test9   test10  Average
127.0.0.1            3 ms    1 ms    1 ms    3 ms    1 ms    1 ms    1 ms    1 ms    1 ms    1 ms      1.40
45.90.30.39          15 ms   15 ms   19 ms   15 ms   15 ms   15 ms   15 ms   11 ms   15 ms   15 ms     15.00
84.200.69.80         15 ms   31 ms   15 ms   11 ms   15 ms   15 ms   11 ms   15 ms   15 ms   15 ms     15.80
localhost            dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms dig: couldn't get address for '127.0.0.1 -p 53': not found
1000 ms   1000.00
cloudflare           11 ms   11 ms   15 ms   15 ms   15 ms   15 ms   15 ms   19 ms   19 ms   19 ms     15.40
level3               15 ms   15 ms   15 ms   15 ms   15 ms   15 ms   15 ms   15 ms   15 ms   19 ms     15.40
google               19 ms   19 ms   23 ms   27 ms   23 ms   19 ms   19 ms   19 ms   23 ms   19 ms     21.00
quad9                23 ms   23 ms   19 ms   43 ms   23 ms   19 ms   19 ms   19 ms   23 ms   19 ms     23.00
freenom              35 ms   35 ms   39 ms   31 ms   83 ms   35 ms   35 ms   67 ms   39 ms   87 ms     48.60
opendns              27 ms   23 ms   15 ms   39 ms   11 ms   107 ms  19 ms   27 ms   15 ms   23 ms     30.60
norton               19 ms   11 ms   15 ms   15 ms   11 ms   19 ms   11 ms   19 ms   15 ms   15 ms     15.00
cleanbrowsing        15 ms   19 ms   15 ms   27 ms   15 ms   15 ms   15 ms   15 ms   19 ms   15 ms     17.00
yandex               51 ms   43 ms   47 ms   47 ms   51 ms   43 ms   51 ms   87 ms   47 ms   79 ms     54.60
adguard              115 ms  127 ms  115 ms  115 ms  123 ms  119 ms  115 ms  115 ms  119 ms  115 ms    117.80
neustar              23 ms   23 ms   23 ms   19 ms   19 ms   23 ms   19 ms   23 ms   31 ms   27 ms     23.00
comodo               15 ms   23 ms   19 ms   31 ms   15 ms   31 ms   15 ms   15 ms   23 ms   15 ms     20.20
nextdns              31 ms   19 ms   27 ms   27 ms   27 ms   31 ms   27 ms   23 ms   27 ms   31 ms     27.00
🦎🖥  ✔ ~/temp/PRs/dnsperftest [test_shellcheck|✚ 1]
22:39 $ git diff
diff --git a/dnstest.sh b/dnstest.sh
index 05b9b47..a1b4c9a 100755
--- a/dnstest.sh
+++ b/dnstest.sh
@@ -8,6 +8,7 @@ command -v bc > /dev/null || { echo "error: bc was not found. Please install bc.
 NAMESERVERS=$(grep ^nameserver /etc/resolv.conf | cut -d " " -f 2 | sed 's/\(.*\)/&#&/')

 PROVIDERSV4="
+127.0.0.1:53#localhost
 1.1.1.1#cloudflare
 4.2.2.1#level3
 8.8.8.8#google
@@ -77,6 +78,7 @@ echo ""

 for p in $NAMESERVERS $providerstotest; do
     pip=${p%%#*}
+    [[ "$pip" =~ [:] ]] && pip="${pip%%:*} -p ${pip##*:}"
     pname=${p##*#}
     ftime=0
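The `couldn't get address for '127.0.0.1 -p 53'` errors above suggest a quoting issue rather than anything wrong with the regex: once `$pip` contains `127.0.0.1 -p 53`, dig only accepts it if the shell word-splits it into three arguments. A shellcheck'ed script typically quotes its expansions, which hands dig the whole string as a single server name. A minimal sketch of the difference (how dnstest.sh actually invokes dig is an assumption here; the array names are illustrative):

```shell
#!/usr/bin/env bash
# Demonstrate why embedding " -p 53" inside a variable breaks under quoting.
pip="127.0.0.1 -p 53"

unquoted=( @$pip )     # word-split into three args: @127.0.0.1  -p  53
quoted=( "@$pip" )     # one arg: "@127.0.0.1 -p 53" -> dig: not found

echo "${#unquoted[@]} vs ${#quoted[@]}"   # -> 3 vs 1
```

So the PR's trick works with the original unquoted `dig @$pip ...` call, but breaks as soon as the expansion is quoted.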

@gitthangbaby
Author

gitthangbaby commented Mar 18, 2022

This means the shell is super old.. I don't think I'd like to support it; I am using those expressions on all my devices.
Also, those 10 domains are not useful; they will all be cached swiftly. I am now using this more robust script, which has been running every hour for 2 years, using random domains, reporting details on screen and writing significant output to a logfile:

https://github.com/gitthangbaby/dnsperftest/blob/patch-1/dnstest_random

random servers etc
@thomasmerz
Contributor

@gitthangbaby, that's a great feature in your fork!

Now I have evidence that I've done my best with my DNS config: Pi-hole as local resolver with 45.90.30.39 (dns2.nextdns.io) as one of several upstream resolvers (and, as a privacy-aware person, avoiding Google and others) 😃

Fri 18 Mar 2022 11:45:29 AM CET TOP:
        1ms ........................... 127.0.0.1_cached
       20ms ........................... 45.90.30.39
       60ms ........................... 127.0.0.1_uncached
       62ms ........................... google
       68ms ........................... cleanbrowsing
       80ms ........................... level3
       84ms ........................... comodo
       92ms ........................... cloudflare_family
       93ms ........................... opendns
      104ms ........................... norton
      110ms ........................... 84.200.69.80
      149ms ........................... google
      164ms ........................... adguard_family
      188ms ........................... cloudflare
      225ms ........................... quad9
      260ms ........................... adguard

Much better than querying already cached domains:

$ ./dnstest.sh | sort -k 22 -n
                     test1   test2   test3   test4   test5   test6   test7   test8   test9   test10  Average
127.0.0.1            1 ms    1 ms    1 ms    1 ms    3 ms    1 ms    1 ms    1 ms    1 ms    1 ms      1.20
84.200.69.80         15 ms   15 ms   19 ms   15 ms   11 ms   15 ms   15 ms   15 ms   15 ms   11 ms     14.60
level3               11 ms   15 ms   19 ms   11 ms   15 ms   15 ms   19 ms   15 ms   15 ms   15 ms     15.00
45.90.30.39          19 ms   15 ms   15 ms   15 ms   15 ms   23 ms   15 ms   19 ms   19 ms   15 ms     17.00
norton               15 ms   15 ms   15 ms   15 ms   19 ms   15 ms   15 ms   19 ms   19 ms   23 ms     17.00
cleanbrowsing        15 ms   19 ms   15 ms   19 ms   15 ms   23 ms   19 ms   15 ms   15 ms   19 ms     17.40
comodo               15 ms   19 ms   19 ms   19 ms   15 ms   19 ms   15 ms   23 ms   19 ms   15 ms     17.80
cloudflare           31 ms   15 ms   19 ms   15 ms   15 ms   23 ms   11 ms   27 ms   23 ms   19 ms     19.80
opendns              15 ms   19 ms   15 ms   23 ms   19 ms   27 ms   11 ms   19 ms   39 ms   23 ms     21.00
neustar              23 ms   23 ms   31 ms   27 ms   19 ms   23 ms   19 ms   27 ms   27 ms   23 ms     24.20
nextdns              23 ms   23 ms   23 ms   23 ms   27 ms   23 ms   23 ms   27 ms   27 ms   23 ms     24.20
google               35 ms   19 ms   23 ms   31 ms   19 ms   31 ms   23 ms   39 ms   23 ms   23 ms     26.60
freenom              31 ms   39 ms   27 ms   35 ms   111 ms  35 ms   31 ms   71 ms   39 ms   123 ms    54.20
yandex               43 ms   47 ms   51 ms   51 ms   47 ms   47 ms   39 ms   43 ms   47 ms   127 ms    54.20
adguard              115 ms  119 ms  119 ms  119 ms  115 ms  115 ms  111 ms  119 ms  119 ms  115 ms    116.60
quad9                1000 ms 1000 ms 31 ms   27 ms   1000 ms 1000 ms 1000 ms 75 ms   19 ms   23 ms     517.50

@thomasmerz
Contributor

@gitthangbaby, you should add this via a parameter ("uncached"/"random"/…) so that both modes are featured. @cleanbrowsing, what do you think about this? Plus some shellcheck (PR #68) and this will be great! 👍🏻

@gitthangbaby
Author

gitthangbaby commented Mar 18, 2022

Now I have evidence that I've done the best with my DNS config (Pi-hole as local resolver with 45.90.30.39 (dns2.nextdns.io)

Well, for me it's not fast (and speed isn't even the argument for me to pick it!).
First, I don't like the query limit; I don't want to make payments and then be traced, and I don't want much ad blocking:
https://chriswiegman.com/2021/10/stepping-back-from-nextdns/
..as I do this with an AdGuard server in a much better way. My AdGuard DNS over HTTPS is way faster than any of these, despite millions of local blocklist entries on top of it and per-device configuration.
An important issue with this script, despite the randomization, is DNS route reuse during the test, so I try to correlate results across resolvers to detect it. In my case, NextDNS running after Google gives "cached" results, so I have to kick out Google to test NextDNS. Just food for thought: make sure you test your winner in a more isolated scenario.
Quad9 is definitely the slowest long-term. Also, it was created by the government and shouldn't be considered "privacy oriented".

@thomasmerz
Contributor

Now I have evidence that I've done the best with my DNS config (Pi-hole as local resolver with 45.90.30.39 (dns2.nextdns.io)

well, for me it's not fast (and it's not even argument for me to pick it!).

For this we need, and have, tooling like this project here, or this, for regular performance monitoring of different DNS resolvers/providers 😉 So we can choose the fastest one on performance grounds. But you're right: speed isn't all that counts…

first i don't like the query limit,

I'm running a Pi-hole installation at home that gets around 50,000 queries per day (5 people: mom, dad, and three teenage boys heavily surfing on their mobiles, computers, and consoles). Some days 40k, some days 60k. About half of them are forwarded to NextDNS:

root@pihole-merz-nimbus:/var/log# grep -E "forwarded.*to 45.90." pihole.log.1 -c
26147

That's because my Pi-hole is very good at caching, since I pimped it a little bit 😜

BLOCK_TTL=300      # blocked queries will be cached 5 minutes instead of 2 seconds
min-cache-ttl=3600 # raise TTL for all queries to max. = 1 hour
# instead of what upstream DNS resolver gives us (normally some minutes),
# so all clients don't need to query every minute
# but every hour (or if they "forgot" somehow due to disconnect WiFi on smartphones…)

Some infos about this:
https://00f.net/2019/11/03/stop-using-low-dns-ttls/
https://discourse.pi-hole.net/t/increase-ttl/25157
https://discourse.pi-hole.net/t/change-the-ttl/6903/14

i don't want to make payments and then be traced, and i don't want much adblocking:

Because I also dislike payments, tracing/tracking, ads, malware, phishing etc., I use Pi-hole at home (and, when on mobile data, via a WireGuard VPN tunnel to my Pi-hole running on a cloud server). When the query limit is reached, NextDNS behaves like a normal non-blocking DNS resolver. This is fine for me thanks to my Pi-hole (sorry for repeating 😉 ).

my AdGuard DNS via HTTPS is way faster than any of these,

Just because I'm curious:
How fast? This is my Pi-hole via WiFi for all my clients at home (normally around 10-20 devices/gadgets/computers).

and a per device configuration

This can also be done with Pi-hole, by connecting clients to groups and assigning domains/adlists to those groups 👍🏻

an important issue of this script despite the randomization is DNS route reuse during the test,

I think I will adapt this from you to my dnspingtest project… Currently it's more like testing the "usual domains" that are mostly queried in my home network. This really could and should be optimized. Thanks for your inspiration 🎉

quad9 is defly the slowest, longterm.

Some weeks ago I had an interesting contact with Quad9 support where I confronted them with my performance monitoring:

Unexpectedly, it looks like 9.9.9.0/24 is routing to Prague, and 149.112.112.0/24 is routing to Frankfurt, even though we are directly peered with AS24940 in Frankfurt.
…
We identified two problematic servers in our Paris cluster that were causing some significant delays in query processing and have taken action to get these servers in a healthy state again.
…
In addition, we identified two more servers in our Frankfurt cluster which were exhibiting the same issues. It's possible you'll see slightly better performance there as well.

But this seems to be recurring since Sept. 2021: https://www.heise.de/news/DNS-Dienst-Quad9-hat-massive-Lastprobleme-in-Frankfurt-6204506.html (Sorry, German article only. TL;DR: too many queries, overrun by their own success, …)

Something got a little bit better, but it's still far away from "nice":

@gitthangbaby
Author

The network recorded has 4,000,000 requests per month (exactly the same family "setup"), though many will be cached for sure.
Nice pimping :) I'm an AdGuard fan, because DNS blocking is just a little part of what needs to be cleansed. Though I do like their DNS server, as I can enforce parental rules. All devices run AdGuard and must be rooted for ultimate safety (yup, I mean it). But the server runs on a NAS; that's why it can process it quickly. The router just couldn't do it, and the same HTTPS response without post-processing is slower. Actually, what helped performance was switching from plain old DNS to HTTPS/TLS.
So now we know Pi-hole has an old version of bash :)
Quad9 is dead; I have all the stats here for 2 years and could parse them into one table, but it's quite obvious they're super slow historically, not just nowadays.

@thomasmerz
Contributor

Ok, I agree: Pi-hole is a little bit less "user-friendly" to nearly achieve what AdGuard delivers out of the box. But requirements are individual for everybody. Pi-hole works for me and AdGuard is your preferred choice.

But to get back or closer again to this project:

  • I adapted your random-domain feature/idea into my project and am letting it run for a while to get some stats and graphs to compare and weigh against the old kind of performance monitoring…
  • This project works well as a one-time shot at current DNS performance but not for the long term. What kind of long-term performance monitoring do you have? Is this also a feature of AdGuard?

Actually what helped the performance was switching from archaic DNS to HTTPS/TLS.

May I ask for some query times from your setup? As you can see from the links I provided before, my Pi-hole serves an average of a little above 20 ms (mostly cached queries). AdGuard has native DoH/DoT support, hasn't it? Pi-hole hasn't, and needs another resolver between itself and the "remote upstream" resolvers. But as far as I understand, encryption with DoH/DoT adds some overhead and should impact performance rather badly than well (measurably, but maybe not noticeably to users) 🤔

So now we know Pihole has an old version of bash:)

No, how do you figure? 🤔

root@pihole-merz-nimbus:/# echo $BASH_VERSION
5.0.3(1)-release

I'm running this project's script on my openSUSE 15.3 (or on my Ubuntu Servers). Pi-hole is running in a docker container on my linux hosts.

@thomasmerz
Contributor

Did I already mention that I have a cache hit ratio normally of 50% and more with my Pi-hole? 😁
[Screenshot: Pi-hole dashboard showing the cache hit ratio]

@gitthangbaby
Author

gitthangbaby commented Mar 19, 2022

The reason I've put in those last lines is to save the data and potentially analyze it. I haven't done that yet; I consider it easy to do, as it would just parse the file.
For me, NextDNS is slow via plain DNS; others are fast via HTTPS. AdGuard supports all methods.

       36ms ........................... google(DNS)
       79ms ........................... router(adguard_DoH_subservice)
       83ms ........................... nas(adguard_cloudflare_DoH)
      148ms ........................... router(aggregated_DoH)
      155ms ........................... nextdns(DNS)
      156ms ........................... router(cloudflare_DoH_subservice)

I don't have any values close to 20 ms like you do, but I can see the values were much higher before I started using DoH and DoT. It might also have been caused by moving DNS from the slow router to the fast NAS, or by changing the VPN protocol. There is a study about these protocols:
https://www.cs.princeton.edu/~ahounsel/publications/www20.pdf
Although it does look like some overhead is there, I still find plain DNS bad because it relies on broadcasts from random servers, causing huge differences in response time, while the DoH responses seem way more consistent:

       61ms ........................... nas(adguard_cloudflare_DoH)
       83ms ........................... nas(adguard_cloudflare_DoH)
      102ms ........................... router(aggregated_DoH)
      176ms ........................... router(cloudflare_DoH_subservice)
      244ms ........................... google(DNS)

Your bash isn't that old, so I wonder if some setting is causing the string operations not to be supported. I get the "couldn't get address for" error only when running with sh. Or maybe the shebang isn't right?
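For what it's worth, the `[[ "$pip" =~ [:] ]]` test from the diff is a bashism that plain POSIX sh (dash, BusyBox ash) doesn't have, which would explain the sh failure; a `case` pattern does the same job portably. A small sketch (illustrative only, not from the PR):

```shell
#!/usr/bin/env bash
pip="127.0.0.1:53"

# bash-only regex test, as used in the PR's diff:
if [[ "$pip" =~ [:] ]]; then
    result_bash="match"
fi

# POSIX-portable equivalent that dash / BusyBox ash on small appliances also run:
case "$pip" in
    *:*) result_posix="match" ;;
esac

echo "$result_bash $result_posix"   # -> match match
```

Swapping the regex test for the `case` pattern would make the port feature work even when the script is run with sh.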

@thomasmerz
Contributor

I think the best shebang uses env, because bash's path might differ across distros:

#!/usr/bin/env bash

@thomasmerz
Contributor

And many thanks for your insights. When I have some more time (so maybe in some decades, when I retire 😜) I will have a look at AdGuard, too. Or earlier, if my current setup no longer fits my needs.
