Improve proxy bandwidth capacity #3
Comments
Take a look at alternative proxy servers like Netty, HAProxy, and others. Check their bandwidth metrics.
### Test setup

```
# Terminal 1 - iperf3 server
$ iperf3 -s -p 9000

# Terminal 2 - proxy server
$ python3.10 server.py --proxy-port=8000 --target-ip=localhost --target-port=9000 --hook-start-svc=/dev/null --hook-stop-svc=/dev/null
```

### Running tests

#### Single client

```
$ iperf3 -c localhost -p 8000
Connecting to host localhost, port 8000
[ 5] local 127.0.0.1 port 36664 connected to 127.0.0.1 port 8000
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 252 MBytes 2.12 Gbits/sec 0 1.94 MBytes
[ 5] 1.00-2.00 sec 250 MBytes 2.10 Gbits/sec 0 1.94 MBytes
[ 5] 2.00-3.00 sec 246 MBytes 2.07 Gbits/sec 0 1.94 MBytes
[ 5] 3.00-4.00 sec 248 MBytes 2.08 Gbits/sec 0 1.94 MBytes
[ 5] 4.00-5.00 sec 244 MBytes 2.04 Gbits/sec 0 1.94 MBytes
[ 5] 5.00-6.00 sec 246 MBytes 2.07 Gbits/sec 0 1.94 MBytes
[ 5] 6.00-7.00 sec 248 MBytes 2.08 Gbits/sec 0 1.94 MBytes
[ 5] 7.00-8.00 sec 248 MBytes 2.08 Gbits/sec 0 1.94 MBytes
[ 5] 8.00-9.00 sec 244 MBytes 2.04 Gbits/sec 0 1.94 MBytes
[ 5] 9.00-10.00 sec 246 MBytes 2.07 Gbits/sec 0 1.94 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.41 GBytes 2.07 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 2.41 GBytes 2.07 Gbits/sec receiver
iperf Done.
```

#### Multiple clients
### Experiment - make server forking

The following code has been added to the `def run(self)` method:

```python
# Existing code: preparing the server coroutine and binding to socket
children_no = os.cpu_count() - 1
for child_idx in range(children_no):
    pid = os.fork()
    if pid == 0:
        # This is a child process
        logger.info("Forked process launched. PID={}".format(os.getpid()))
        break
# Existing code: running the server
```

#### Single client

```
$ iperf3 -c localhost -p 8000
Connecting to host localhost, port 8000
[ 5] local 127.0.0.1 port 37032 connected to 127.0.0.1 port 8000
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 5] 0.00-1.00 sec 270 MBytes 2.26 Gbits/sec 0 2.19 MBytes
[ 5] 1.00-2.00 sec 260 MBytes 2.18 Gbits/sec 0 2.19 MBytes
[ 5] 2.00-3.00 sec 256 MBytes 2.15 Gbits/sec 0 2.19 MBytes
[ 5] 3.00-4.00 sec 259 MBytes 2.17 Gbits/sec 0 2.19 MBytes
[ 5] 4.00-5.00 sec 256 MBytes 2.15 Gbits/sec 0 2.19 MBytes
[ 5] 5.00-6.00 sec 252 MBytes 2.12 Gbits/sec 0 2.19 MBytes
[ 5] 6.00-7.00 sec 259 MBytes 2.17 Gbits/sec 0 2.19 MBytes
[ 5] 7.00-8.00 sec 255 MBytes 2.14 Gbits/sec 0 2.19 MBytes
[ 5] 8.00-9.00 sec 258 MBytes 2.16 Gbits/sec 0 2.19 MBytes
[ 5] 9.00-10.00 sec 256 MBytes 2.15 Gbits/sec 0 2.19 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 2.52 GBytes 2.17 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 2.51 GBytes 2.16 Gbits/sec receiver
iperf Done.
```

#### Multiple clients

```
$ iperf3 -c localhost -p 8000 -P16
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 4.47 GBytes 3.84 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 5.34 GBytes 4.58 Gbits/sec receiver
[ 7] 0.00-10.00 sec 5.34 GBytes 4.59 Gbits/sec 0 sender
[ 7] 0.00-10.00 sec 4.46 GBytes 3.83 Gbits/sec receiver
[ 9] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 9] 0.00-10.00 sec 5.51 GBytes 4.73 Gbits/sec receiver
[ 11] 0.00-10.00 sec 5.52 GBytes 4.74 Gbits/sec 0 sender
[ 11] 0.00-10.00 sec 392 MBytes 329 Mbits/sec receiver
[ 13] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 13] 0.00-10.00 sec 392 MBytes 329 Mbits/sec receiver
[ 15] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 15] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[ 17] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 17] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[ 19] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 19] 0.00-10.00 sec 391 MBytes 328 Mbits/sec receiver
[ 21] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 21] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[ 23] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 23] 0.00-10.00 sec 391 MBytes 328 Mbits/sec receiver
[ 25] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 25] 0.00-10.00 sec 391 MBytes 328 Mbits/sec receiver
[ 27] 0.00-10.00 sec 400 MBytes 335 Mbits/sec 0 sender
[ 27] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[ 29] 0.00-10.00 sec 400 MBytes 335 Mbits/sec 0 sender
[ 29] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[ 31] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 31] 0.00-10.00 sec 391 MBytes 328 Mbits/sec receiver
[ 33] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 33] 0.00-10.00 sec 391 MBytes 328 Mbits/sec receiver
[ 35] 0.00-10.00 sec 400 MBytes 336 Mbits/sec 0 sender
[ 35] 0.00-10.00 sec 392 MBytes 328 Mbits/sec receiver
[SUM] 0.00-10.00 sec 20.4 GBytes 17.5 Gbits/sec 0 sender
[SUM] 0.00-10.00 sec 20.3 GBytes 17.4 Gbits/sec receiver
iperf Done.
```

### Test results
As the test results show, handling requests in multiple processes significantly increases proxy throughput. This is sufficient justification for committing this experiment to the codebase.
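For reference, here is a minimal, self-contained sketch of the pre-fork pattern described above. It is not the actual `server.py` implementation: the `handle_client`/`relay`/`serve` helpers, the default ports, and the logging setup are illustrative assumptions; only the fork-after-bind loop mirrors the committed change.

```python
import asyncio
import logging
import os
import socket

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("proxy")

# Illustrative defaults mirroring the CLI flags used in the test setup.
PROXY_PORT = 8000
TARGET_IP = "localhost"
TARGET_PORT = 9000


async def relay(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    """Copy bytes from reader to writer until EOF, then close the writer."""
    try:
        while True:
            data = await reader.read(64 * 1024)
            if not data:
                break
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle_client(client_reader: asyncio.StreamReader,
                        client_writer: asyncio.StreamWriter) -> None:
    """Connect to the target and pipe bytes in both directions."""
    try:
        target_reader, target_writer = await asyncio.open_connection(TARGET_IP, TARGET_PORT)
    except OSError:
        logger.exception("Could not connect to target %s:%s", TARGET_IP, TARGET_PORT)
        client_writer.close()
        return
    await asyncio.gather(
        relay(client_reader, target_writer),
        relay(target_reader, client_writer),
        return_exceptions=True,
    )


async def serve(listen_sock: socket.socket) -> None:
    # Each process builds its own asyncio server on the shared, already-bound socket.
    server = await asyncio.start_server(handle_client, sock=listen_sock)
    logger.info("Serving in process PID=%d", os.getpid())
    async with server:
        await server.serve_forever()


def run() -> None:
    # Bind the listening socket *before* forking so that every process
    # accepts connections from the same backlog.
    listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listen_sock.bind(("0.0.0.0", PROXY_PORT))
    listen_sock.setblocking(False)

    children_no = os.cpu_count() - 1
    for _ in range(children_no):
        pid = os.fork()
        if pid == 0:
            # Child process: stop forking and go serve.
            logger.info("Forked process launched. PID=%d", os.getpid())
            break

    # Parent and children each run their own event loop on the shared socket.
    asyncio.run(serve(listen_sock))


if __name__ == "__main__":
    run()
```

Because the socket is bound before `fork()`, every process inherits the same file descriptor and the kernel distributes incoming connections among the processes waiting in `accept()`; this classic pre-fork pattern is what produces the throughput gain with parallel clients.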
After several weeks of using the proxy with multiple users, there have been no issues related to either concurrency or network bandwidth. However, to make the proxy production-ready, a more thorough approach is needed.
First, the current performance metrics need to be analyzed (TBD).
Also, to improve server capacity, it might be worth making the server forking.
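As a starting point for the metrics work, one possible approach is to count the bytes the proxy relays and periodically log the observed bitrate per process. This is purely a sketch: `ThroughputMeter` and the way it would be wired into the relay loop are hypothetical, not existing code.

```python
import asyncio
import logging
import time

logger = logging.getLogger("proxy.metrics")


class ThroughputMeter:
    """Accumulate relayed byte counts and log the average bitrate at a fixed interval."""

    def __init__(self, interval: float = 10.0):
        self.interval = interval
        self.bytes_relayed = 0

    def add(self, nbytes: int) -> None:
        """Record nbytes forwarded by the proxy."""
        self.bytes_relayed += nbytes

    async def report_forever(self) -> None:
        """Log the average bitrate over each interval; run as a background task."""
        while True:
            start = time.monotonic()
            start_bytes = self.bytes_relayed
            await asyncio.sleep(self.interval)
            elapsed = time.monotonic() - start
            delta = self.bytes_relayed - start_bytes
            logger.info("throughput: %.2f Mbit/s", delta * 8 / elapsed / 1e6)
```

The relay loop would call `meter.add(len(data))` for every chunk it forwards, and the server startup code would schedule `meter.report_forever()` with `asyncio.create_task()`; comparing these numbers against the iperf3 results above would be one way to fill in the "(TBD)" analysis.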