
Improve proxy bandwidth capacity #3

Open
axcornea opened this issue Dec 2, 2022 · 2 comments
Labels
enhancement New feature or request testing Issues related to testing the app

Comments


axcornea commented Dec 2, 2022

After several weeks of using the proxy with multiple users, no issues related to either concurrency or network bandwidth have surfaced. However, to make the proxy production-ready, a more thorough approach is needed.

First, we need to analyze the current performance metrics (TBD).
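For the metrics analysis, one low-effort starting point is counting the bytes relayed per connection inside the proxy's copy loop. A minimal sketch, assuming an asyncio-based relay; the `relay` function and `counter` dict are hypothetical names, not actual code from `server.py`:

```python
import asyncio

async def relay(reader, writer, counter):
    """Copy bytes from reader to writer, accumulating a byte count
    that can later be exported as a throughput metric.

    NOTE: illustrative sketch only; names are not from server.py.
    """
    while True:
        chunk = await reader.read(65536)
        if not chunk:  # EOF on the source side
            break
        counter["bytes"] += len(chunk)
        writer.write(chunk)
        await writer.drain()
    writer.close()
```

Dividing `counter["bytes"]` by the connection's lifetime would then give a per-connection throughput figure to compare against the iperf3 numbers.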

Also, to improve server capacity, it might be worth making the server forking, i.e. handling connections in multiple processes.

@axcornea axcornea added enhancement New feature or request testing Issues related to testing the app labels Dec 2, 2022

axcornea commented Dec 2, 2022

Take a look at alternative proxy servers such as Netty and HAProxy, and check their bandwidth metrics.


axcornea commented Dec 4, 2022

Test setup

# Terminal 1 - iperf3 server
$ iperf3 -s -p 9000

# Terminal 2 - proxy server
$ python3.10 server.py --proxy-port=8000 --target-ip=localhost --target-port=9000 --hook-start-svc=/dev/null --hook-stop-svc=/dev/null

Running tests

Single client

$ iperf3 -c localhost -p 8000
Connecting to host localhost, port 8000
[  5] local 127.0.0.1 port 36664 connected to 127.0.0.1 port 8000
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   252 MBytes  2.12 Gbits/sec    0   1.94 MBytes
[  5]   1.00-2.00   sec   250 MBytes  2.10 Gbits/sec    0   1.94 MBytes
[  5]   2.00-3.00   sec   246 MBytes  2.07 Gbits/sec    0   1.94 MBytes
[  5]   3.00-4.00   sec   248 MBytes  2.08 Gbits/sec    0   1.94 MBytes
[  5]   4.00-5.00   sec   244 MBytes  2.04 Gbits/sec    0   1.94 MBytes
[  5]   5.00-6.00   sec   246 MBytes  2.07 Gbits/sec    0   1.94 MBytes
[  5]   6.00-7.00   sec   248 MBytes  2.08 Gbits/sec    0   1.94 MBytes
[  5]   7.00-8.00   sec   248 MBytes  2.08 Gbits/sec    0   1.94 MBytes
[  5]   8.00-9.00   sec   244 MBytes  2.04 Gbits/sec    0   1.94 MBytes
[  5]   9.00-10.00  sec   246 MBytes  2.07 Gbits/sec    0   1.94 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.41 GBytes  2.07 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  2.41 GBytes  2.07 Gbits/sec                  receiver

iperf Done.

Multiple clients

As noted in man iperf3, the -P option doesn't truly run the streams in parallel (iperf3 is single-threaded), but it still opens multiple simultaneous connections.

$ iperf3 -c localhost -p 8000 -P16
...

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[  5]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[  7]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[  7]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[  9]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[  9]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 11]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 11]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 13]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 13]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 15]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 15]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 17]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 17]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 19]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 19]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 21]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 21]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 23]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 23]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 25]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 25]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 27]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 27]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 29]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 29]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 31]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 31]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 33]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 33]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[ 35]   0.00-10.00  sec   306 MBytes   257 Mbits/sec    0             sender
[ 35]   0.00-10.01  sec   298 MBytes   250 Mbits/sec                  receiver
[SUM]   0.00-10.00  sec  4.79 GBytes  4.11 Gbits/sec    0             sender
[SUM]   0.00-10.01  sec  4.66 GBytes  4.00 Gbits/sec                  receiver

iperf Done.

Experiment - make server forking

The following code has been added to the LifecycleManagingProxyServer.run() method:

def run(self):
    # Existing code: preparing the server coroutine and binding to socket

    children_no = os.cpu_count() - 1
    for child_idx in range(children_no):
        pid = os.fork()

        if pid == 0:
            # This is a child process: stop forking and fall through
            # to run the server on the inherited socket
            logger.info("Forked process launched. PID={}".format(os.getpid()))
            break

    # Existing code: running the server
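For reference, the pre-fork pattern above works because child processes inherit the parent's already-bound listening socket, so the kernel distributes incoming connections among all processes calling accept() on it. A standalone sketch of the same idea with plain sockets; the helper names here are mine, not from `server.py`:

```python
import os
import socket

def spawn_workers(n_workers):
    """Fork n_workers children.

    Returns True in a child, False in the parent. Children inherit all
    open file descriptors, including a bound listening socket, so each
    can call accept() on it independently.
    """
    for _ in range(n_workers):
        pid = os.fork()
        if pid == 0:
            return True  # child process: stop forking, start serving
    return False  # parent process

def serve_one(sock):
    """Accept a single connection on the shared socket and echo its data."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(65536)
        conn.sendall(data)
```

A parent would bind and listen first, then call `spawn_workers()`, and both the parent and the children can serve on the same socket.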

Single client

$ iperf3 -c localhost -p 8000
Connecting to host localhost, port 8000
[  5] local 127.0.0.1 port 37032 connected to 127.0.0.1 port 8000
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   270 MBytes  2.26 Gbits/sec    0   2.19 MBytes
[  5]   1.00-2.00   sec   260 MBytes  2.18 Gbits/sec    0   2.19 MBytes
[  5]   2.00-3.00   sec   256 MBytes  2.15 Gbits/sec    0   2.19 MBytes
[  5]   3.00-4.00   sec   259 MBytes  2.17 Gbits/sec    0   2.19 MBytes
[  5]   4.00-5.00   sec   256 MBytes  2.15 Gbits/sec    0   2.19 MBytes
[  5]   5.00-6.00   sec   252 MBytes  2.12 Gbits/sec    0   2.19 MBytes
[  5]   6.00-7.00   sec   259 MBytes  2.17 Gbits/sec    0   2.19 MBytes
[  5]   7.00-8.00   sec   255 MBytes  2.14 Gbits/sec    0   2.19 MBytes
[  5]   8.00-9.00   sec   258 MBytes  2.16 Gbits/sec    0   2.19 MBytes
[  5]   9.00-10.00  sec   256 MBytes  2.15 Gbits/sec    0   2.19 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.52 GBytes  2.17 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  2.51 GBytes  2.16 Gbits/sec                  receiver

iperf Done.

Multiple clients

$ iperf3 -c localhost -p 8000 -P16
...
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  4.47 GBytes  3.84 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  5.34 GBytes  4.58 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  5.34 GBytes  4.59 Gbits/sec    0             sender
[  7]   0.00-10.00  sec  4.46 GBytes  3.83 Gbits/sec                  receiver
[  9]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[  9]   0.00-10.00  sec  5.51 GBytes  4.73 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  5.52 GBytes  4.74 Gbits/sec    0             sender
[ 11]   0.00-10.00  sec   392 MBytes   329 Mbits/sec                  receiver
[ 13]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 13]   0.00-10.00  sec   392 MBytes   329 Mbits/sec                  receiver
[ 15]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 15]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[ 17]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 17]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[ 19]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 19]   0.00-10.00  sec   391 MBytes   328 Mbits/sec                  receiver
[ 21]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 21]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[ 23]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 23]   0.00-10.00  sec   391 MBytes   328 Mbits/sec                  receiver
[ 25]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 25]   0.00-10.00  sec   391 MBytes   328 Mbits/sec                  receiver
[ 27]   0.00-10.00  sec   400 MBytes   335 Mbits/sec    0             sender
[ 27]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[ 29]   0.00-10.00  sec   400 MBytes   335 Mbits/sec    0             sender
[ 29]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[ 31]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 31]   0.00-10.00  sec   391 MBytes   328 Mbits/sec                  receiver
[ 33]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 33]   0.00-10.00  sec   391 MBytes   328 Mbits/sec                  receiver
[ 35]   0.00-10.00  sec   400 MBytes   336 Mbits/sec    0             sender
[ 35]   0.00-10.00  sec   392 MBytes   328 Mbits/sec                  receiver
[SUM]   0.00-10.00  sec  20.4 GBytes  17.5 Gbits/sec    0             sender
[SUM]   0.00-10.00  sec  20.3 GBytes  17.4 Gbits/sec                  receiver

iperf Done.

Test results

| No of proxy processes | No of clients | Throughput |
|-----------------------|---------------|----------------|
| 1 | 1 | 2.07 Gbits/sec |
| 1 | 16 | 4.00 Gbits/sec |
| 8 | 1 | 2.16 Gbits/sec |
| 8 | 16 | 17.4 Gbits/sec |
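As a sanity check on the numbers above, the relative gains work out as follows (throughput figures copied from the runs):

```python
# Measured throughput in Gbit/s, keyed by number of clients
single_process = {1: 2.07, 16: 4.00}
eight_processes = {1: 2.16, 16: 17.4}

# Forking barely helps a single connection...
single_client_gain = eight_processes[1] / single_process[1]    # ~1.04x

# ...but gives a large gain with 16 concurrent clients.
multi_client_gain = eight_processes[16] / single_process[16]   # ~4.35x
```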

As the test results show, handling requests in multiple processes significantly increases proxy throughput under concurrent load, while single-client throughput stays essentially unchanged. This is enough proof to commit this experiment to the codebase.
