
Commit a353e0e

Cleanup
1 parent 080cf78 commit a353e0e

File tree

2 files changed (+3, -5 lines)

dbutils.py

Lines changed: 1 addition & 3 deletions
@@ -8,8 +8,7 @@
 
 import psycopg2
 from psycopg2.pool import ThreadedConnectionPool
-from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT, register_adapter
-from psycopg2.extras import Json
+from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
 
 IS_DEBUG = os.environ.get('DEBUG', 'false') in ['true', 'yes', '1']
 logging.basicConfig(format='%(asctime)s.%(msecs)03d | %(levelname)s | %(message)s',
@@ -25,7 +24,6 @@
 DB_PREFIX = 'netflow_'
 S_PER_PARTITION = 3600
 LEAVE_N_PAST_PARTITIONS = 24 * 5  # 5 days
-register_adapter(dict, Json)
 
 
 # https://medium.com/@thegavrikstory/manage-raw-database-connection-pool-in-flask-b11e50cbad3
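Note: the removed register_adapter(dict, Json) call had registered a global psycopg2 adapter that serialized Python dicts to JSON whenever they were passed as query parameters, so dropping it (together with the Json import) assumes no query in this module still passes a dict directly. A minimal sketch of the difference, with a hypothetical table and column that are not part of this repository:

    from psycopg2.extensions import register_adapter
    from psycopg2.extras import Json

    # With the global adapter (before this commit), plain dicts were accepted
    # as query parameters and converted to JSON automatically:
    register_adapter(dict, Json)
    # cursor.execute("INSERT INTO events (payload) VALUES (%s)", ({"key": "value"},))

    # Without it (after this commit), any remaining JSON insert must wrap the
    # dict explicitly:
    # cursor.execute("INSERT INTO events (payload) VALUES (%s)", (Json({"key": "value"}),))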

docker-compose.yml

Lines changed: 2 additions & 2 deletions
@@ -66,8 +66,8 @@ services:
     # This process collects NetFlow data and writes it to a shared named pipe. The
     # reason is that there is a Docker bug which causes UDP packets to change the source
     # IP if processed within the Docker network. To avoid that, we have a collector
-    # listening on host network interface, then transferring the JSON-encoded data
-    # to a "writer" process within the network, which writes the data to DB.
+    # listening on host network interface, then transferring the data to a "writer"
+    # process within the network, which writes the data to DB.
     image: grafolean/grafolean-netflow-bot
     container_name: grafolean-netflow-collector
     depends_on:
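The comment change only tightens the wording; the architecture it describes stays the same: a collector bound to the host network receives the UDP NetFlow packets (avoiding the Docker source-IP bug) and hands the records to a writer process inside the Docker network through a shared named pipe, and the writer persists them to the database. A rough, hypothetical sketch of that handoff (paths and function names are assumptions, and the real bot's record format may differ):

    import os

    FIFO_PATH = '/shared/netflow.pipe'  # assumed location of the shared named pipe

    # Collector side (runs with host networking so UDP packets keep their real
    # source IP): stream received records into the named pipe.
    def collector_loop(records):
        if not os.path.exists(FIFO_PATH):
            os.mkfifo(FIFO_PATH)
        with open(FIFO_PATH, 'wb', buffering=0) as pipe:
            for rec in records:
                pipe.write(rec + b'\n')

    # Writer side (runs inside the Docker network): read records from the pipe
    # and persist them to the database.
    def writer_loop(save_to_db):
        with open(FIFO_PATH, 'rb') as pipe:
            for line in pipe:
                save_to_db(line.rstrip(b'\n'))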
