
Doesn't close DB connections leading to "FATAL: sorry, too many clients already" #86

Open
Volpym opened this issue Nov 12, 2020 · 10 comments

Comments

@Volpym

Volpym commented Nov 12, 2020

Hi!

I have 5 tasks that run periodically (using periodiq), and each time they are executed they open a new connection to the database.

I already tried the middleware ordering shown in issue #76.

Thank you in advance

@agamrp

agamrp commented Feb 4, 2021

@Volpym did you end up figuring out what was causing this? I'm running into the same thing.

@Volpym
Author

Volpym commented Feb 4, 2021

@agamrp unfortunately no.

@agamrp

agamrp commented Feb 4, 2021


@Volpym Thanks for getting back to me. If you don't mind me asking, did you end up using something else besides dramatiq for your use case?

@Volpym
Author

Volpym commented Feb 4, 2021


@agamrp Due to lack of time, I had to use another module (not sure if I'm allowed to share its name, but a Google search will find it).

@helsonxiao

In case someone still has this problem:

Try the latest version. These two lines should fix it:
https://github.com/Bogdanp/django_dramatiq/blob/v0.10.0/django_dramatiq/middleware.py#L68-L69
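For reference, the linked lines amount to closing Django's stale database connections around every message. Below is a minimal, dependency-free sketch of that idea — the class name and the `close_old_connections` stub are illustrative stand-ins so the snippet runs without a configured Django project (in a real project you would call `django.db.close_old_connections`), not django_dramatiq's actual source:

```python
calls = []

def close_old_connections():
    # Stand-in for django.db.close_old_connections, which closes any
    # connection that is unusable or has exceeded CONN_MAX_AGE.
    calls.append("closed")

class DbConnectionsMiddlewareSketch:
    """Sketch of a dramatiq-style middleware that releases DB
    connections before and after each message is processed."""

    def before_process_message(self, broker, message):
        close_old_connections()

    def after_process_message(self, broker, message, *, result=None, exception=None):
        close_old_connections()

mw = DbConnectionsMiddlewareSketch()
mw.before_process_message(None, None)
mw.after_process_message(None, None)
print(len(calls))  # each message triggers two cleanup calls
```

The key point is that cleanup runs on both sides of the message, so a connection left behind by a failed task is still released.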

@Schulzjo

Schulzjo commented Sep 5, 2022

Also mentioned in #76?

@Inokinoki

Inokinoki commented Dec 16, 2024

Hi,

I am in the same situation, even though I enabled the new middleware.
I found that it might not be caused by hanging connections, but by the total number of worker threads.

In the run arguments, dramatiq defaults to one process per CPU and one thread per CPU within each process, which multiplies out to CPU count × CPU count worker threads in total.

For each thread, Django creates a separate connection, because connections are not shared across threads. See the following code taken from a recent Django version (django/utils/connection.py):

class BaseConnectionHandler:
    settings_name = None
    exception_class = ConnectionDoesNotExist
    thread_critical = False

    def __init__(self, settings=None):
        self._settings = settings
        self._connections = Local(self.thread_critical)

    # ...

    def __getitem__(self, alias):
        try:
            return getattr(self._connections, alias)
        except AttributeError:
            if alias not in self.settings:
                raise self.exception_class(f"The connection '{alias}' doesn't exist.")
        conn = self.create_connection(alias)
        setattr(self._connections, alias, conn)
        return conn

    # ...

In my case, that was up to 256 connections (16 times 16), while PostgreSQL's default max_connections is 100. Under high concurrency this leads to exactly this error.

Since the connection held by each thread counts as "active", close_old_connections might not be able to close it. The failed task is then re-enqueued by the Retries middleware, so everything eventually succeeds after the retries, but the error logs remain.

I suggest choosing the number of processes and threads in our own applications so that their product stays below the database's connection limit.
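To illustrate the per-thread behaviour described above, here is a self-contained sketch (no Django required; `FakeConnectionHandler` is a hypothetical stand-in for `BaseConnectionHandler`) showing that N threads each end up with their own connection even when every thread asks for the same alias:

```python
import threading

class FakeConnectionHandler:
    """Stand-in for Django's BaseConnectionHandler: connections live
    in thread-local storage, so they are never shared across threads."""

    def __init__(self):
        self._connections = threading.local()
        self.created = []          # every "connection" ever opened
        self._lock = threading.Lock()

    def get(self, alias="default"):
        conn = getattr(self._connections, alias, None)
        if conn is None:
            conn = object()        # pretend this opens a real DB connection
            with self._lock:
                self.created.append(conn)
            setattr(self._connections, alias, conn)
        return conn

handler = FakeConnectionHandler()

def worker():
    # Repeated lookups in the same thread reuse one connection...
    assert handler.get() is handler.get()

threads = [threading.Thread(target=worker) for _ in range(16)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# ...but 16 threads opened 16 distinct connections.
print(len(handler.created))  # -> 16
```

Scale the thread count to 16 processes × 16 threads and you reach the 256-connection worst case mentioned above.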

Correct me if I misunderstood anything 🙏

@andrewgy8
Collaborator

Thank you very much for the investigation @Inokinoki ! 🙏

It would be interesting to hear if this solves the issue for the others.

@Chenger1

Chenger1 commented Jan 16, 2025

Thanks for your investigation @Inokinoki
I faced this problem with gevent workers: when running 500+ workers per process, they can easily try to open hundreds of connections.
So I solved it with a connection pool. The combination of the django-db-connection-pool library and PgBouncer works great for me.

django-db-connection-pool uses SQLAlchemy pooling under the hood and shares connections between threads inside each process, so 500+ gevent workers can reuse the same connections.
But the processes still create dozens of connections (a few processes with 15-20 connections each in the pool), so to prevent hitting the DB limits, we set up PgBouncer.
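A rough sketch of what such a pooled Django settings entry can look like — the `ENGINE` path and `POOL_OPTIONS` keys are my reading of django-db-connection-pool's documentation, and all connection values are illustrative placeholders; size the pool so that processes × (POOL_SIZE + MAX_OVERFLOW) stays under the PgBouncer/Postgres limit:

```python
# Hypothetical Django settings fragment for django-db-connection-pool.
DATABASES = {
    "default": {
        "ENGINE": "dj_db_conn_pool.backends.postgresql",
        "NAME": "mydb",           # placeholder credentials
        "USER": "myuser",
        "PASSWORD": "secret",
        "HOST": "127.0.0.1",      # point at PgBouncer in production
        "PORT": "6432",
        "POOL_OPTIONS": {
            "POOL_SIZE": 15,      # persistent connections per process
            "MAX_OVERFLOW": 5,    # extra connections under burst load
        },
    }
}
```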

@Inokinoki


Thanks for the feedback and the connection pool lib!

Indeed, I was also considering doing that.
