fix: correct connection pool management #585
                
Rationale
I'm investigating this issue because I've been encountering frequent "Connection Timeout" errors when connecting to a GCP Cloud SQL database.
Upon inspection, I found that the provider was creating and immediately closing hundreds of connections in rapid succession. GCP enforces a rate limit on the number of new connections a user can open, so I believe I was consistently hitting that limit. Many other users have reported similar timeouts when working with GCP-hosted databases.
#572
#257
Problem 1
The first issue I identified is that calling db.SetMaxIdleConns(0) forces every resource operation to open and immediately close a new connection through the dbRegistry connection pools.
The reasoning behind this setting is documented in the code comments:
it makes sense because in PostgreSQL a database can only be dropped while there are no active connections to it.
However, the provider already supports performing a "FORCE" drop, which automatically terminates all active connections before dropping the database (available since PostgreSQL 13).
Therefore, when the target database supports forced drops, we can safely allow connection reuse by setting a non-zero value for MaxIdleConns. This prevents unnecessary connection churn while remaining compatible with database deletion.
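A minimal sketch of the decision described above. The helper name and the idle-connection value are illustrative, not the provider's actual identifiers; the version threshold comes from PostgreSQL 13 introducing DROP DATABASE ... WITH (FORCE):

```go
package main

import "fmt"

// maxIdleConns returns the idle-connection limit to apply to a pool.
// PostgreSQL >= 13 supports DROP DATABASE ... WITH (FORCE), which
// terminates any remaining connections itself, so idle connections can
// safely be kept open and reused. Older servers still need
// SetMaxIdleConns(0) so that no lingering idle connection blocks a
// later plain DROP DATABASE.
func maxIdleConns(serverMajorVersion int) int {
	if serverMajorVersion >= 13 {
		return 2 // reuse connections; a forced drop handles stragglers
	}
	return 0 // close every connection immediately after use
}

func main() {
	fmt.Println(maxIdleConns(12)) // 0
	fmt.Println(maxIdleConns(14)) // 2
}
```

The returned value would then be passed to db.SetMaxIdleConns when the pool is created.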
Problem 2
After enabling connection reuse for resources that belong to the same logical database, I noticed that more connections than expected were being created, and the number of active connections fluctuated unpredictably.
The root cause was that the provider builds the DSN keys for its connection-pool cache by iterating over Go maps. Since map iteration order in Go is not deterministic, the generated DSNs varied across executions, and multiple connection pools were created for the same logical database.
By sorting the map keys before iteration, DSN generation becomes deterministic, which guarantees at most one connection pool per logical database.
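The fix can be sketched as follows. The function and parameter names are illustrative, not the provider's actual identifiers; the point is only that sorting the keys makes the cache key independent of map iteration order:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildDSNKey builds a deterministic cache key from DSN parameters.
// Iterating a Go map directly can yield a different order on each run,
// so the keys are sorted first; identical parameter sets then always
// produce the same key and therefore hit the same pool entry.
func buildDSNKey(params map[string]string) string {
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%s", k, params[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	params := map[string]string{
		"host":   "10.0.0.5",
		"dbname": "app",
		"user":   "tf",
	}
	fmt.Println(buildDSNKey(params)) // dbname=app host=10.0.0.5 user=tf
}
```

With this, two resources configured with the same parameters always resolve to the same pool, regardless of the order in which the map happens to be iterated.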