Haki Benita
Mar 4, 2019


Hey Ketan Bhatt,
Thanks for sharing this article.

A few comments and observations:

  • You use format in the logger. You should avoid that and pass the parameters to the logger instead, so the message is only interpolated when the record is actually emitted:
logger.error("KeyError while setting DB Timeout: %s", str(e))
  • You use format in the call to execute. This is very dangerous and exposes you to SQL injection. You should really use bind/replacement variables like this:
cursor.execute("set statement_timeout=%(seconds)s", {'seconds': timeout_to_set})
  • You mentioned that Django uses a connection pool. This is inaccurate. Django does not maintain a connection pool per se; each worker process keeps its database connection open and reuses it for up to CONN_MAX_AGE seconds.
  • You mentioned the overhead during connection initialization. We actually changed the way we set the statement timeout since this article was published more than a year ago. Instead of setting it with connection.execute, we now set it directly in the DATABASES setting, as an option passed through to psycopg2:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': config['default_db_name'],
        'USER': config['default_db_user'],
        'PASSWORD': config['default_db_password'],
        'HOST': config['default_db_host'],
        'PORT': config['default_db_port'],
        'CONN_MAX_AGE': config['default_db_conn_max_age'],
        'OPTIONS': {
            'options': '-c statement_timeout=10000',
        },
    },
}
  • Having said all that, your idea is very interesting and the use case is one I am familiar with. We are actually in the process right now of moving our main app to an external connection pool (using pgBouncer), and we've been struggling a lot with how to set the timeout correctly.
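
Part of the difficulty is that with pgBouncer in transaction pooling mode, a session-level SET does not reliably stay attached to your application's requests. One option is to set the timeout on the database role itself, so it applies to every connection the pooler hands out. This is only an illustrative sketch; the DSN, role name and timeout value are placeholders, not something from the article:

import psycopg2

# Illustrative sketch: set statement_timeout on the application role so it
# applies to every pooled connection, no matter which client receives it.
# The DSN and role name ('app_user') are placeholders.
conn = psycopg2.connect('dbname=mydb user=admin host=localhost')
with conn.cursor() as cursor:
    cursor.execute("ALTER ROLE app_user SET statement_timeout = '10s'")
conn.commit()
conn.close()

A role-level setting can still be overridden with a per-session or per-transaction SET when a specific task legitimately needs a longer timeout.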
