How to solve a Flask-SQLAlchemy connection timeout error (QueuePool overflow limit)?
I have made one POST API named "/get_report" which takes some input from the user and returns data according to those inputs by running search queries on the database. If I keep hitting the same API multiple times, say 7-8 times, then on the 9th hit it throws the error "sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached, connection timed out, timeout 30.00 (Background on this error at: https://sqlalche.me/e/14/3o7r)".
Here is my main.py:
from contextlib import contextmanager
from flask import Flask, jsonify, request
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import create_engine, and_, or_
from sqlalchemy.orm import sessionmaker
from helper import *
from models.tx import *
app = Flask(__name__)
db = SQLAlchemy()
DB_URL = 'postgresql://postgres:postgres@localhost/test_db'
engine = create_engine(DB_URL)
Session = sessionmaker(engine)
app.config['SQLALCHEMY_DATABASE_URI'] = DB_URL
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db.init_app(app)
@contextmanager
def session_manager():
    s = Session()
    try:
        yield s
    except:
        s.rollback()
        raise
    finally:
        s.close()
@app.route('/get_report', methods=['POST'])
def get_report():
    try:
        vendor = request.form['vendor']
        circle = request.form['circle']
        c_name = request.form['c_name']
        c_port = request.form['c_port']
        all_ssr = request.form['all_ssr']
        data_list = []
        result = []
        table_name = get_table_by_vendor(vendor, circle)
        if table_name is not None:
            with session_manager() as s:
                if all_ssr == 'SSR':
                    result = s.query(table_name) \
                        .distinct(table_name.label, table_name.c_node,
                                  table_name.c_port, table_name.z_port, table_name.z_node) \
                        .filter(and_(or_(and_(table_name.c_node == c_name, table_name.c_port == c_port),
                                         and_(table_name.z_node == c_name, table_name.z_port == c_port)),
                                     table_name.__table__.c[field_name].like('SSR%'))).all()
                elif all_ssr == 'ALL':
                    # Get all the records
                    result = s.query(table_name) \
                        .distinct(table_name.label, table_name.c_node,
                                  table_name.c_port, table_name.z_port, table_name.z_node) \
                        .filter(or_(and_(table_name.c_node == c_name, table_name.c_port == c_port),
                                    and_(table_name.z_node == c_name, table_name.z_port == c_port))).all()
        # Preparing JSON data to send
        for item in result:
            port = c_port if c_port != '' else (item.c_port if item.c_node == c_name else item.z_port)
            data_dict = {'user': item.user, 'port': item.port, 'rate': item.rate, 'down': item.down}
            data_list.append(data_dict)
        response = {"status": "Success", "message": "Successfully Fetched", "data": data_list}
        return jsonify(response)
    except Exception as e:
        # The except clause was missing from the pasted code; returning the
        # error keeps the try block syntactically valid
        return jsonify({"status": "Error", "message": str(e)}), 500
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
Here is my models/tx.py:
class C_Segment(db.Model):
    __tablename__ = 'c_segment'
    id = db.Column(db.Integer, primary_key=True)
    c_node = db.Column(db.String(350), nullable=False)
    c_port = db.Column(db.String(200), nullable=False)
    label = db.Column(db.String(350), nullable=False)
    z_port = db.Column(db.String(200), nullable=False)
    z_node = db.Column(db.String(200), nullable=False)
    user = db.Column(db.String(200), nullable=False)
    down = db.Column(db.String(200), nullable=False)
    port = db.Column(db.String(200), nullable=False)
    rate = db.Column(db.String(200), nullable=False)

    def __repr__(self):
        return '<id {}>'.format(self.id)
I have searched a lot and found much related content on Google, but none of it worked for me. I have tried increasing the pool size and the overflow size as well, but nothing changed. I am not able to understand where the exact issue is. I have been stuck on this for the last two days and have gone through many Stack Overflow posts and the Flask-SQLAlchemy session documentation.
Solution 1:[1]
You have a fixed set of connections in the pool, e.g. 15 (5 in the pool plus 10 of possible overflow). If processing a request takes longer than the connection checkout timeout (how long a request waits for an available connection, 30 s by default), you get this error: all 15 connections are busy, your request waits for a free one, but only for 30 s, after which the TimeoutError is raised. Have you thought about query optimization? How many records do you have in the table? How long does a request take? You can profile your SQL query using:
EXPLAIN ANALYZE <query>
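The pool mechanics described above can be demonstrated in isolation. This is a minimal sketch, not the questioner's setup: it uses an in-memory SQLite database and a deliberately tiny pool (size 1, overflow 0, 1 s timeout) so the same TimeoutError appears quickly; with the defaults it would be 5 + 10 = 15 connections and a 30 s wait.

```python
from sqlalchemy import create_engine
from sqlalchemy.exc import TimeoutError
from sqlalchemy.pool import QueuePool

# Tiny pool: exactly one connection may be checked out at a time.
engine = create_engine(
    "sqlite://",              # stand-in for the real postgresql:// URL
    poolclass=QueuePool,
    pool_size=1,              # connections kept open in the pool
    max_overflow=0,           # extra connections allowed beyond pool_size
    pool_timeout=1,           # seconds to wait for a free connection
)

first = engine.connect()      # occupies the only slot
try:
    second = engine.connect() # no slot free -> waits 1 s, then raises
except TimeoutError as exc:
    print("QueuePool limit reached:", exc)
finally:
    first.close()             # returning the connection frees the slot
```

This is why the error appears only after several concurrent or slow requests: each one holds a connection until its session is closed, and the 9th request finds no free slot within the timeout.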
You can of course increase the pool timeout, e.g. to 300 s (5 min):
app.config['SQLALCHEMY_POOL_TIMEOUT'] = 300
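Note that your code also builds a second, separate engine via create_engine(), which has its own pool that the Flask-SQLAlchemy config key does not affect. For that engine the same knobs are passed as keyword arguments directly. A minimal sketch, with an in-memory SQLite URL substituted for the real Postgres one:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

# pool_size + max_overflow is the hard cap on simultaneous connections;
# pool_timeout is how long a checkout waits before raising TimeoutError.
engine = create_engine(
    "sqlite://",          # stand-in for postgresql://postgres:postgres@localhost/test_db
    poolclass=QueuePool,
    pool_size=5,
    max_overflow=10,
    pool_timeout=300,     # 5 minutes instead of the default 30 s
)

print(engine.pool.size())  # -> 5
```

Whichever engine actually serves the requests is the one whose pool settings matter, so having two engines configured differently is an easy way for a tuned timeout to appear to have no effect.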
or make the pool bigger, but that would not genuinely solve your problem. Such a long response time per request is bad UX and limits concurrent access to your application by clients. So I recommend making your query faster.
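One concrete optimization to consider, and this is an assumption based only on the filters in get_report, not on the real schema: the query matches on (c_node, c_port) and (z_node, z_port), so composite indexes on those column pairs usually let Postgres avoid a sequential scan. A sketch declaring such indexes on a trimmed-down, hypothetical version of the C_Segment model (index names are made up):

```python
from sqlalchemy import Column, Index, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class CSegment(Base):
    """Hypothetical cut-down model mirroring the columns filtered in get_report."""
    __tablename__ = 'c_segment'
    id = Column(Integer, primary_key=True)
    c_node = Column(String(350), nullable=False)
    c_port = Column(String(200), nullable=False)
    z_node = Column(String(200), nullable=False)
    z_port = Column(String(200), nullable=False)

    __table_args__ = (
        # One composite index per (node, port) pair used in the OR'd filters.
        Index('ix_c_segment_c', 'c_node', 'c_port'),
        Index('ix_c_segment_z', 'z_node', 'z_port'),
    )
```

Whether these indexes actually help is exactly what EXPLAIN ANALYZE will tell you: if the plan still shows a Seq Scan on c_segment after adding them, the bottleneck is elsewhere.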
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
