I am constantly getting a timeout error when attempting to read tables from my SQLite database. This has happened many times recently, and it's something I need to address, since I intend to use the database together with a dashboard going forward. Below is one instance where I hit the error:
```python
empty_tables = []
for table in rs_tables:
    df = pd.read_sql_table(table, RSengine.connect())
    if df.empty:
        empty_tables.append(table)
```
where the engine is created with `RSengine = sqlalchemy.create_engine('sqlite:///RS.db')`.
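For completeness, the setup before that loop looks roughly like this (the inspector call is just an illustration of how `rs_tables` might be built; the imports are the ones the snippet needs):

```python
import pandas as pd
import sqlalchemy

# engine pointing at the SQLite file
RSengine = sqlalchemy.create_engine('sqlite:///RS.db')

# assumption: the list of table names comes from SQLAlchemy's inspector
rs_tables = sqlalchemy.inspect(RSengine).get_table_names()
```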
I have attached an image of the error I keep getting.
I tried splitting up the large list of tables into smaller chunks:
```python
chunks = [tables[i:i + 10] for i in range(0, len(tables), 10)]
```
I also tried putting a sleep at the end of each iteration of the for loop, but I have a feeling that just makes things less efficient. Is there something I am missing? The first time I got this error was during a larger calculation in a for loop, shown below:
```python
initialErrors = {}
initialRS = {}

# iterate through the chunks:
for chunk in chunks:
    #print chunk
    #print(f'Processing chunk: {chunk}')
    for ticker in tqdm(chunk):
        try:
            # I think the timeout happens on this read
            stock = pd.read_sql_table(ticker, stockengine.connect(), index_col='Date')
            # take the last 2 years of data:
            stock1 = stock.iloc[-504:]
            RSframe = relative_strength(df1=stock1, df2=INDEX1, windows=[21, 63])
            initialRS[ticker] = RSframe
            # write to RSengine:
            #RSframe.to_sql(ticker, RSengine, if_exists='replace', index=False)
        except Exception as e:
            initialErrors[ticker] = e
            #print(e)
            #break
```
The line calling `relative_strength` is the calculation, and on average it takes about 7 seconds per table. The error occurs while reading in each table.
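The sleep I mentioned above went at the end of each inner iteration, roughly like this (a sketch; the one-second delay is arbitrary):

```python
import time

for chunk in chunks:
    for ticker in tqdm(chunk):
        try:
            stock = pd.read_sql_table(ticker, stockengine.connect(), index_col='Date')
            stock1 = stock.iloc[-504:]
            initialRS[ticker] = relative_strength(df1=stock1, df2=INDEX1, windows=[21, 63])
        except Exception as e:
            initialErrors[ticker] = e
        # pause at the end of each iteration before the next read
        time.sleep(1)
```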
I have also tried reading in each table, as in the `stock = pd.read_sql_table(ticker, stockengine.connect(), index_col='Date')` line, and performing the calculations with the tables in memory, but I still got a timeout error.
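Roughly, that in-memory attempt looked like the sketch below (I'm assuming a plain dict keyed by ticker; the exact structure isn't important):

```python
# first pass: read every table into memory
frames = {}
for ticker in tqdm(tables):
    try:
        frames[ticker] = pd.read_sql_table(ticker, stockengine.connect(), index_col='Date')
    except Exception as e:
        initialErrors[ticker] = e  # the timeout still shows up here

# second pass: run the calculation on the in-memory frames
for ticker, stock in frames.items():
    stock1 = stock.iloc[-504:]
    initialRS[ticker] = relative_strength(df1=stock1, df2=INDEX1, windows=[21, 63])
```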
I tried the recommended solution as shown here but still ended up getting the error.
I should mention that the database contains over 1000 tables and will keep growing. Is there a more efficient way to read in this many tables without risking a timeout error? Is SQLite the way to go?