My data consists of 6 million rows with the following columns:
'primary_key', 'timestamp', 'temperature_value', 'sensor_id', 'sensor_status'
e.g.
1, 2023-01-01 00:00:00.0, 20.28, 1, GOOD
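For the SQL databases, the table layout is roughly the following (a sketch; the table name and exact column types here are illustrative, not taken from my scripts):

# cursor: an open DB-API cursor for MySQL or PostgreSQL
# (table name "sensor_data" and the types below are illustrative)
create_query = """
CREATE TABLE sensor_data (
    primary_key       INT PRIMARY KEY,
    timestamp         TIMESTAMP,
    temperature_value DOUBLE PRECISION,
    sensor_id         INT,
    sensor_status     VARCHAR(16)
)
"""
cursor.execute(create_query)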
The data was written to Redis as hash values.
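For illustration, a single row stored as a hash looks roughly like this (the "row:{primary_key}" key naming is simplified; my actual key scheme may differ):

import redis

connection = redis.Redis()

# One CSV row stored as a Redis hash (illustrative key name "row:1")
connection.hset("row:1", mapping={
    "timestamp": "2023-01-01 00:00:00.0",
    "temperature_value": 20.28,
    "sensor_id": 1,
    "sensor_status": "GOOD",
})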
I am experimenting with Redis, MySQL, PostgreSQL, and InfluxDB using their Python libraries.
For every database, I query filtered data: out of all rows, only those where sensor_id IN (1, 2, 8) (in the CSV file, sensor_id takes values from 1 to 10).
For those sensor_ids I query the following overall amounts: 10k, 20k, 40k, 80k, 160k, 320k rows.
Each amount is queried 10 times to get an average time per database and per amount, which gives 6 average times for each of the 4 databases. Each query uses a different offset, chosen so that it doesn't exceed the data size (this was done so that each query returns different results); see the sketch below.
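In simplified form, the measurement loop looks like this (a sketch; run_query is a placeholder standing in for each database's query fragment shown further down):

import random
import statistics
import time

AMOUNTS = [10_000, 20_000, 40_000, 80_000, 160_000, 320_000]
REPEATS = 10
TOTAL_ROWS = 6_000_000  # placeholder for the size of the queried data

def benchmark(run_query):
    # run_query(amount, offset) wraps one of the per-database fragments below
    averages = {}
    for amount in AMOUNTS:
        times = []
        for _ in range(REPEATS):
            # a different offset each run, kept within the data size
            offset = random.randint(0, TOTAL_ROWS - amount)
            start = time.time()
            run_query(amount, offset)
            times.append(time.time() - start)
        averages[amount] = statistics.mean(times)
    return averages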
My scripts (written in Python) measure the time for each database with fragments like these:
Redis:
start_time = time.time()
pipeline = connection.pipeline()
for key in filtered_keys:
    pipeline.hgetall(key)  # queue one HGETALL per filtered key
values = pipeline.execute()  # send all queued commands in one round trip
end_time = time.time()
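For context, filtered_keys is prepared before the timer starts. In simplified form the filtering step could look like this (a sketch, again assuming the "row:{primary_key}" key pattern; this part is not timed):

# Collect the keys of rows whose sensor_id is 1, 2, or 8 (not timed)
wanted = {b"1", b"2", b"8"}
filtered_keys = [
    key
    for key in connection.scan_iter(match="row:*")
    if connection.hget(key, "sensor_id") in wanted
]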
MySQL:
start_time = time.time()
# SELECT query; LIMIT {Offset}, {Amount} skips Offset rows, then returns Amount rows
query = f"SELECT * FROM {table_name} WHERE sensor_id IN ({sensor_ids_str}) LIMIT {Offset}, {Amount};"
cursor.execute(query)
# Fetch the results
results = cursor.fetchall()
# Record the end time
end_time = time.time()
PostgreSQL:
start_time = time.time()
select_query = f"SELECT * FROM {table_name} WHERE sensor_id IN ({sensor_ids_str}) OFFSET {random_offset} LIMIT {rows_amount};"
cursor.execute(select_query)
result = cursor.fetchall()
end_time = time.time()
InfluxDB:
sensor_ids = [1, 2, 8]
sensor_ids_str = ' OR '.join([f"sensor_id='{sensor_id}'" for sensor_id in sensor_ids])
time1 = time.time()
# Perform the SELECT query
result = client.query(f'SELECT * FROM {measurement_name} WHERE ({sensor_ids_str}) LIMIT {num_points} OFFSET {random_offset};')
time2 = time.time()
The part that concerns me is that the results of these queries show Redis taking the longest to finish this operation, although I believed it should be more efficient as an in-memory database. I am currently reading about possible reasons for the results I got. I must be missing something, so if you know anything about this case, please share your thoughts. 🙂
The results I got:
[graph of the results]
The graph shows the 4 databases across the query sizes, with query times split into runs with database disturbance and without it (the disturbance means simultaneous write operations). That is another thing I tested, but the main question is why Redis has the longest retrieval time. Thanks.
For the 320k-row retrieval, Redis took under 12 seconds, while PostgreSQL needed under 4 seconds.
Before I included filtering by sensor_id, Redis had nearly the best performance results. From the moment I included the filtering and changed the script accordingly, Redis became the slowest. I suspect it may not handle the filtering well, but that shouldn't be the problem, because I only measure the pipeline that retrieves the values for the already-filtered list of keys.
[results without interruptions, before including the filtering]
I don't currently have the old code, but it wasn't using pipelining for Redis, and yet the results were much better.