I’m working on a project that manages large time-series datasets in GridDB. While GridDB performs well on smaller datasets, query latency has degraded significantly as the data has grown, and certain queries are now much slower than expected.
Details About the Problem
Here are some specifics to help contextualize the issue:
- Dataset size: 50 million records (approximately 200 GB of data).
- Query performance: a query filtering over a 1-month range currently takes about 10 seconds to execute. Ideally, I’d like this to complete in under 2 seconds.
- Hardware: 16-core CPU, 32 GB RAM, SSD storage.
Example Query
Here’s an example of the query I’m using:
SELECT AVG(temperature), MAX(humidity)
FROM sensor_data
WHERE timestamp BETWEEN '2023-01-01' AND '2023-01-31'
  AND location = 'warehouse-1';
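For completeness, I’ve also been looking at the query plan to check whether my indexes are actually used. This is roughly how I inspect it (assuming GridDB’s SQL interface, which I believe supports EXPLAIN ANALYZE; the exact output format may vary by version):

```sql
-- Inspect the execution plan for the slow query
-- (assumes GridDB's SQL/NewSQL interface; support may differ by version)
EXPLAIN ANALYZE
SELECT AVG(temperature), MAX(humidity)
FROM sensor_data
WHERE timestamp BETWEEN '2023-01-01' AND '2023-01-31'
  AND location = 'warehouse-1';
```

So far I haven’t been able to tell from the plan whether the scan is being pruned to a single partition.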
What I’ve Tried
To address the performance issues, I’ve attempted the following steps:
- Indexing: created an index on the timestamp and location columns.
- Partitioning: divided the data by month using GridDB’s container partitioning feature.
- Resource allocation: adjusted memory settings for the query engine in GridDB.
Despite these efforts, I haven’t seen a noticeable improvement in query performance.
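For reference, this is a sketch of roughly how my table and indexes are set up (syntax based on my reading of GridDB’s SQL DDL; the table definition, partition interval, and index name here are illustrative, not my exact production schema):

```sql
-- Illustrative DDL, assuming GridDB's SQL (NewSQL) interface.
-- Interval partitioning on the timestamp column, roughly one partition per month:
CREATE TABLE sensor_data (
    timestamp   TIMESTAMP NOT NULL,
    location    VARCHAR,
    temperature DOUBLE,
    humidity    DOUBLE,
    PRIMARY KEY (timestamp)
) PARTITION BY RANGE (timestamp) EVERY (30, DAY);

-- Composite index covering both filter columns, instead of
-- two separate single-column indexes on location and timestamp:
CREATE INDEX idx_location_time ON sensor_data (location, timestamp);
```

One thing I’m unsure about is whether two separate single-column indexes (what I have now) behave differently from a single composite index on (location, timestamp) for this kind of range-plus-equality filter.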
Questions
- Are there additional optimizations I can apply to the query or container configuration in GridDB to handle this scale more efficiently?
- Is a 10-second query time normal for a dataset of this size, or am I missing something in my GridDB setup?
Any insights or suggestions would be greatly appreciated!