We’re using TimescaleDB in an IoT environment, so we do a lot of inserts with no issues. In this case, however, we’re seeing one particular SELECT query that is sometimes very slow (10–15 seconds), even in the QA environment under very low load. The normal execution time is around 400ms (across a VPN, using pgAdmin).
This is the query; the parameters will of course vary for each call:
SELECT
    t.time :: timestamp + '60 minutes' :: INTERVAL,
    t.data ->> 'gfdm:LEVEL',
    t.data ->> ''
FROM q_telemetry.telemetry_data AS t
WHERE t.device_twin_id = '3de7116b-e04b-446b-af8e-51a579222423'
  AND t.data ? 'gfdm:LEVEL'
  AND t.time >= '2024-06-14T01:00' :: timestamp
  AND t.time < '2024-06-14T07:00' :: timestamp
ORDER BY t.time
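To capture what actually happens during a slow run, the same query can be wrapped in EXPLAIN with buffer statistics (a sketch of how I run it; the query text itself is unchanged from above):

```sql
-- Run during a slow occurrence to see actual row counts, timings,
-- and whether pages come from shared buffers or disk.
EXPLAIN (ANALYZE, BUFFERS)
SELECT
    t.time :: timestamp + '60 minutes' :: INTERVAL,
    t.data ->> 'gfdm:LEVEL'
FROM q_telemetry.telemetry_data AS t
WHERE t.device_twin_id = '3de7116b-e04b-446b-af8e-51a579222423'
  AND t.data ? 'gfdm:LEVEL'
  AND t.time >= '2024-06-14T01:00' :: timestamp
  AND t.time < '2024-06-14T07:00' :: timestamp
ORDER BY t.time;
```

Comparing the BUFFERS numbers between a fast and a slow run should show whether the slowdown is I/O (cold cache) or a different plan.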
This is the table-creation SQL (from pgAdmin's create script):
CREATE TABLE IF NOT EXISTS q_telemetry.telemetry_data
(
device_twin_id uuid NOT NULL,
"time" timestamp with time zone NOT NULL,
write_time timestamp with time zone NOT NULL,
data jsonb NOT NULL,
type q_telemetry.telemetry_type
);
CREATE INDEX IF NOT EXISTS idx_telemetry_data
ON q_telemetry.telemetry_data USING gin
(data)
TABLESPACE pg_default;
CREATE INDEX IF NOT EXISTS idx_telemetry_devicetwinid_time
ON q_telemetry.telemetry_data USING btree
(device_twin_id ASC NULLS LAST, "time" ASC NULLS LAST)
TABLESPACE pg_default;
CREATE INDEX IF NOT EXISTS telemetry_data_time_idx
ON q_telemetry.telemetry_data USING btree
("time" DESC NULLS FIRST)
TABLESPACE pg_default;
CREATE TRIGGER ts_insert_blocker
BEFORE INSERT
ON q_telemetry.telemetry_data
FOR EACH ROW
EXECUTE FUNCTION _timescaledb_internal.insert_blocker();
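Since the insert_blocker trigger indicates this is a TimescaleDB hypertable, I also looked at the chunk layout, in case the time range spans many chunks (a sketch using the standard timescaledb_information views; adjust the hypertable name if yours differs):

```sql
-- How many chunks does the hypertable have in total?
SELECT hypertable_name, num_chunks
FROM timescaledb_information.hypertables
WHERE hypertable_name = 'telemetry_data';

-- Which chunks cover the most recent time ranges?
SELECT chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'telemetry_data'
ORDER BY range_start DESC
LIMIT 10;
```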
These are the results from running the query analysis (EXPLAIN ANALYZE) in pgAdmin:
I’m not knowledgeable enough to determine whether that plan looks optimal, so I could use some input and improvement suggestions.