Edit note: This question was updated a couple of times for clarification.
The problem
We have a hypertable that holds the latest device readings. At the moment we have around 1,000 devices, each registering data every minute (some devices every second). Each device can register different types of readings (on average 10 different reading types per device), and in the future the requirements could grow to 1 million devices. The insertion rate is really high, which is why we decided to partition this table. Also, we are using a single storage device (so the rule of multiplying the number of storage devices by some number to get the number of partitions becomes 1 x N).
The key here is to know how to determine N.
We have two hypertables: the first one holds all the readings registered by our devices, and the other one holds just the latest reading per device and reading type.
Example
We have a device that registers temperature and location (two different reading types). Every time the device registers a new reading, it is saved in the “Readings” table. But in an hour a single device can register a lot of readings, which is why we created the other table, named “LatestReadings”, that holds the latest reading (of each type) that the device has registered. The same applies to all our devices. So in this tiny example “Readings” would have a lot of rows, but “LatestReadings” would have just two.
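For context, here is a minimal sketch of the two hypertables; the table and column names (device_id, reading_type, value, time) are illustrative, not our exact schema:

-- All readings ever registered by the devices.
CREATE TABLE readings (
    time         TIMESTAMPTZ NOT NULL,
    device_id    UUID        NOT NULL,
    reading_type TEXT        NOT NULL,
    value        DOUBLE PRECISION
);
SELECT create_hypertable('readings', 'time');

-- Intended to hold one row per (device, reading type),
-- kept up to date on every new reading.
CREATE TABLE latest_readings (LIKE readings);
SELECT create_hypertable('latest_readings', 'time');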
Database specifications
- 8 cores
- 64 GB RAM
- 850+ GB used by Postgres
- The hypertable that we want to partition has a size of ~655 GB with ~450,000,000 rows.
Hypertables are already partitioned by time, but we need to add another dimension (another level of partitioning) on a column that holds a UUID, so we are going to use hash partitioning. This is why I’m asking this question.
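As a sketch, this is roughly what we have in mind (table and column names are illustrative, and the 8 is just a placeholder for the N we are trying to determine):

-- Add a hash-partitioned dimension on the device UUID column.
-- Note: add_dimension has to be run on an empty hypertable,
-- so an existing table would need its data migrated.
SELECT add_dimension('latest_readings', 'device_id', number_partitions => 8);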
The questions
In our case we are going to add the partitioning dimension on the “LatestReadings” table because we expect a huge increase in the number of devices.
Knowing that partitioning can bring some drawbacks if we create more partitions than necessary, how can we find the right number of partitions to create?
From what I’ve been able to research, it’s important to know:
- how many disks you have (and use a multiple of that number)
- the size of your partitions, which should be between roughly 100 MB and a couple of GB (see the query sketch after this list)
- if you use hash partitioning, use a power-of-two (2^N) number of partitions
- how many cores you have and how many chunks will be involved in a query (so as not to exceed that number of cores)
- for skewed data, it’s useful to have smaller partitions for the most frequently accessed data and larger partitions for rarely accessed data
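For the partition-size guideline, I’m assuming a query like the following (table name is illustrative) can be used to check the current chunk sizes:

-- List chunk sizes so they can be compared against the
-- ~100 MB to a couple of GB guideline.
SELECT chunk_name,
       pg_size_pretty(total_bytes) AS total_size
FROM chunks_detailed_size('latest_readings')
ORDER BY total_bytes DESC;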
You don’t need to determine the number of partitions; you partition by time. You can also choose to merge chunks with compress_chunk_time_interval when you compress them, as a way to balance the size of your chunks.
ALTER TABLE <table_name> SET (
    timescaledb.compress,
    timescaledb.compress_orderby = '<column_name> [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
    timescaledb.compress_segmentby = '<column_name> [, ...]',
    timescaledb.compress_chunk_time_interval = 'interval'
);
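As a concrete sketch (the table and column names are just assumptions based on your description), the settings could look like this:

-- Segment compressed data by device, order rows by time inside each segment,
-- and roll adjacent chunks into 7-day compressed chunks.
ALTER TABLE readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby = 'time DESC',
    timescaledb.compress_chunk_time_interval = '7 days'
);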
Check this related issue https://github.com/timescale/timescaledb/issues/6720
You can think about testing the chunk size and changing it over time as more devices are added:
https://www.timescale.com/blog/timescale-cloud-tips-testing-your-chunk-size
[UPDATED]
The idea is to keep the chunks at around 25% of RAM, so if you have 8 GB, a chunk can be around 2 GB. Let’s say that when you start you only have 100 devices, and testing shows that your chunk time interval can be around 6 months. During the 5th month you already see that the chunk is over 2 GB, so you use set_chunk_time_interval to adjust it again. It will not affect the current chunk, but it will apply to the next chunks.
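For example (the hypertable name and interval are illustrative):

-- Only affects chunks created after this call, not the current one.
SELECT set_chunk_time_interval('readings', INTERVAL '1 month');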
If you want to go the other way around, you can also start with smaller chunks and merge them when you compress.
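With compress_chunk_time_interval set as above, a regular compression policy takes care of rolling the small chunks into bigger compressed ones (the name and interval here are illustrative):

-- Compress (and thereby merge) chunks once they are older than 7 days.
SELECT add_compression_policy('readings', INTERVAL '7 days');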
Think about adding a small background job that watches the chunks’ detailed size daily and adjusts the interval for the upcoming chunks.
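A rough sketch of such a job, using a user-defined action (the hypertable name and the 2 GB threshold are assumptions you would adapt):

-- Daily check that warns when any chunk grows past ~2 GB.
CREATE OR REPLACE PROCEDURE check_chunk_sizes(job_id INT, config JSONB)
LANGUAGE plpgsql AS $$
DECLARE
    oversized RECORD;
BEGIN
    FOR oversized IN
        SELECT chunk_name, total_bytes
        FROM chunks_detailed_size('readings')
        WHERE total_bytes > pg_size_bytes('2 GB')  -- tune to ~25% of your RAM
    LOOP
        RAISE WARNING 'chunk % is %, consider lowering the chunk time interval',
            oversized.chunk_name, pg_size_pretty(oversized.total_bytes);
    END LOOP;
END;
$$;

-- Run the check once a day.
SELECT add_job('check_chunk_sizes', INTERVAL '1 day');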