I would like to understand why my database (Postgres in Docker) is so slow.
The table has 35 columns and 3.8M records; its size is roughly 2 GB.
The queries I typically run look like this:
SELECT x, y, z,
       count(DISTINCT w) AS count_1
FROM criminal_acts
WHERE y LIKE 'xyz'
GROUP BY 1, 2, 3;
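If the execution plan helps, I can post it as well; I would capture it by wrapping the same query (x, y, z, w stand in for the real column names):

EXPLAIN (ANALYZE, BUFFERS)
SELECT x, y, z,
       count(DISTINCT w) AS count_1
FROM criminal_acts
WHERE y LIKE 'xyz'
GROUP BY 1, 2, 3;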
Normally this kind of query takes 12-15 seconds, which from my point of view is very slow.
Regarding indexes: in our application the user can freely choose the x, y, z columns, and the WHERE condition is also built from whatever the user wants to filter on.
I am wondering which measures I should take to improve performance?
I tried VACUUM and adding an index on every column, but the improvement is very minor (see the index sketch below).
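The indexes I added look roughly like this, i.e. plain single-column btree indexes (column names are placeholders for the real ones):

CREATE INDEX ON criminal_acts (x);
CREATE INDEX ON criminal_acts (y);
CREATE INDEX ON criminal_acts (z);
CREATE INDEX ON criminal_acts (w);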
Is the problem perhaps Docker or the virtual machine? We have 8 GB of RAM, and the disk has the following stats:
sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.505 s, 713 MB/s
sudo dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.244821 s, 4.4 GB/s
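In case the container configuration matters, I can also post the relevant Postgres memory settings; I would check them with:

SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;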