In Postgres 16.2 I have a table Product with an extra index:
CREATE TABLE Product (id SERIAL, name text, companyId integer, PRIMARY KEY (id));
CREATE INDEX product_company_id ON Product (companyId);
The table has about 12M rows.
When I search for a row with a companyId that exists in the table, the planner uses the expected product_company_id index and execution takes about 100 ms:
select id, name, companyId from Product where companyId=12 order by id limit 1;
But when I search for a row with a companyId that does not exist, execution is very slow, about 7 s. According to EXPLAIN ANALYZE, the index on companyId is not used:
Limit (cost=0.09..208.24 rows=1 width=43) (actual time=6404.660..6404.661 rows=0 loops=1)
  -> Index Scan Backward using "Product_pkey" on "Product" (cost=0.09..562625.56 rows=2703 width=43) (actual time=6404.659..6404.659 rows=0 loops=1)
       Filter: (companyid = 667)
       Rows Removed by Filter: 11797182
Planning Time: 0.100 ms
Execution Time: 6404.674 ms
Removing the LIMIT fixes the problem and the index is used again, but the LIMIT is important for my use case.
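For reference, this is the variant without the LIMIT that behaves well (a sketch; the exact plan and timings will of course depend on your data and statistics):

```sql
-- Without LIMIT the planner chooses product_company_id and
-- returns immediately when no rows match companyId = 667:
EXPLAIN ANALYZE
SELECT id, name, companyId
FROM Product
WHERE companyId = 667
ORDER BY id;
```

With the LIMIT present, the planner instead gambles on walking the primary-key index backward and stopping at the first match, which degenerates into a full scan when no match exists.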
Is there any way to force the database to use the companyId index, so it can quickly determine that there are no matching rows?