The table has close to 250 million records. I am trying to update certain records, and the update takes close to 45 s. I want to understand whether there is a way to optimise it.
Query:
UPDATE "commission"
SET "knowledge_end_date" = '2024-07-31T01:47:41.053562+00:00'::timestamptz
WHERE "commission"."client_id" = 93
  AND NOT "commission"."is_deleted"
  AND "commission"."knowledge_end_date" IS NULL
  AND "commission"."payee_email_id" = '*******'
  AND ( "commission"."period_end_date" BETWEEN
          '2024-04-01T00:00:00+00:00'::timestamptz
          AND '2024-06-30T23:59:59.999999+00:00'::timestamptz
        OR "commission"."period_start_date" BETWEEN
          '2024-04-01T00:00:00+00:00'::timestamptz
          AND '2024-06-30T23:59:59.999999+00:00'::timestamptz )
  AND "commission"."criteria_id" IN (
        '354dec74-2f1e-4413-8c3e-ac3b2ffde584'::uuid,
        '563f15a2-e5f8-46c0-94c3-65cec8e9f841'::uuid,
        '1f0626d9-5d5a-4781-8c34-2385b8c0edf0'::uuid,
        'e7d2fa13-383c-45ff-a9c1-55fb6c353634'::uuid,
        'c017efdb-7774-4374-97ba-0a4938924945'::uuid,
        '6a9c6d90-a1ca-4af7-8a05-393f2314367a'::uuid,
        '0afaf878-47e3-4845-8203-0d9dbdbe152a'::uuid,
        '4e459ae7-0868-4604-b4f3-62679a05db59'::uuid,
        '8953abd5-1881-4bf5-ba1a-34c03eeb6331'::uuid
      );
Query Plan:
Update on public.commission (cost=0.70..808370.78 rows=5690 width=650) (actual time=47317.841..47317.842 rows=0 loops=1)
Update on public.commission_93
Buffers: shared hit=2321291 read=13061 written=261
I/O Timings: read=39525.082
-> Index Scan using commission_93_client_id_is_deleted_knowledge_end_date_payee_idx on public.commission_93 (cost=0.70..808370.78 rows=5690 width=650) (actual time=0.023..198.990 rows=62497 loops=1)
Output: commission_93.temporal_id, commission_93.knowledge_begin_date, '2024-07-31 01:47:41.053562+00'::timestamp with time zone, commission_93.is_deleted, commission_93.additional_details, commission_93.period_start_date, commission_93.period_end_date, commission_93.payee_email_id, commission_93.commission_plan_id, commission_93.criteria_id, commission_93.line_item_type, commission_93.tier_id, commission_93.amount, commission_93.client_id, commission_93.commission_snapshot_id, commission_93.context_ids, commission_93.line_item_id, commission_93.primary_kd, commission_93.secondary_kd, commission_93.secondary_snapshot_id, commission_93.show_do_nothing, commission_93.commission_date, commission_93.original_tier_id, commission_93.ctid
Index Cond: ((commission_93.client_id = 93) AND (commission_93.is_deleted = false) AND (commission_93.knowledge_end_date IS NULL) AND ((commission_93.payee_email_id)::text = '********'::text))
Filter: ((((commission_93.period_end_date >= '2024-04-01 00:00:00+00'::timestamp with time zone) AND (commission_93.period_end_date <= '2024-06-30 23:59:59.999999+00'::timestamp with time zone)) OR ((commission_93.period_start_date >= '2024-04-01 00:00:00+00'::timestamp with time zone) AND (commission_93.period_start_date <= '2024-06-30 23:59:59.999999+00'::timestamp with time zone))) AND (commission_93.criteria_id = ANY ('{354dec74-2f1e-4413-8c3e-ac3b2ffde584,563f15a2-e5f8-46c0-94c3-65cec8e9f841,1f0626d9-5d5a-4781-8c34-2385b8c0edf0,e7d2fa13-383c-45ff-a9c1-55fb6c353634,c017efdb-7774-4374-97ba-0a4938924945,6a9c6d90-a1ca-4af7-8a05-393f2314367a,0afaf878-47e3-4845-8203-0d9dbdbe152a,4e459ae7-0868-4604-b4f3-62679a05db59,8953abd5-1881-4bf5-ba1a-34c03eeb6331}'::uuid[])))
Rows Removed by Filter: 61754
Buffers: shared hit=16983
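For context: the index scan returns 62497 rows, but the filter then discards 61754 of them, so roughly half the fetched rows are wasted work. One thing I'm considering (just a sketch — the index name and column order here are my assumption, not the current schema) is extending the partial/composite index so that criteria_id is satisfied inside the index rather than in the filter:

```sql
-- Hypothetical index: adds criteria_id so the IN-list can be checked
-- via the index instead of the row-level filter. Built CONCURRENTLY
-- to avoid blocking writes on the partition.
CREATE INDEX CONCURRENTLY commission_93_client_payee_criteria_idx
    ON commission_93 (client_id, payee_email_id, criteria_id)
    WHERE NOT is_deleted AND knowledge_end_date IS NULL;
```

Whether this actually helps would depend on the selectivity of criteria_id and on index maintenance cost for a 250M-row table, so I'd want to validate it with EXPLAIN (ANALYZE, BUFFERS) first.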
- Why is the buffer usage so much higher on the write side (Buffers: shared hit=2321291 read=13061 written=261) than on the read side (Buffers: shared hit=16983)?
- How do we bring down the I/O timing (I/O Timings: read=39525.082)?
- Can any configuration change be made at the Postgres level to speed up writes, such as relaxing synchronous replication or tuning checkpoint behaviour?
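For reference, one workaround I'm evaluating is splitting the update into smaller batches so each statement touches fewer pages and holds locks for less time. This is a minimal sketch, assuming temporal_id is a usable unique key and a batch size of 10000 (both are assumptions, not measured values):

```sql
-- Sketch: update in batches keyed by temporal_id (assumed unique key).
-- Because the batch predicate includes knowledge_end_date IS NULL and the
-- UPDATE sets that column, re-running this statement naturally walks
-- through the remaining rows; repeat until it reports 0 rows affected.
WITH batch AS (
    SELECT temporal_id
    FROM commission
    WHERE client_id = 93
      AND NOT is_deleted
      AND knowledge_end_date IS NULL
      AND payee_email_id = '*******'
    LIMIT 10000
)
UPDATE commission c
SET knowledge_end_date = '2024-07-31T01:47:41.053562+00:00'::timestamptz
FROM batch
WHERE c.temporal_id = batch.temporal_id;
```

The idea is to trade one long 45 s transaction for many short ones, which also spreads the WAL and checkpoint load; I'd still need to apply the remaining period/criteria filters inside the CTE to match the original statement exactly.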