PostgreSQL 13 - Performance improvement for deleting a large amount of table data

I am using PostgreSQL 13 and have intermediate-level experience with PostgreSQL.

I have a table named tbl_employee. It stores employee details for a number of customers.

Below is my table structure, with each column's data type, index name, and index access method:

  Column     |            Data Type        |     Index name            | Idx Access Type
-------------+-----------------------------+---------------------------+---------------------------
 id          | bigint                      |                           |
 name        | character varying           |                           | 
 customer_id | bigint                      |  idx_customer_id          | btree
 is_active   | boolean                     |  idx_is_active            | btree
 is_delete   | boolean                     |  idx_is_delete            | btree
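
For anyone reproducing the setup, here is a hypothetical DDL sketch matching the layout above. The column and index names come from the table; the exact original definitions (constraints, keys, NOT NULLs) are not shown in the question, so treat these as assumptions:

CREATE TABLE tbl_employee (
    id          bigint,
    name        character varying,
    customer_id bigint,
    is_active   boolean,
    is_delete   boolean
);

-- btree is PostgreSQL's default index access method
CREATE INDEX idx_customer_id ON tbl_employee USING btree (customer_id);
CREATE INDEX idx_is_active   ON tbl_employee USING btree (is_active);
CREATE INDEX idx_is_delete   ON tbl_employee USING btree (is_delete);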

I want to delete employees for specific customer by customer_id.

The table contains 1,800,000+ records in total.

When I execute the query below for customer_id 1001, it returns a count of 85,000.

SELECT COUNT(*) FROM tbl_employee WHERE customer_id=1001;

When I delete this customer's records using the query below, it takes 2 hours, 45 minutes to complete.

DELETE FROM tbl_employee WHERE customer_id=1001

Problem

My concern is that this query should take less than a minute to delete the records. Is it normal for it to take this long, or is there a way to optimise it and reduce the execution time?

Below is the EXPLAIN output of the DELETE query:

[Image: EXPLAIN output of the DELETE query]
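
(For anyone reproducing the measurement: the plan and actual timings can be captured without permanently deleting anything by wrapping the statement in a transaction and rolling it back. This is a standard PostgreSQL technique, sketched here; note that EXPLAIN ANALYZE really executes the DELETE, which is why the ROLLBACK matters.)

BEGIN;
-- ANALYZE executes the statement and reports actual row counts and timings
EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM tbl_employee WHERE customer_id=1001;
-- undo the delete so the table is left unchanged
ROLLBACK;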

The planner cost settings are seq_page_cost = 1 and random_page_cost = 4.

Below is the number of pages occupied by the table tbl_employee, taken from pg_class.
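
The page count can be read with a query along these lines (relpages and reltuples are standard pg_class columns; note that both are estimates maintained by VACUUM and ANALYZE):

SELECT relpages, reltuples
FROM pg_class
WHERE relname = 'tbl_employee';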

[Image: page counts for tbl_employee from pg_class]

Please guide. Thanks.



Solution 1:[1]

While running:

DELETE FROM tbl_employee WHERE customer_id=1001

is any other operation accessing this table? If this SQL statement is the only one accessing the table, I would not expect it to take so much time.
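
One way to check is to run something like this in another session while the DELETE is executing (a sketch; pg_stat_activity and pg_blocking_pids() exist in PostgreSQL 9.6 and later, and the ILIKE filter is only an illustration):

-- shows sessions whose current query mentions the table, and which PIDs block them
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       state,
       wait_event_type,
       wait_event,
       query
FROM pg_stat_activity
WHERE query ILIKE '%tbl_employee%';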

Solution 2:[2]

In an RDBMS, each SQL statement is its own transaction unless it is wrapped in BEGIN; and COMMIT; to form a multi-statement transaction.
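
For concreteness, a multi-statement transaction looks like this (the statements are only illustrative, using the columns from the question):

BEGIN;
UPDATE tbl_employee SET is_active = false WHERE customer_id=1001;
DELETE FROM tbl_employee WHERE customer_id=1001;
COMMIT;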

It's possible your multi-row DELETE statement is generating a very large transaction that forces PostgreSQL to thrash, spilling its transaction state from RAM to disk.

You can try repeating this statement until you've deleted all the rows you need to delete:

DELETE FROM tbl_employee WHERE customer_id=1001 LIMIT 1000;

Doing it this way keeps each transaction smaller and may avoid the thrashing.

Solution 3:[3]

The statement from Solution 2, DELETE FROM tbl_employee WHERE customer_id=1001 LIMIT 1000;, will not work: PostgreSQL's DELETE does not support a LIMIT clause.

To delete in smaller batches, you can try this instead:

DELETE FROM tbl_employee WHERE ctid IN (SELECT ctid FROM tbl_employee WHERE customer_id=1001 LIMIT 1000);

Repeat until there is nothing left to delete. Here ctid is a system column present in every PostgreSQL table; it identifies the physical location of a row version.
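
A sketch of the full loop in server-side PL/pgSQL, which is my addition rather than part of the original answer. Note that COMMIT inside a DO block requires PostgreSQL 11 or later, and the block must be run outside an explicit transaction:

DO $$
DECLARE
    rows_deleted integer;
BEGIN
    LOOP
        -- delete one batch of at most 1000 rows for this customer
        DELETE FROM tbl_employee
        WHERE ctid IN (SELECT ctid
                       FROM tbl_employee
                       WHERE customer_id=1001
                       LIMIT 1000);
        GET DIAGNOSTICS rows_deleted = ROW_COUNT;
        EXIT WHEN rows_deleted = 0;
        -- commit each batch so locks and transaction size stay small
        COMMIT;
    END LOOP;
END
$$;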

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

 Solution   | Source
------------+-----------
 Solution 1 | SeanH
 Solution 2 | O. Jones
 Solution 3 | SeanH