In the article about Buffers in PostgreSQL we kept adding EXPLAIN (ANALYZE, BUFFERS) to every query without giving much thought to the output. Time to fix that. PostgreSQL breaks down buffer usage for each plan node, and once you learn to read those numbers, you'll know exactly where your query spent time waiting for I/O - and where it didn't have to. That's about as fundamental as it gets when diagnosing performance problems.
PostgreSQL 18: BUFFERS by Default
Starting with PostgreSQL 18, EXPLAIN ANALYZE automatically includes buffer statistics - you no longer need to explicitly add BUFFERS. The examples below use the explicit syntax for compatibility with older versions, but on PG18+ a simple EXPLAIN ANALYZE gives you the same information.
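To illustrate, both invocations below produce the same buffer statistics on PostgreSQL 18 (the query uses the orders table we define in the next section):

```sql
-- PostgreSQL 18+: buffer statistics are included automatically
EXPLAIN ANALYZE SELECT count(*) FROM orders;

-- Equivalent on any supported version: request them explicitly
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM orders;
```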
A complete example
For this article we will use the following schema and seed data.
CREATE TABLE customers (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text NOT NULL
);
CREATE TABLE orders (
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
customer_id integer NOT NULL REFERENCES customers(id),
amount numeric(10,2) NOT NULL,
status text NOT NULL DEFAULT 'pending',
note text,
created_at date NOT NULL DEFAULT CURRENT_DATE
);
INSERT INTO customers (name)
SELECT 'Customer ' || i
FROM generate_series(1, 2000) AS i;
-- seed data: ~100,000 orders spread across roughly three years (2022-2024)
INSERT INTO orders (customer_id, amount, status, note, created_at)
SELECT
(random() * 1999 + 1)::int,
(random() * 500 + 5)::numeric(10,2),
(ARRAY['pending','shipped','delivered','cancelled'])[floor(random()*4+1)::int],
CASE WHEN random() < 0.3 THEN 'Some note text here for padding' ELSE NULL END,
'2022-01-01'::date + (random() * 1095)::int -- ~3 years of data
FROM generate_series(1, 100000);
-- make sure stats are up to date
ANALYZE customers;
ANALYZE orders;
-- we deliberately skip indexes (beyond the primary keys)
-- and run a sample query
SELECT count(*) FROM customers;
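Before reading any plans, it helps to know how big these tables are in the unit EXPLAIN reports: 8 kB pages (the default block size). A quick way to check, assuming default settings:

```sql
-- number of 8 kB pages (blocks) each relation occupies on disk;
-- the "shared hit/read" counts in EXPLAIN output are in these same units
SELECT relname,
       pg_relation_size(oid) / current_setting('block_size')::int AS pages
FROM pg_class
WHERE relname IN ('customers', 'orders');
```

With the seed data above, expect orders to occupy on the order of several hundred pages and customers only a handful; exact numbers vary with padding and fill.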
Let's start with a simple query:
EXPLAIN (ANALYZE, BUFFERS)
SELECT
[...]