Latest Blog Posts

PgPedia Week, 2025-10-26
Posted by Ian Barwick on 2025-11-06 at 10:53
PostgreSQL 19 changes this week

  • ALTER SUBSCRIPTION: REFRESH SEQUENCES syntax added
  • error_on_null() added for checking if the input is the NULL value
  • COPY TO now works with partitioned tables
  • full text search: database default collation now used for parsing
  • psql: Improve tab completion for large objects.

PostgreSQL 19 articles

  • Waiting for PostgreSQL 19 – Support COPY TO for partitioned tables. (2025-10-22) - Hubert 'depesz' Lubaczewski

PostgreSQL 18 articles

  • Say Hello to OIDC in PostgreSQL 18! (2025-10-22) - Jan Wieremjewicz / Percona

more...

Seamless PostgreSQL subscriber upgrades: Preserving Logical Replication state
Posted by vignesh C in Fujitsu on 2025-11-05 at 23:59

Upgrading to a new major version of PostgreSQL has become faster and safer with pg_upgrade, but for teams using logical replication, a persistent pain point remained: the state of their subscribers was lost, forcing a tedious and high-risk rebuild of their replication setups.

Producing UUIDs Version 7 disguised as Version 4 (or 8)
Posted by Daniel Vérité on 2025-11-05 at 14:13
When communicating UUID-v7 values to third parties, the creation dates inside them are leaked. In this post, let's see how to encrypt the timestamp to solve that problem.
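For a quick illustration of the leak (a hedged sketch, not from the post, using PostgreSQL 18's uuidv7() and the uuid_extract_timestamp() function added in v17):

WITH t AS (SELECT uuidv7() AS id)
SELECT id,
       uuid_extract_timestamp(id) AS embedded_creation_time  -- the "leaked" date
FROM t;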

PgPedia Week, 2025-10-19
Posted by Ian Barwick on 2025-11-05 at 13:30

Due to an unfortunate recent visitation by the Virus of the Decade (so far), I have a backlog of these which I'm trying to work through, so in the remote chance anyone is waiting with bated breath for the newest editions, my apologies. Normal service will be resumed as soon as humanly possible.

PostgreSQL 19 changes this week

  • Add log_autoanalyze_min_duration

PostgreSQL 19 articles

  • Waiting for PostgreSQL 19 – Add IGNORE NULLS/RESPECT NULLS option to Window functions. (2025-10-13) - Hubert 'depesz' Lubaczewski

PostgreSQL 18 articles

  • Benchmarking Postgres 17 vs 18 (2025-10-14) - Ben Dickens / PlanetScale

more...

October PUG Recording
Posted by Henrietta Dombrovskaya on 2025-11-05 at 11:28

Almost a month late, but I hope you enjoy it!

NUMA, Linux, and PostgreSQL before libnuma Support
Posted by Chris Travers in Stormatics on 2025-11-05 at 09:26

PostgreSQL and NUMA, part 2 of 4

This series covers the specifics of running PostgreSQL on large systems with many processors. My experience is that people often spend months learning the basics when confronted with the problem. This series tries to dispel these difficulties by providing a clear background into the topics in question. The hope is that future generations of database engineers and administrators don’t have to spend months figuring things out through trial and error.

This article builds on the previous entry, “An Introduction to NUMA” and assumes a general working knowledge of NUMA not only in definition but in hardware and software implementation as well.

This entry in the series covers the general interaction of PostgreSQL and Linux on NUMA systems. The topic is complex, so some things are necessarily simplified. Still, this distills the general information about running PostgreSQL 17 and below (or Postgres 18 without libnuma support) on NUMA systems, with Linux as a reference point. By the end of this blog entry, you should be able to run Postgres on a NUMA system and also understand why the libnuma support in PostgreSQL 18 is so important.

A Quick Recap

NUMA represents an effort at creating an architecture that localizes parallel memory access by allocating memory to specific controllers, which serialize access to the memory they control.  Memory controllers can ask other memory controllers for memory, and this is slower than asking for local memory.

The Linux kernel has several strategies available for allocating memory (including the executable memory for new threads and processes), and these can be tuned for specific processes by an administrator using the program numactl, though systemd also supports setting NUMA policies in the unit files. These include membind, interleave, and prefer, with a default of “prefer.”

The Linux kernel can also migrate processes to

[...]

PostgreSQL Partition Pruning: The Role of Function Volatility
Posted by Deepak Mahto on 2025-11-04 at 14:39

In one of our earlier blogs, we explored how improper volatility settings in PL/pgSQL functions — namely using IMMUTABLE, STABLE, or VOLATILE — can lead to unexpected behavior and performance issues during migrations.

Today, let’s revisit that topic from a slightly different lens. This time, we’re not talking about your user-defined functions, but the ones PostgreSQL itself provides — and how their volatility can quietly shape your query performance, especially when you’re using partitioned tables.

⚙ When Partitioning Doesn’t Perform

When designing partitioning strategies, performance is usually the first thing that comes to mind.
But what if, despite all your tuning, the queries still fail to leverage partition pruning effectively?

The answer sometimes lies not in your schema or indexes, but in how your filters are evaluated.

Every function you use in a filter has its own volatility behavior that determines how it’s planned and executed. You might pick the right filter logically — but if the function is volatile, it can completely change how PostgreSQL builds and optimizes the plan.

Let's say you've designed a table with daily partitions, use date functions as partition filters, and check the underlying execution plans.

CREATE TABLE IF NOT EXISTS events_daily (
    id bigint generated always as identity ,
    created_at timestamptz NOT NULL,
    created_date date NOT NULL,      
    user_id int NOT NULL,
    amount numeric(10,2) NOT NULL,
    status text,
    payload text
) PARTITION BY RANGE (created_date);

--create a year of partitions as daily
DO
$$
DECLARE
    start_date date := current_date - INTERVAL '364 days';
    d date;
    part_name text;
BEGIN
    d := start_date;
    WHILE d <= current_date LOOP
        part_name := format('events_p_%s', to_char(d, 'YYYYMMDD'));
        EXECUTE format($sql$
            CREATE TABLE IF NOT EXISTS %I PARTITION OF events_daily
            FOR VALUES FROM (%L) TO (%L);
        $sql$, part_name, d::text, (d + INTERVAL '1 day')::da
[...]
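To make the volatility effect concrete, here is a minimal sketch against the events_daily table above (not from the post; exact plan shapes depend on your data and version): a constant lets the planner prune partitions at plan time, while a STABLE function such as CURRENT_DATE defers pruning to executor startup, which typically shows up as "Subplans Removed" in EXPLAIN.

EXPLAIN (COSTS OFF)
SELECT count(*) FROM events_daily WHERE created_date = DATE '2025-11-01';
-- plan lists only the single matching partition (plan-time pruning)

EXPLAIN (COSTS OFF)
SELECT count(*) FROM events_daily WHERE created_date = CURRENT_DATE;
-- plan keeps the Append node and reports "Subplans Removed: N" (startup-time pruning)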

Counting Customers in PostgreSQL
Posted by Hans-Juergen Schoenig in Cybertec on 2025-11-04 at 06:00

As a database consulting company, we are often faced with analytics- and reporting-related tasks which seem easy on the surface but are in reality not that trivial. The list of those seemingly simple things is longer than one might think, especially in the area of reporting.

The purpose of this post is not to share some fancy SQL techniques, some super cool magic AI tool that does whatever new hot thing or some crazy coding technique one can use in PostgreSQL. This is all about two simple things that are often forgotten or just neglected. We are of course talking about “awareness” and “clarity”.

Yes, it is that simple. If you don’t know what you have or if you don’t know what you want, all data is worthless (in a best case scenario) or even dangerous. 

Simple Things: Counting Customers

One of those seemingly simple things might be related to a trivial question: "How many customers do you have?" Every CEO will proudly talk about this number, but have you ever asked those guys a simple question: "What is a customer?"

Well, just try. In my experience the process often starts with an arrogant smile that indicates “What stupid question is that?”. But just dig deeper and keep asking for details. I can assure you: In the majority of cases, people have no clue what they are talking about.

Here are some typical scenarios:

  • If a person has not ordered for a long time, when do you stop counting him as a customer?
  • If two customers merge into a single company and rebrand: How many customers do you have?
  • If a customer has ordered an item and sent it back immediately without paying for it, is that still a customer?

One could continue this list forever and a day. In other words: it is not that simple, and many "business people" tend to think that displaying things in a pie chart has the power to change the fundamental problem: "What do you actually count?"

Example: Counting Customers in Banking

The following data structure shows a simple scenario: A bank wants t

[...]

ALTER Egos: Me, Myself, and Cursor
Posted by Mayur B. on 2025-11-04 at 05:56

I pushed the most boring change imaginable: add an index. Our CI/CD pipeline is textbook ==> spin up a fresh DB, run every migration file in one single transaction, one after another. If anything hiccups, the whole thing rolls back and the change never hits main. Foolproof autotests.

Enter The Drama Queen:

ERROR: cannot CREATE INDEX "index_name_xyz" on table abcdef_tab
because it is being used by active queries in this session

This session. Same PID, same transaction. No parallel runner. No second connection. Still blocked.

Our schema migration repository structure is as shown above. The CI tool will create a fresh DB → execute all scripts chronologically → one by one → one transaction → one session, as part of an autotest before merging any new code (DML/DDL for schema migration = infra as code) to the main branch.

Naturally ignoring the last part of the error message like a seasoned DBA, I accused the CI tool of going rogue: maybe someone "optimized" migrations to run in parallel after reading an LLM pep talk about "unleashing concurrency."

I printed the PID at the start and end of every file.

-- Stamp the migration transaction
SELECT 'start', pg_backend_pid(), txid_current();
-- ... run all migration steps ...
SELECT 'end', pg_backend_pid(), txid_current();

Same PID. Same transaction. Same sadness.

One of the least talked about but best features of Postgres as a FOSS project is that you can search for an error message in the code base, and the documentation is excellent.

  • In indexcmds.c, PostgreSQL calls CheckTableNotInUse(rel, "CREATE INDEX"); before proceeding. The comment explains why: otherwise an in-progress INSERT/UPDATE in this same session could have already picked its list of target indexes and would not update the new one. doxygen.postgresql.org
  • CheckTableNotInUse() (in tablecmds.c) raises
    ERROR: cannot %s
[...]
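For the curious, a speculative minimal reproduction (my assumption, not necessarily the post's actual root cause, which this excerpt doesn't show): an open cursor is one way a table can be "in use" by the very same session and transaction.

BEGIN;
CREATE TABLE abcdef_tab (id int);
DECLARE c CURSOR FOR SELECT * FROM abcdef_tab;
CREATE INDEX index_name_xyz ON abcdef_tab (id);
-- ERROR: cannot CREATE INDEX "abcdef_tab" because it is being used by
--        active queries in this session
ROLLBACK;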

Do you really need tsvector column?
Posted by Hubert 'depesz' Lubaczewski on 2025-11-03 at 12:34
When using tsearch one usually, often, creates a tsvector column to put data in, and then creates an index on it. But do you really need the index? I wrote once already that you don't have to, but then a person talked with me on IRC, and pointed me to this section of the docs: One advantage of the … Continue reading "Do you really need tsvector column?"
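For context, a hedged sketch of the two approaches the post contrasts (table and index names are illustrative, not from the post): a stored tsvector column with an index on it, versus an expression index with no extra column.

CREATE TABLE docs (id bigint PRIMARY KEY, body text);

-- option 1: generated tsvector column + GIN index on the column
ALTER TABLE docs
    ADD COLUMN body_tsv tsvector
    GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;
CREATE INDEX docs_body_tsv_idx ON docs USING gin (body_tsv);

-- option 2: expression index only, no extra column
CREATE INDEX docs_body_expr_idx ON docs USING gin (to_tsvector('english', body));
SELECT id FROM docs
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'postgres');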

PostgreSQL 18 enables data‑checksums by default
Posted by Josef Machytka in credativ on 2025-11-03 at 09:30

As I explained in my talk at PostgreSQL Conference Europe 2025, data corruption can be silently present in any PostgreSQL database and will remain undetected until we physically read the corrupted data. There can be many reasons why some data blocks in tables or other objects become damaged. Even modern storage hardware is far from infallible. Binary backups done with the pg_basebackup tool – a very common backup strategy in PostgreSQL environments – leave these problems hidden, because they do not check the data but copy whole data files as they are. With the release of PostgreSQL 18, the community decided to turn on data‑checksums by default – a major step toward early detection of these failures. This post examines how PostgreSQL implements checksums, how it handles checksum failures, and how we can enable them on existing clusters.

Why PostgreSQL Checksums matter

A PostgreSQL table or index is stored in 8 KB pages. When a page is written to disk, PostgreSQL computes a 16‑bit checksum using every byte of the page (except the checksum field itself) and the page’s physical block address. The checksum is stored in the page header. On every read, PostgreSQL recalculates the checksum and compares it against the stored value. Because the block address is part of the calculation, the system detects both bit flips within the page and pages written to the wrong place. Checksums are not maintained while the page sits in shared buffers – they are computed only when the page is flushed from the buffer cache to the operating system page cache. Consequently, an incorrect in‑memory page cannot be detected until it is written and read again. PostgreSQL uses a fast FNV‑1a hash (with CRC32C on WAL records) that is optimized for performance. On typical hardware the cost of calculating checksums seems to be small. Benchmarking studies found the penalty is usually less than 2 % for normal workloads. PostgreSQL 18’s release notes acknowledge that the overhead is non‑zero but accept it for the benefit of data integrit

[...]
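As a quick companion to the excerpt, a hedged sketch of how to check the checksum state and watch for failures (the pg_stat_database counters exist since PostgreSQL 12):

SHOW data_checksums;          -- 'on' on a freshly initialized PostgreSQL 18 cluster

SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database;        -- per-database checksum failure counters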

Beyond Start and End: PostgreSQL Range Types
Posted by Radim Marek on 2025-11-02 at 20:45

Beyond Start and End columns with PostgreSQL range types

One of the most read articles at boringSQL is Time to Better Know The Time in PostgreSQL where we dived into the complexities of storing and handling time operations in PostgreSQL. While the article introduced the range data types, there's so much more to them. And not only for handling time ranges. In this article we will cover why to consider range types and how to work with them.

Bug Not Invented Here

But before we can talk about range types, let's try to understand why we should look at them in the first place. Imagine a booking platform for a large flash sale of seats that goes live at 10pm and is taken by storm by thousands of people who want to get their tickets.

CREATE TABLE seat_holds (
    hold_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,

    seat_id INTEGER NOT NULL REFERENCES seats(id),
    user_session_id UUID NOT NULL,
    -- define the hold period explicitly
    hold_started_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP,
    hold_expires_at TIMESTAMPTZ NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT CURRENT_TIMESTAMP
);

CREATE INDEX seat_holds_on_seat_id ON seat_holds(seat_id);
CREATE INDEX seat_holds_on_expiration ON seat_holds(hold_expires_at);

While the table design looks perfectly reasonable, it has one serious flaw - there's no database-level guarantee to prevent two holds for the same seat_id at the same time. The design relies on application logic to check for existing holds before inserting a new one, and at the same time it does not provide any high-concurrency guarantee.

If all you have in your toolbelt are those two columns, you will end up piling on complexity to make it work. You started with one problem; soon your application developers might ask you to add a caching layer (most likely an external K/V store) to place the holds there, and very soon you have N problems as you resort to building a complex, custom application

[...]
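To anticipate where the post is heading, a hedged sketch (column and constraint names are mine, not the article's) of the range-type alternative: a tstzrange hold period plus an exclusion constraint gives the database-level guarantee the two-column design lacks.

CREATE EXTENSION IF NOT EXISTS btree_gist;  -- lets = and && share one GiST constraint

CREATE TABLE seat_holds_ranged (
    hold_id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    seat_id INTEGER NOT NULL,
    user_session_id UUID NOT NULL,
    hold_period TSTZRANGE NOT NULL,
    EXCLUDE USING gist (seat_id WITH =, hold_period WITH &&)
);

-- a second, overlapping hold for the same seat now fails at the database level
INSERT INTO seat_holds_ranged (seat_id, user_session_id, hold_period)
VALUES (1, gen_random_uuid(), tstzrange(now(), now() + interval '5 minutes'));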

Contributions for week 44, 2025
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-11-02 at 13:32

PostgreSQL received attention through the following contributions at Data Stack Conf 2025 on Oct 29, 2025:

Speaker

  • Radoslav Stanoev
  • Pavlo Golub
  • Lætitia Avrot
  • Valeria Bogatyreva
  • Devrim Gündüz

PostgreSQL Booth Staff

  • Devrim Gündüz
  • Pavlo Golub

Gabriele Quaresima spoke at Cloud Native Bergen on Tuesday, October 28, 2025.

Community Blog Posts

Migration From MySQL To PostgreSQL In Five Steps Using DBeaver
Posted by Dave Stokes on 2025-11-01 at 15:21

 I wrote a post in my MySQL blog on migrating from MySQL to PostgreSQL using DBeaver. You can pass it along to your acquaintances who want to get off the Dolphin and on the Elephant. 

Not only will DBeaver move your tables and data, but you can compare them afterwards. In the post, I outline the process in five steps. DBeaver will let you do it in four. 

Please share this blog with your friends to promote PostgreSQL. 

Meeting High Availability Requirements in Non-Distributed PostgreSQL Deployments
Posted by Antony Pegg in pgEdge on 2025-10-31 at 18:25

High availability in PostgreSQL doesn't always require a globally distributed architecture. Sometimes you need reliable failover and replication within a single datacentre or region. pgEdge Enterprise Postgres handles this scenario with a production-ready PostgreSQL distribution that includes the tools you need for high availability out of the box.

What You Get

pgEdge Enterprise Postgres bundles PostgreSQL with components you'd otherwise install and configure separately. The distribution includes pgBouncer for connection pooling, pgBackRest for backup and restore, pgAudit for audit logging, and pgAdmin for database management. You also get PostGIS and pgVector if you need geographic or vector comparison capabilities.

High availability support is added through open-source PostgreSQL extensions from pgEdge (all available to work with on GitHub), like spock, lolor, and snowflake sequences.

The package supports PostgreSQL versions 16, 17, and 18 running on Red Hat Enterprise Linux v9 and v10 (including Rocky, Alma, and Oracle Enterprise) on both x86 and ARM architectures. You can currently deploy it as a VM, and we'll be adding container and managed cloud editions soon (keep an eye on our social channels for updates along the way).

High Availability Through Physical and Logical Replication

pgEdge Enterprise Postgres supports physical replication with automated failover for traditional high availability setups. This works for the standard scenario where you need a primary database with standby replicas ready to take over if the primary fails.

For more advanced scenarios, Spock comes bundled out-of-the-box to enable logical multi-master replication. Unlike physical replication that copies entire database clusters at the block level, Spock uses PostgreSQL's logical decoding to replicate individual table changes between nodes. This enables active-active deployments where multiple nodes can accept writes simultaneously.

Spock: Multi-Master Replication for High Availability

Spock provides multi[...]

Don't give Postgres too much memory
Posted by Tomas Vondra on 2025-10-31 at 13:00

From time to time I get to investigate issues with some sort of a batch process. It’s getting more and more common that such processes use very high memory limits (maintenance_work_mem and work_mem). I suppose some DBAs follow the logic that “more is better”, not realizing it can hurt the performance quite a bit.

Let me demonstrate this using an example I ran across while testing a fix for parallel builds of GIN indexes. The bug is not particularly interesting or complex, but it required a fairly high value for maintenance_work_mem (the initial report used 20GB).

What Are “Dirty Pages” in PostgreSQL?
Posted by Umair Shahid in Stormatics on 2025-10-31 at 07:52

PostgreSQL stores data in fixed‑size blocks (pages), normally 8 KB. When a client updates or inserts data, PostgreSQL does not immediately write those changes to disk. Instead, it loads the affected page into shared memory (shared buffers), makes the modification there, and marks the page as dirty. A “dirty page” means the version of that page in memory is newer than the on‑disk copy.

Before returning control to the client, PostgreSQL records the change in the Write‑Ahead Log (WAL), ensuring durability even if the database crashes. However, the actual table file isn’t updated until a checkpoint or background writer flushes the dirty page to disk. Dirty pages accumulate in memory until they are flushed by one of three mechanisms:

  • Background Writer (BGWriter) – a daemon process that continuously writes dirty pages to disk when the number of clean buffers gets low.
  • Checkpointer – periodically flushes all dirty pages at a checkpoint (e.g., checkpoint_timeout interval or WAL exceeds max_wal_size).
  • Backend processes – in emergency situations (e.g., shared buffers are full of dirty pages), regular back‑end processes will write dirty pages themselves, potentially stalling user queries.

Understanding and controlling how and when these dirty pages are flushed is key to good PostgreSQL performance.
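As a small, hedged illustration of watching these flush paths (view names per the cumulative statistics system; on PostgreSQL 17+ the checkpointer has its own view, and backend writes moved to pg_stat_io in PostgreSQL 16+):

SELECT buffers_clean, maxwritten_clean, buffers_alloc
FROM pg_stat_bgwriter;                     -- pages flushed by the background writer

SELECT num_timed, num_requested, buffers_written
FROM pg_stat_checkpointer;                 -- checkpoint activity (PostgreSQL 17+)

SELECT backend_type, context, writes
FROM pg_stat_io
WHERE backend_type = 'client backend' AND object = 'relation';  -- backend-initiated flushes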

Why Dirty Pages Matter

Dirty pages affect performance in multiple ways:

  1. I/O Spikes During Checkpoints: When a checkpoint occurs, every dirty page must be flushed to disk. If many pages are dirty, this flush can produce huge I/O spikes that degrade performance for other queries. The checkpoint_timeout, checkpoint_completion_target, and max_wal_size parameters control how often and how aggressively checkpoints flush dirty pages.
  2. Backend Writes: When the shared buffer cache fills up with dirty pages and the BGWriter can’t keep up, bac
[...]

#PostgresMarathon 2-011: Prepared statements and partitioned tables — the paradox, part 3
Posted by Nikolay Samokhvalov in Postgres.ai on 2025-10-30 at 23:59

In #PostgresMarathon 2-009 and #PostgresMarathon 2-010, we explored why execution 6 causes a lock explosion when building a generic plan for partitioned tables — the planner must lock all 52 relations because it can't prune without parameter values.

Today we'll test what actually happens with different plan_cache_mode settings.

Let's test empirically on Postgres 18 with the 12-partition table from the previous posts. This time, we'll insert 1M rows into each partition to get realistic query plans with index scans:

-- Insert 1M rows into each partition
do $$
declare
  i int;
  start_date date;
  partition_name text;
begin
  for i in 0..11 loop
    start_date := '2024-01-01'::date + make_interval(months => i);
    partition_name := 'events_' || to_char(start_date, 'YYYY_MM');

    execute format(
      'insert into %I (event_id, event_time, event_data)
       select s, %L::timestamptz + (s * interval ''1 second''), ''data_'' || s
       from generate_series(1, 1000000) s',
      partition_name, start_date
    );
  end loop;
end $$;

vacuum analyze events;

Now let's see what different plan_cache_mode settings do.

Test 1: auto mode

As we already saw, with the default auto mode setting, Postgres decides whether to use a custom or generic plan based on a cost comparison. For the first 5 runs it uses a custom plan; then, on the important 6th call, it builds the generic plan (and we get the "lock explosion" we already studied); and from the 7th call onward it uses the generic plan.

But that was on empty tables. Here things will be different, because we have data. Let's see.

As before, prepare a statement and execute it multiple times, observing the number of relation-level locks and plan type counters:

prepare test (timestamptz) as
select event_id, event_data from events where event_time = $1;

-- Run this snippet to test each execution
begin;
explain (verbose) execute test(timestamptz '2024-06-06 00:00:00+00');

select
  count(*) as lock_count,
  array_agg(
    distinct relation::reg
[...]
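For reference, the knob being tested (a hedged aside, not quoted from the post): plan_cache_mode can be forced per session to compare the two plan types directly.

set plan_cache_mode = 'force_custom_plan';   -- always replan with the actual parameter values
set plan_cache_mode = 'force_generic_plan';  -- build and reuse the generic plan immediately
reset plan_cache_mode;                       -- back to the default 'auto' cost comparison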

Install PostgreSQL 18 on Ubuntu 25.10
Posted by Paolo Melchiorre in ITPUG on 2025-10-30 at 23:00

How-to guide for installing PostgreSQL version 18 on Ubuntu, after a fresh installation of version 25.10 (Questing Quokka).

#PostgresMarathon 2-010: Prepared statements and partitioned table lock explosion, part 2
Posted by Nikolay Samokhvalov in Postgres.ai on 2025-10-29 at 23:59

In #PostgresMarathon 2-009, we focused on Lock Manager's behavior when dealing with prepared statements and partitioned tables.

And observed a lock explosion in our simple synthetic example: from 8 locks (custom plans) during the first 5 calls, to 52 locks (building the generic plan) on the 6th call, to 13 locks (using the cached generic plan) on the 7th and subsequent calls. We were left with questions:

  • this lock explosion at the 6th call – why is it so exactly and can it be avoided?
  • why do we lock all 12 partitions even though runtime pruning removes 11 of them?

Let's dig deeper.

The 6th call: why 52 locks?

In #PostgresMarathon 2-008, we studied the code flow for unpartitioned tables. The same pattern applies here, but with a critical difference: while for the first 5 calls we had very efficient planning-time partition pruning, it is not used during generic plan building in the 6th call.

Let's trace the execution (using PG18 sources; // comments are mine):

Step 1: Acquire planner locks

GetCachedPlan() starts by locking the Query tree via AcquirePlannerLocks():

AcquirePlannerLocks(plansource->query_list, true);

The Query tree (parser output) contains only the parent table reference. Result -- 4 locks acquired (parent table + 3 parent indexes). Partition locks will be acquired later during planning.

Step 2: plan type decision

Function choose_custom_plan() decides whether to use custom or generic plan:

if (plansource->num_custom_plans < 5)
    return true;    // NOT taken (num_custom_plans = 5)

if (plansource->generic_cost < avg_custom_cost)
    return false;   // TAKEN (generic_cost = -1, meaning "not yet calculated")

Result here: use generic plan.

Step 3: build generic plan

Since no cached plan exists yet (CheckCachedPlan() returns false), we build one:

plan = BuildCachedPlan(plansource, qlist, NULL, queryEnv);
// ^^^^ NULL = no bound parameters

Insi

[...]

Waiting for PostgreSQL 19 – Add psql PROMPT variable for search_path.
Posted by Hubert 'depesz' Lubaczewski on 2025-10-29 at 11:59
On 28th of October 2025, Nathan Bossart committed patch: Add psql PROMPT variable for search_path.   The new %S substitution shows the current value of search_path. Note that this only works when connected to Postgres v18 or newer, since search_path was first marked as GUC_REPORT in commit 28a1121fd9. On older versions that don't report search_path, … Continue reading "Waiting for PostgreSQL 19 – Add psql PROMPT variable for search_path."
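For illustration, a hedged snippet (the prompt string is just an example; only the %S substitution is the new, documented part):

\set PROMPT1 '%n@%/ [%S] %R%# '
-- prompt now looks something like: postgres@testdb ["$user", public] =#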

Head of Marketing experiences @PGConf.EU 2025 – A Riga Recap
Posted by Cornelia Biacsics in Cybertec on 2025-10-29 at 06:00

PostgreSQL Conference Europe 2025 in Riga has officially come to an end — and what remains are the impressions, emotions, moments, and the incredible vibe that remind me why we do what we do.

I met so many inspiring people, had conversations with some of my role models, and wore many hats throughout the conference.

Preparations and Looking Ahead

PostgreSQL Conference Europe is an annual event — and funnily enough, it was also the last time I had used my large suitcase for a trip abroad. So, in a way, this year’s conference experience began a day before I even left Austria, when I pulled out that same suitcase and noticed the old airport baggage tag still attached from PGConf.EU 2024 in Athens. That small detail instantly brought back a wave of memories — and the excitement for Riga began right there.

suitcase_pgconf.eu

This time, I was traveling with two large bags — one for my personal items (six days in a country colder than Austria!) and one filled with CYBERTEC marketing swag, booth material, and a roll-up banner on my shoulder.

suitcase2_pgconf.eu

At the Vienna (Schwechat) departure gate, I met Pavlo Golub and Svitlana Lytvynenko, who were on the same flight. Other colleagues departed from different parts of the world — CYBERTEC truly is a global team. In total, 20 colleagues attended PGConf.EU 2025, and I was looking forward to reconnecting in person — without plastic screens between us. Which is true for all the other people in the community I got to know over the past years - it’s quite a challenge to catch up with everyone in just a few days!

CYBERTEC team_pgconf.eu

Arrival in Riga

We flew with Air Baltic, and apart from one small scare — a colleague’s plane experienced an engine failure right on the runway before takeoff — everyone eventually made it safely to Riga.

My own travel day started early: I left home at 5 am., and by the time I arrived at the conference hotel around noon, I already felt like I’d been on the move forever. (For context, Riga is on GMT+3, while Austria is GMT+2 in summer — so there was a bit

[...]

#PostgresMarathon 2-009: Prepared statements and partitioned table lock explosion, part 1
Posted by Nikolay Samokhvalov in Postgres.ai on 2025-10-28 at 05:00

In #PostgresMarathon 2-008, we discovered that prepared statements can dramatically reduce LWLock:LockManager contention by switching from planner locks (which lock everything) to executor locks (which lock only what's actually used). Starting with execution 7, we saw locks drop from 6 (table + 5 indexes) to just 1 (table only).

There we tested only a simple, unpartitioned table. What happens if the table is partitioned?

The following was tested on Postgres 18.0 with default settings:

  • enable_partition_pruning = on
  • plan_cache_mode = auto

Postgres behavior in this field might change in the future — there are WIP patches optimizing performance.

Let's create a simple partitioned table with multiple partitions:

create table events (
  event_id bigint,
  event_time timestamptz,
  event_data text
) partition by range (event_time);

-- Create 12 monthly partitions
do $$
declare
  i int;
  start_date date;
  end_date date;
begin
  for i in 0..11 loop
    start_date := '2024-01-01'::date + make_interval(months => i);
    end_date := start_date + interval '1 month';

    execute format(
      'create table events_%s partition of events for values from (%L) to (%L)',
      to_char(start_date, 'YYYY_MM'),
      start_date,
      end_date
    );
  end loop;
end $$;

Result:

test=# \d+ events
                                             Partitioned table "public.events"
   Column   |           Type           | Collation | Nullable | Default | Storage  | Compression | Stats target | Description
------------+--------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
 event_id   | bigint                   |           |          |         | plain    |             |              |
 event_time | timestamp with time zone |           |          |         | plain    |             |              |
 event_data | text                     |           |          |         | extended |             |              |
Partition key: RANGE (event_
[...]

The Future of CYBERTEC and PostgreSQL
Posted by Hans-Juergen Schoenig in Cybertec on 2025-10-28 at 05:00

For this second entry into our blog feature to celebrate 25 years of CYBERTEC, our CEO gave some interesting insights into what he expects of the future - for the company as well as PostgreSQL as a whole.

CYBERTEC in another 25 years

Interviewer: Let's talk about the future. Where do you think you see CYBERTEC in another 25 years? 

Hans: If I would make a prediction, it would be wrong for sure, but certainly… larger. Certainly, if this is even possible at this point, more international, and definitely the same technology, I would say, but with more ecosystem around it.

More locations, hopefully not more processes. Hopefully similar vibes and hopefully in the same place - physically, I mean.

Interviewer: Alright, so moving away from CYBERTEC a little, where do you see yourself personally in 25 years?

Hans: I don't know. Hopefully alive. [laughs] 

Interviewer: That's a great goal to have. [laughs] 

Hans: In 25 years I’m 72. I mean, hopefully alive, right?

Interviewer: Yeah. 

Hans: I don't know. Wouldn't it be damn boring if you would know?

Interviewer: That's true. 

Hans: Why would you wanna know? I mean, I believe that if you wanna know exactly what's gonna happen in 20 years, you have to become a bureaucrat. Otherwise, you don't want to know, I believe. I believe you don't. Even if everything is good, you don't wanna know because that would destroy the fun, I believe. 

Interviewer: Okay. Then, looking at a closer future, what would you still like to achieve?

Hans: It's not about achievement. It's not about achievement because achievement is an ego thing. “If I do this, I will be more happy,” whatever. No, achievement, as in big philosophy, I'm not sure if it's about achievement, it's about doing the right thing. Is doing the right thing an achievement? I don’t know, that's quite philosophical, I would say. 

But doing the right thing, I don't think that's an achievement. That's something you should do, you know? It’s not about achievements, like: “I ne

[...]

Explaining IPC:SyncRep – Postgres Sync Replication is Not Actually Sync Replication
Posted by Jeremy Schneider on 2025-10-27 at 23:12

Postgres database-level “synchronous replication” does not actually mean the replication is synchronous. It’s a bit of a lie really. The replication is actually – always – asynchronous. What it actually means is “when the client issues a COMMIT then pause until we know the transaction is replicated.” In fact the primary writer database doesn’t need to wait for the replicas to catch up UNTIL the client issues a COMMIT …and even then it’s only a single individual connection which waits. This has many interesting properties.

One benefit is throughput and performance. It means that much of the database workload is actually asynchronous – which tends to work pretty well. The replication stream operates in parallel to the primary workload.

But an interesting drawback is that you can get into a situation where the primary speeds ahead of the replica quite a bit before that COMMIT statement hits, and then the specific client who issued the COMMIT will need to sit and wait for a while. It also means that bulk operations like pg_repack or VACUUM FULL or REFRESH MATERIALIZED VIEW or COPY do not have anything to throttle them. They will generate WAL basically as fast as it can be written to the local disk. In the meantime, everybody else on the system will see their COMMIT operations start to exhibit dramatic hangs and will see apparent sudden performance drops – while they wait for their commit record to eventually get replicated by a lagging replication stream. It can be non-obvious that this performance degradation is completely unrelated to the queries that appear to be slowing down. This is the infamous IPC:SyncRep wait event.
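As a hedged, practical aside (not from the post): both the stall and the lag it describes can be observed directly from the statistics views.

-- sessions currently stuck waiting for the synchronous standby to confirm
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'IPC' AND wait_event = 'SyncRep';

-- how far each standby trails the primary, in bytes of WAL
SELECT application_name, sync_state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;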

Another drawback: as the replication stream begins to lag, the amount of disk needed for WAL storage balloons. This makes it challenging to predict the required size of a dedicated volume for WAL. A system might seem to have lots of headroom, and then a pg_repack on a large table might fill the WAL volume without warning.

This is a bit different from storage-level synchronous re

[...]

Troubleshooting PostgreSQL Logical Replication, Working with LSNs
Posted by Robert Bernier in Percona on 2025-10-27 at 14:10
PostgreSQL logical replication adoption is becoming more popular as significant advances continue to expand its range of capabilities.  While quite a few blogs have described features, there seems to be a lack of simple and straightforward advice on restoring stalled replication. This blog demonstrates an extremely powerful approach to resolving replication problems using the Log […]

Slonik on the Catwalk: PGConf.EU 2025 Recap
Posted by Mayur B. on 2025-10-27 at 11:53

I volunteered as a room host and Slonik guide.
Best gig: posing our elephant. The photographer had runway-level ideas. Slonik delivered every single time.

Slonik modelling session
Slonik having a Diva moment

Community Day ~ people > hype

  • PostgreSQL & AI Summit: I sat on the panel and played “Team Human” vs Skynet (As advised by John Connor in the future).
    postgresql.eu
  • “Establishing the PostgreSQL standard: What’s Postgres compatible?”
    Half-day workshop with a lot of brainstorming and discussion: we split into groups, and each group presented its conclusion on what makes Postgres derivatives compatible with community Postgres. We spun up a Telegram group to keep building the rubric post-conference. postgresql.eu

The Hallway Track

Coffee with CYBERTEC (meeting Laurenz Albe)

  • Picked the “Coffee with CYBERTEC” option to meet Laurenz Albe, the most prolific Stack Overflow answerer.
  • We traded notes on most popular features to adopt from other databases, their feasibility, and why Postgres avoided them historically.

I left with a starting map for contributing to core.

Talks I caught (and why they stuck)

“Don’t do that!” — Laurenz Albe
A rapid-fire list of Postgres anti-patterns. Simple, blunt, useful from the most beloved speaker of the conference. (postgresql.eu)

Parsing Postgres logs the non-pgBadger way — Kaarel Moppel
Meet pgweasel. Lean CLI. Fast. Cloud-friendly. For prod support, less noise beats glossy graphs. I’m convinced. (postgresql.eu)
https://github.com/kmoppel/pgweasel

Improved freezing in VACUUM — Melanie Plageman
Cleaner anti-wraparound story. Scan all-visible (not all-frozen) pages early; fewer emergency freezes later. Sensible, much nee

[...]

Trigger Happy: Live edits in QGIS
Posted by Rhys Stewart on 2025-10-27 at 10:00
QGIS and PostgreSQL working well together

Returning Multiple Rows with Postgres Extensions
Posted by Shaun Thomas in pgEdge on 2025-10-27 at 05:09

Creating an extension for Postgres is an experience worthy of immense satisfaction. You get to contribute to the extension ecosystem while providing valuable functionality to other Postgres users. It's also an incredibly challenging exercise in many ways, so we're glad you've returned to learn a bit more about building Postgres extensions.

In the previous article in this series, we discussed creating an extension to block DDL. That sample extension was admittedly fairly trivial, in that it only added a single configuration parameter and utilized one callback hook. A more complete extension would provide a function or view to Postgres so users could interact with the extension itself. So let's do just that!

Deciding on Data

Once again we're faced with choosing a topic for the extension. Users will sometimes ask the question: how much memory is Postgres using? Using various tools like top or ps will show a fairly inaccurate picture of this, limiting results to opaque fields like VIRT, RES, SHR, PSS, RSS, and others. Some are aggregates, others include Postgres shared buffers, and none really describe how memory is being used.

Luckily on Linux systems, there's an incredibly handy /proc filesystem that provides a plethora of information about several process metrics, including memory. The smaps file in particular reports several process-specific memory categories, and does so on a per-allocation basis. What if we could parse that file for every Postgres backend process and return output in a table? Admins could then see exactly which user sessions or worker processes are using the most memory and why, rather than an imprecise virtual or resident memory summary.

Sounds interesting!

Starting the Extension

As with our previous extension, we need to bootstrap the project with a few files. Start with creating the project folder:

And create a  file with these contents:

As before, we just need to name the extension, give it a version, and provide an installation path for the resulting library file. Nothing surprising [...]

Contributions for week 43, 2025
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-10-26 at 20:20

In October 2025, PostgreSQL Conference Europe brought the community together in Riga, Latvia from the 21st to the 24th.

Organizers

  • Andreas Scherbaum
  • Chris Ellis
  • Dave Page
  • Ilya Kosmodemiansky
  • Jimmy Angelakos
  • Karen Jex
  • Magnus Hagander
  • Marc Linster
  • Samed Yildirim
  • Valeria Kaplan

Talk selection committee

  • Karen Jex (non voting chair)

Application Developer and Community Subcommittee

  • Dian Fay
  • Hari P Kiran
  • Johannes Paul
  • Kai Wagner

Postgres Internals and DBA Subcommittee

  • Anastasia Lubennikova
  • Dirk Krautschick
  • Samed Yildirim
  • Teresa Lopes

Code of Conduct Committee

  • Floor Drees
  • Pavlo Golub
  • Celeste Horgan

Community Events Day

Extensions Showcase - Organizers

  • Alastair Turner
  • David Wheeler
  • Floor Drees
  • Yurii Rashkovskii

PostgreSQL on Kubernetes Summit - Organizers

  • Floor Drees
  • Gülçin Yıldırım Jelinek

Speakers

  • Gülçin Yıldırım Jelínek
  • Floor Drees
  • Oleksii Kliukin
  • Polina Bungina
  • Thomas Boussekey
  • Pavel Lukasenko
  • Danish Khan
  • Esther Minano
  • Gabriele Bartolini
  • Andrew Farries
  • Jonathan Gonzalez V.
  • Jonathan Battiato

Establishing the PostgreSQL standard: What's Postgres compatible? - Organizers

  • Jimmy Angelakos
  • Henrietta Dombrovskaya
  • Boriss Mejías

Community Summit – PGConf EU Edition - Organizers

  • Teresa Giacomini
  • Andreas Scherbaum

Crafty Slonik - Organized by

  • Chris Ellis

PostgreSQL & AI Summit - Organizers

  • Gülçin Yıldırım Jelinek
  • Ellyne Phneah
  • Torsten Steinbach

Speakers

  • Ellyne Phneah
  • Marc Linster
  • Bruce Momjian
  • Torsten Steinbach
  • Gulcin Yildirim Jelinek
  • Miguel Toscano
  • Luigi Nardi
  • Bertrand Hartwig-Peillon
  • Daniel Krefl
  • Adrien Obernesser
  • Jimmy Angelakos
  • Mohsin Ejaz
  • Gleb Otochkin
  • Erik Hellsten

Panel Discussion – Moderator: Torsten Steinbach, Panelist: Gul

[...]
