Latest Blog Posts

Waiting for PostgreSQL 19 – Add GROUP BY ALL.
Posted by Hubert 'depesz' Lubaczewski on 2025-10-02 at 06:56
On 29th of September 2025, Tom Lane committed patch: Add GROUP BY ALL.   GROUP BY ALL is a form of GROUP BY that adds any TargetExpr that does not contain an aggregate or window function into the groupClause of the query, making it exactly equivalent to specifying those same expressions in an explicit GROUP … Continue reading "Waiting for PostgreSQL 19 – Add GROUP BY ALL."
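As a quick illustration of what the commit adds, here is a minimal sketch (table and column names are hypothetical):

SELECT region, product, sum(amount) AS total
FROM sales
GROUP BY ALL;
-- equivalent to writing GROUP BY region, product explicitly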

Pipelining in psql (PostgreSQL 18)
Posted by Daniel Vérité on 2025-10-01 at 13:50
The psql client in version 18 comes with pipelining, which can speed up client-server communication. In this post, let's see how it works and how much query throughput can be gained, using a simple example.
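For a rough idea of the workflow before reading the post: psql 18 drives pipelines through dedicated meta-commands, roughly as sketched below (meta-command usage as I understand the psql 18 documentation; the table t is hypothetical, and queries in a pipeline must go through the extended protocol, hence \bind):

\startpipeline
INSERT INTO t (v) VALUES (1) \bind \sendpipeline
INSERT INTO t (v) VALUES (2) \bind \sendpipeline
\endpipeline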

PostgreSQL 18 Old & New
Posted by Dave Stokes on 2025-09-30 at 12:25

 Learning Structured Query Language can be frustrating when double-checking that what you wanted to have done is actually what was done. PostgreSQL 18 has 'OLD and NEW support for RETURNING clauses in INSERT, UPDATE, DELETE, and MERGE commands'. Now you can get instant feedback. 

The addition of the RETURNING clause in the previous version made MERGE much easier to use. Now it makes other commands easier.

To demonstrate, let's create a table with one column that is designated as a unique, primary key integer and insert a value.

create table foo (a int unique primary key);


insert into foo (a) values (1);

Now is the point where many of us would run a SELECT a FROM foo, just to double check that indeed there is a 1 in column a.

There is now an option in PG 18 to use RETURNING, and it provides the previous 'old' value of a column along with the 'new' value.

insert into foo (a) values (2) returning old.a, new.a;

a|a|
-+-+
 |2|

The 'old' value was a NULL, and the new value is 2.

I suggest adding an alias to make things more explicit.

insert into foo (a) values (3) returning old.a as "Old", new.a as "New";
Old|New|
---+---+
   |  3|

Upserts

As I mentioned earlier, sometimes doubt creeps into your mind when you enter or update data.
RETURNING also works with upserts (thanks to the brilliant Magnus Hagander, who pointed this out in his presentation at the PG NYC 2025 show; his slides are available here).

In the following example, we have a conflict because there is already a '2' in the table, so the
ON CONFLICT path is taken. Using RETURNING in this case takes away any doubt of what happened when
this query was executed.

insert into foo(a) values (2)
on conflict(a) do update
set a = 99
returning old.a as old, new.a as update;

old|update|
---+------+
[...]

How to do UPDATE ... LIMIT in PostgreSQL
Posted by Laurenz Albe in Cybertec on 2025-09-30 at 06:00
Dolphin to elephant: "MySQL has got UPDATE ... LIMIT, and PostgreSQL doesn't." - Elephant to dolphin: "But on the other hand, there is no limit to what you can do with an UPDATE in PostgreSQL."
© Laurenz Albe 2025

If you are reading this hoping that PostgreSQL finally got UPDATE ... LIMIT like MySQL, I have to disappoint you. The LIMIT clause is not yet supported for DML statements in PostgreSQL. If you want to UPDATE only a limited number of rows, you have to use workarounds. This article will describe how to do this and how to avoid the pitfalls and race conditions you may encounter. Note that most of the following also applies to DELETE ... LIMIT!

An example for how we would like to do UPDATE ... LIMIT

The following table contains 100 categories of 100 entries each. Since we don't discuss persistence, an UNLOGGED table will do:

CREATE UNLOGGED TABLE upd (
   id bigint GENERATED ALWAYS AS IDENTITY,
   category integer NOT NULL,
   flag boolean DEFAULT FALSE NOT NULL
);

INSERT INTO upd (category)
SELECT (i - 1) / 100
FROM generate_series(1, 100000) AS i;

/* set hint bits, gather optimizer statistics */
VACUUM (ANALYZE) upd;

/* created after INSERT for performance */
ALTER TABLE upd ADD PRIMARY KEY (id);

CREATE INDEX ON upd (category);

Now we would like to set the flag on a single row with category 42:

UPDATE upd
SET flag = TRUE
WHERE category = 42
LIMIT 1;

But alas, we cannot do that. We'll have to look for other ways to achieve our goal.

Attempt to solve UPDATE ... LIMIT with a simple subquery

Since there is a LIMIT clause for SELECT, we can use a subquery as follows:

UPDATE upd AS u
SET flag = TRUE
WHERE u.category = 42
  AND u.id IN (SELECT id FROM upd AS u2
               WHERE u2.category = u.category
               LIMIT 1);

This query works as intended if nobody else is using the table. But it can fail to work as expected in the face of concurrent data modifications:

BEGIN;

DELETE FROM upd WHERE id = 4201;

Now run our proposed UPDATE statement in a concurrent session, and it will hang. If you then COMMIT the DELETE, our UPDATE statement will update nothing:

UPDATE 0

How did this happen? It is true that the entire query sees the same state

[...]
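(Not part of the excerpt above: one widely used, race-aware workaround is to lock the chosen row with a locking clause in the subquery. A minimal sketch, using SKIP LOCKED so concurrent sessions pick different rows; the full article discusses the trade-offs in detail:)

UPDATE upd
SET flag = TRUE
WHERE id = (SELECT id
            FROM upd
            WHERE category = 42
            LIMIT 1
            FOR NO KEY UPDATE SKIP LOCKED);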

Going down the rabbit hole of Postgres 18 features
Posted by Tudor Golubenco in Xata on 2025-09-29 at 07:00
A comprehensive list of PostgreSQL 18 new features, performance optimizations, operational and observability improvements, and new tools for devs.

Losing Data is Harder Than I Expected
Posted by Jeremy Schneider on 2025-09-29 at 06:33

This is a follow‑up to the last article: Run Jepsen against CloudNativePG to see sync replication prevent data loss. In that post, we set up a Jepsen lab to make data loss visible when synchronous replication was disabled — and to show that enabling synchronous replication prevents it under crash‑induced failovers.

Since then, I’ve been trying to make data loss happen more reliably in the “async” configuration so students can observe it on their own hardware and in the cloud. Along the way, I learned that losing data on purpose is trickier than I expected.


Methodology and a Kubernetes caveat

To simulate an abrupt primary crash, the lab uses a forced pod deletion, which is effectively a kill -9 for Postgres:

kubectl delete pod -l role=primary --grace-period=0 --force --wait=false

This mirrors the very first sanity check I used to run on Oracle RAC clusters about 15 years ago: “unplug the server.” It isn’t a perfect simulation, but it’s a simple, repeatable crash model that’s easy to reason about.

I should note that the label role is deprecated by CNPG and will be removed. I originally used it for brevity, but I will update the labs and scripts to use the label cnpg.io/instanceRole instead.

After publishing my original blog post, someone pointed out an important Kubernetes caveat with forced deletions:

Irrespective of whether a force deletion is successful in killing a Pod, it will immediately free up the name from the apiserver. This would let the StatefulSet controller create a replacement Pod with that same identity; this can lead to the duplication of a still-running Pod

https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/

This caveat would apply to the CNPG controller just like a StatefulSet controller. In practice, for my tests, this caveat did not undermine the goal of demonstrating that synchronous replication prevents data loss. The lab includes an automation script (Exercise 3) to run the 5‑minute Jepsen test in a loop for ma

[...]

PostgreSQL 18 and beyond: From AIO to Direct IO?
Posted by Hans-Juergen Schoenig in Cybertec on 2025-09-29 at 06:00

PostgreSQL 18 is such an exceptional piece of software - it is hard to imagine anything better, right? I remember a similar thought when PostgreSQL 7 was released (“wow, how cool is that - this is it”). Well, let me put it mildly: I was wrong, and things are a lot better than they used to be. So, obviously the idea that PostgreSQL 18 is as far as humanity can go is also wrong. But what might be next?

Synchronous IO, Asynchronous I/O, Direct I/O?

The question is: While a major step forward has been made in PostgreSQL 18, and an important milestone has been achieved … is there anything out there that is even cooler? Something we might see in a couple of years? Well, maybe: I am of course talking about “Direct I/O”.

What is Direct I/O?

Under normal circumstances, a read works like this:

  • Application requests some data
  • Kernel accepts the read
    • does sanity checks
    • performs a cache lookup
      • if needed, data is read from disk
      • otherwise, the filesystem cache serves the data
  • Application receives an OK

Sounds good? Well, it is, and in most cases this is exactly what we want. However, there is an additional method: Direct I/O. What it does can be summed up in one sentence: Direct I/O bypasses the OS page cache layer.

But let us take a look in more detail: The good part is that it skips the overhead and scalability limitations. The bad part is that it skips the services that layer provi

[...]

PgPedia Week, 2025-09-28
Posted by Ian Barwick on 2025-09-29 at 03:37

PostgreSQL 18 was released on schedule this week!

PostgreSQL 18 articles

  • Exploring PostgreSQL 18 – Part 1: The COPY Command Gets More User-Friendly (2025-09-28) - Deepak Mahto
  • Postgres 18: OLD and NEW Rows in the RETURNING Clause (2025-09-26) - Brandur Leach / Crunchy Data
  • PostgreSQL: "UPDATE … RETURNING" made even better (2025-09-25) - Hans-Jürgen Schönig / CYBERTEC
  • PostgreSQL 18: part 5 or CommitFest 2025-03 (2025-09-25) - Pavel Luzanov / PostgresPro
  • Parallel Apply of Large Transactions (2025-09-24) - Amit Kapila
  • Celebrating the PostgreSQL 18 Release (2025-09-24) - EnterpriseDB
  • Tuning AIO in PostgreSQL 18 (2025-09-24) - Tomas Vondra
  • Changes to NOT NULL in Postgres 18 (2025-09-23) - Álvaro Herrera / EnterpriseDB

PostgreSQL 19 changes this week

Development continues apace, but no user-visible features were committed this week. See the Commits of Interest list below for more details on changes.

more...

Cumulative Statistics in PostgreSQL 18
Posted by Cédric Villemain in Data Bene on 2025-09-29 at 00:00

In PostgreSQL 18, the statistics & monitoring subsystem receives a
significant overhaul - extended cumulative statistics, new per-backend I/O
visibility, the ability for extensions to export / import / adjust statistics,
and improvements to GUC controls and snapshot / caching behavior. These changes
open new doors for performance analysis, cross‑environment simulation, and
tighter integration with extensions. In this article I explore what’s new, what
to watch out for, Grand Unified Configuration (GUC) knobs, and how extension
authors can leverage the new C API surface.
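As a small taste of the per-backend I/O visibility mentioned above: the PostgreSQL 18 release notes describe a pg_stat_get_backend_io() function that reports pg_stat_io-style counters for a single backend. A minimal sketch, inspecting the current session (column set per the shipped definition):

SELECT *
FROM pg_stat_get_backend_io(pg_backend_pid());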

Introduction & motivation

Statistics (in the broad sense: monitoring counters, I/O metrics, and planner /
optimizer estimates) lie at the heart of both performance tuning and internal
decision making in PostgreSQL. Transparent, reliable, and manipulable
statistics, among other things, allow DBAs to address the efficiency of
PostgreSQL directly, as well as enable “extensions” to improve the user
experience.

That said, the historic statistics system of PostgreSQL has not been without
points of friction. These include limited ability to clear (relations)
statistics, metrics with units that don’t always align with user goals, and no C
API for using the PostgreSQL Cumulative Stats engine. PostgreSQL 18 addresses
these concerns head on.

Below is a summary of the key enhancements.

A warning on stats

While statistics offer incredible value, their collection can take up
significant time and resources. PostgreSQL 18 introduces an important
consideration: with the expanded range of collectible metrics, the maximum size
of the statistics hash table has been increased. Do keep in mind, especially if
you're designing large-scale systems with table-per-customer architectures, that
the 1GB ceiling has been shown to be hit with a few million tables.

What’s new with PostgreSQL 18 and “stats”

Here are the major new or improved features relating to statistics and
monitoring. Each item links to the relevant

[...]

PgPedia Week, 2025-09-21
Posted by Ian Barwick on 2025-09-28 at 21:01
PostgreSQL 19 changes this week

  • encode() / decode(): support for base64url format added
  • extension_control_path: only the first extension name found in the path will be shown by extension availability views
  • logical_replication: conflict-relevant data retention will now be automatically resumed
  • pg_replication_origin_session_setup(): parameter pid added
  • function lookup failures will now have more specific error details/hints provided

PostgreSQL 19 articles

  • Waiting for PostgreSQL 19 – Add date and timestamp variants of random(min, max). (2025-09-17) - Hubert 'depesz' Lubaczewski

PostgreSQL 18 articles

  • PostgreSQL 18: Better I/O performance with AIO (2025-09-19) - Hans-Jürgen Schönig / CYBERTEC
  • Preview PostgreSQL 18's OAuth2 Authentication (3) - Enhancing a PostgreSQL Client Library to Speak OAUTHBEARER (2025-09-19) - Guang Yi Xu / EnterpriseDB
  • Preview PostgreSQL 18's OAuth2 Authentication (2) - Building a Custom OAuth2 Validator by Rust (2025-09-17) - Guang Yi Xu / EnterpriseDB
  • Preview PostgreSQL 18's OAuth2 Authentication (1) - Explore How it Works (2025-09-15) - Guang Yi Xu / EnterpriseDB

more...

PostgreSQL 18 Release Notes
Posted by Dave Stokes on 2025-09-28 at 12:58

 The PostgreSQL 18 Release Notes are like a great noir novel, full of surprises and intrigue. But we know who did it - the amazing PostgreSQL community.

If you have not perused the Notes, I advise you to do so now. They contain many of what I call 'Wow!' level items, such as the redesigned I/O subsystem, skip scan lookups, virtual generated columns, and more. The new uuidv7() function and temporal constraints are a significant leap forward. The OLD and NEW support for RETURNING will help lessen the learning curve for many who are new to SQL.

But go down to the seemingly more mundane items. Removing redundant self-joins, letting merge joins use incremental sorts, and internally reordering the keys of SELECT DISTINCT to avoid sorting are the types of improvements that are not as attention-getting as the items in the preceding paragraph, but they will make life much nicer for us.

Each release of PostgreSQL is faster than the previous one, and early reports indicate that this has happened again. This is an astounding delivery for those of us with backgrounds in other databases. All too often, new features came with a trade-off in performance that relegated upgrades to the back burner.

Please read the Acknowledgments. This is a list of the good folks who have worked very hard on our behalf to deliver an improved version of PostgreSQL. And they are already working on 19! If you run into any of them, please give them a big 'thank you'.  

Exploring PostgreSQL 18: A Developer’s Guide to New Features – Part 1: The COPY Command Gets More User-Friendly
Posted by Deepak Mahto on 2025-09-28 at 06:39

PostgreSQL 18 was released on September 25, 2025, introducing a host of exciting new features across all major areas. In this blog series, we'll delve into these features and explore how they can benefit database developers and migration engineers transitioning to PostgreSQL.

Part 1 of the series focuses on enhancements to the COPY command, particularly for use cases involving loading external files into the database.

PostgreSQL 18 – Enhancements with the COPY Command

The COPY command is the default native option for loading flat files, primarily in text or CSV format, into PostgreSQL. If you are a data engineer or database developer looking for ways to load flat files into PostgreSQL, you must take care of quality checks and encapsulate a mechanism to report or discard failed records.

Until PostgreSQL 16, if quality checks were planned to be done on the database side, one option was to load flat files into staging tables with all columns as text or a generic data type, and later move legitimate and discarded data to the concerned tables. From PostgreSQL 17 onward, the COPY command provides options for handling data type incompatibilities and logging failed rows or records using the on_error option.

In PostgreSQL 18, this feature is further enhanced with the REJECT_LIMIT option. If the number of discarded or erroneous rows exceeds the reject limit, the COPY command is aborted as a whole.

PostgreSQL 18 – New REJECT_LIMIT option.

Let’s start by creating sample target tables and a CSV file with intentional data type mismatch issues. The sample table also has check and NOT NULL constraints defined on it.

CREATE TABLE copy18(
    col1 integer,
    col2 text DEFAULT 'x'::text,
    col3 text NOT NULL,
    col4 text,
    col5 boolean,
    col6 date,
    col7 timestamp without time zone,
    CONSTRAINT copy18_col4_check CHECK (length(col4) > 2)
);

The CSV file contains data type incompatibilities in the following lines:

  • Timestamp (4th line)
  • Integer (5th line)
  • Boolean type (7t
[...]
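(Not shown in the truncated excerpt: a minimal sketch of how REJECT_LIMIT combines with the ON_ERROR handling introduced in PostgreSQL 17; the file path and limit value are placeholders.)

COPY copy18 FROM '/tmp/copy18.csv'
WITH (FORMAT csv, ON_ERROR ignore, REJECT_LIMIT 5);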

How I learned to use wal_inspect
Posted by Henrietta Dombrovskaya on 2025-09-27 at 18:10

It has been a while since the last time I blogged about any real-life database problems and investigations. Here is one I am really happy about, because it took a while and it was not obvious, but I found the root cause of the problem!

The problem had been going on for months: the WAL file system of one of the Postgres databases was growing at an alarming speed, many times faster than the database itself. Not that we couldn't promptly remove the WALs, but this situation made any database restore a real nightmare. At first, I suspected long transactions with suboptimal backfill algorithms. There were, indeed, a lot of jobs running on a regular basis which could be improved; however, I noticed a couple of things.

First, when the users fixed one of the primary suspect jobs, the situation with WAL growth didn't change. Second, the rate of growth couldn't be explained by these suboptimal jobs: the data volumes they were removing and reloading were still orders of magnitude smaller than the WAL size we were dealing with. Finally, I decided to do what I should have done from the start – to take a look at what exactly was in these super-fast growing WALs.

Here is what I did:

create extension pg_walinspect;

select pg_current_wal_insert_lsn();  /* save the result: '17797/A167C8E8' */

select pg_current_wal_insert_lsn();  /* save the result: '17797/FEDE0CC8' */

select * from pg_get_wal_stats('17797/A167C8E8', '17797/FEDE0CC8', true)
where count > 0;

To my amazement, I realized that 97% of the WAL stored information about locks! To be honest, I didn't even know that locks are recorded in the WAL files, so I am thankful to the gracious Postgres community for enlightening me!

Now that I knew where to look, I ran the following:

select * from pg_get_wal_records_info('17797/A167C8E8', '17797/FEDE0CC8')
where resource_manager='Heap'
and record_type='LOCK'


Next, I found the table which had all these locks:

select * from pg_class where oid =10229951

When I fo

[...]

Query optimizations in Odoo versions 17–19 for faster PostgreSQL performance
Posted by Umair Shahid in Stormatics on 2025-09-26 at 13:50

We get a lot of queries about performance optimization for PostgreSQL from organizations using Odoo. The platform has been built entirely on PostgreSQL, and Odoo’s ORM and database layer have undergone several performance‑oriented changes across releases 17, 18, and 19 to reduce SQL overhead and take better advantage of PostgreSQL.

I have always maintained that SQL optimization is the most significant factor contributing towards database performance improvements, so I am very happy to see these improvements being rolled out. 

Last week, Odoo released version 19 and ended support for version 16. If that isn’t motivation enough, I have gathered some of the performance improvements made in the system after version 16. Hopefully, that gives you enough incentive to upgrade Odoo in order to reduce performance woes. 

The optimizations below are drawn from the official changelog and pull-request discussions, highlighting the most relevant changes that affect query generation and execution. 

Odoo 17.x

1. Combining search and read queries: In Odoo 17.4, the implementation of searching and reading was refactored so that search()/search_read() can perform the search and fetch fields in the same SQL query. Two new methods – search_fetch() and fetch() – were introduced to minimize the number of queries needed for search_read. By fetching the required fields at the time of the search, the ORM avoids executing separate SELECT statements for each record.

2. Limit parameter for search_count(): Odoo 17 added a limit argument to search_count(). When a UI element only needs an approximate count, the ORM stops counting after the limit and returns the approximate number, avoiding full table scans.

3. Specific index types: Developers can now specify the type of index when declaring a field. Supported types include:

  • btree (default).
  • btree_not_null (used when most reco
[...]

Run PostgreSQL 18 on Kubernetes Today with CloudNativePG
Posted by Gabriele Bartolini in EDB on 2025-09-26 at 11:30

PostgreSQL 18 is officially released, packed with improvements for performance, authentication, operations, and security. In this article, I’ll show you how to run it on Kubernetes from day one with CloudNativePG. I will summarise key new features like asynchronous I/O and OAuth 2.0, as well as the extension_control_path parameter. This is a simple but critical feature for operational control in cloud-native environments, and one our team at CloudNativePG and EDB was proud to help contribute to PostgreSQL. You’ll see how this reflects our close collaboration with the upstream community, learn about our new half-sized minimal container image, and be able to follow my guide to deploy your first cluster today.

Exploration: CNPG Point In Time Recovery
Posted by Umut TEKIN in Cybertec on 2025-09-26 at 04:00

Introduction

In our CNPG series, we have mentioned that backups are crucial in every database system in case of any disaster. PostgreSQL has powerful recovery capabilities as well as backup capabilities, including the ability to restore a database cluster to a specific moment in time. This is extremely useful in scenarios where we need to recover from user errors—such as accidental data deletion, wrong updates, dropped tables, or even dropped databases.

We will now walk through how to perform Point-in-Time Recovery (PITR) with CloudNativePG (CNPG) using a Barman Cloud plugin for backups.


Preparing the Scenario

First, we need to simulate a typical disaster. In our CNPG cluster instance:

postgres=# create database pitr_test;
CREATE DATABASE

postgres=# \c pitr_test
You are now connected to database "pitr_test" as user "postgres".

Create a sample table and load some data:

CREATE TABLE test_data (
    id SERIAL PRIMARY KEY,
    name TEXT,
    age INT,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO test_data (name, age, created_at)
SELECT
    'User_' || gs,
    (random() * 60 + 18)::INT,
    NOW() - (random() * interval '365 days')
FROM generate_series(1, 10000) gs;

Update some rows to check later:

UPDATE test_data
SET age = 1
WHERE id <= 15;

Note the current Write-Ahead Log (WAL) position and timestamp to recover exactly to this point with PITR:

SELECT pg_current_wal_lsn();
 pg_current_wal_lsn 
--------------------
 0/85BDBB0

SELECT now();
             now              
------------------------------
 2025-09-19 13:39:18.89106+00

Now, simulate a disaster:

DROP TABLE test_data;

At this point, the table is dropped and the only option we have to recover the table to the timestamp we want is PITR.


Backup and WAL Archiving

We already have a backup and WAL archive system in place (as covered in the CNPG Backup with Barman Cloud blog). For point in time recovery, WAL files are essential since they contain all transactions after the last backup.

We can confir

[...]

Contributions for the week 38
Posted by Floor Drees in postgres-contrib.org on 2025-09-25 at 15:05

Daria Aleshkova, Oleksii Kliukin, Sergey Dudoladov and Andreas Scherbaum organized the PostgreSQL Berlin September Meetup. Divyendu Singh and Marko Coha presented talks.

The Prague PostgreSQL meetup met for a September edition on the 22nd. Pavel Lukasenko talked about “From SQL to Energy Charts: Scalable Energy Metering with PostgreSQL, Kubernetes, and Longhorn”, Josef Machytka presented “PostgreSQL Connections Memory Usage on Linux: How Much, Why, and When?” Gülçin Yıldırım Jelínek, Mayuresh B, Pavel Hák, Pavel Stěhule, and Tomas Vondra are organizers for the group.

PGDay Napoli took place on September 25; we recognized the organizers and talk selection committee in a previous post.

Speakers:

  • Patrick Lauer - Postgres with many data: To MAXINT and beyond
  • Pep Pla - New kid on the block: Introduction to PostgreSQL Support in ProxySQL 3
  • Derk van Veen - Leveraging table partitioning for query performance and data archiving
  • Gülçin Yıldırım Jelinek - Anatomy of Table-Level Locks in PostgreSQL
  • Daniele Varrazzo - Make the most of PostgreSQL using Python
  • Ellyne Phneah - Postgres and People: The neuroscience and psychology of why we stick around
  • Priyanka Chatterjee - All about Common Vulnerabilities and exposures in PostgreSQL

Lightning talk speakers:

  • Floor Drees - The PostgreSQL Europe Diversity Committee
  • Jonathan Gonzalez - Building Better Postgres Images with Docker Bake
  • Jonathan Battiato - Storage Performance Matters: Benchmarking PostgreSQL in Kubernetes
  • Gülçin Yıldırım Jelinek - How to make Postgres Scale to Zero
  • Ellyne Phneah - Stop chasing ghosts: A smarter way to tune PostgreSQL performance
  • Federico Campoli - Spaghetti Carbonara the PostgreSQL way

Postgres 18: OLD and NEW Rows in the RETURNING Clause
Posted by Brandur Leach in Crunchy Data on 2025-09-25 at 15:00

Postgres 18 was released today. Well down the page from headline features like async I/O and UUIDv7 support, we get this nice little improvement:

This release adds the capability to access both the previous (OLD) and current (NEW) values in the RETURNING clause for INSERT, UPDATE, DELETE and MERGE commands.

It's not a showstopper the way async I/O is, but it is one of those small features that's invaluable in the right situation.

A simple demonstration with UPDATE to get all old and new values:

UPDATE fruit
SET quantity = 300
WHERE item = 'Apples'
RETURNING OLD.*, NEW.*;

 id |  item  | quantity | id |  item  | quantity
----+--------+----------+----+--------+----------
  5 | Apples |      200 |  5 | Apples |      300
(1 row)

Detecting new rows with OLD on upsert

Say we're doing an upsert and want to differentiate between whether a row sent back by RETURNING was one that was newly inserted or an existing row that was updated. This was possible before, but relied on an unintuitive check on xmax = 0 (see the very last line below):

INSERT INTO webhook (
    id,
    data
) VALUES (
    @id,
    @data
)
ON CONFLICT (id)
    DO UPDATE SET id = webhook.id -- force upsert to return a row
RETURNING webhook.*,
    (xmax = 0) AS is_new;

The statement relies on xmax being set to zero for a fresh insert as an artifact of Postgres' locking implementation (see a full explanation for why this happens). It works, but isn't a guaranteed part of the API, and could conceivably change at any time.

In Postgres 18, we can reimplement the above so it's more legible and doesn't rely on implementation details. It's easy too -- just check whether OLD is null in the returning clause:

INSERT INTO webhook (
    id,
    data
) VALUES (
    @id,
    @data
)
ON CONFLICT (id)
    DO UPDATE SET id = webhook.id -- force upsert to return a row
RETURNING webhook.*,
    (OLD IS NULL)::boolean AS is_new;

Access to OLD and NEW will undoubtedly have many other useful cases, but this is one example that lets us imp

[...]

PGDay Lowlands 2025 and Getting Postgres to the Next Level
Posted by Jimmy Angelakos on 2025-09-25 at 14:00

Obligatory photo from PGDay Lowlands 2025 talk. Credit: honkingelephant.com

I recently attended PGDay Lowlands 2025, an awesome PostgreSQL Community Event which took place on September 12th in the wondrous environment of the Rotterdam Zoo. It was a fantastic and focused day of PostgreSQL content, made all the more memorable by being in the company of amazing fish, sea turtles, and penguins!

Obligatory sea turtle photo from PGDay Lowlands 2025

I was honoured to kick off the conference, presenting a talk that I co-wrote with Hettie Dombrovskaya. We've been collecting material and ideas for this for over a year, and it was exciting to finally share it.

The talk: How Do We Get Postgres to the Next Level?

Another obligatory photo from PGDay Lowlands 2025 talk. Credit: honkingelephant.com

  • Our presentation, "How Do We Get Postgres to the Next Level?", was a call to action to think strategically about the future of the project. We argued that to continue its incredible growth and solidify its leadership position, we need to focus on three core pillars: improving code quality, strategically adding new features, and kicking into overdrive our advocacy and community growth. We touched on the need for things like a dedicated performance team, a more modular development process, standardization on what PostgreSQL compatibility means, and improving the developer experience to attract new contributors.

Sparking the Conversation

Packing over a year's worth of research into a 20-minute talk was a challenge, but our main goal was to spark a conversation. Based on the reaction, I think we succeeded! The ideas were received positively, and as we'd hoped, they also challenged some existing perspectives.

What was most rewarding were the hours of "hallway track" discussions that followed. It was fantastic to hear that so many of our points resonated with attendees. A recurring theme in these conversations was a shared sense that for Postgres to continue scaling, we have to brin

[...]

What are SLRUs and Multixacts in Postgres? What can go wrong?
Posted by Andrew Atkinson on 2025-09-25 at 11:15

In this post we’ll cover two types of Postgres internals.

The first internal item is an “SLRU.” The acronym stands for “simple least recently used.” The LRU portion refers to caches and how they work, and SLRUs in Postgres are a collection of these caches.

SLRUs are small in-memory item stores. Since they need to persist across restarts, they're also saved into files on disk. Alvaro calls SLRUs "poorly named" for a user-facing feature. If they're internal, why are they worth knowing about as Postgres users?

They’re worth knowing about because there can be a couple of possible failure points with them, due their fixed size. We’ll look at those later in this post.

Before getting into that, let’s cover some basics about what they are and look at a specific type.

Main purpose of SLRUs

The main purpose of SLRUs is to track metadata about Postgres transactions.

SLRUs are a general mechanism used by multiple types. Like a lot of things in Postgres, the SLRU system is extensible which means extensions can create new types.

The "least recently used" aspect might be recognizable from cache systems. LRU refers to how the oldest items are evicted from the cache when it's full, and newer items take their place. This is because the cache has a fixed amount of space (measured in 8KB pages) and thus can only store a fixed number of items.

Old SLRU cache items are periodically cleaned up by the Vacuum process.
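A quick way to peek at these caches from SQL is the pg_stat_slru view (available since Postgres 13), which shows hits, reads and writes per SLRU:

SELECT name, blks_hit, blks_read, blks_written, truncates
FROM pg_stat_slru
ORDER BY name;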

What about the buffer cache?

The buffer cache (sized by configuring shared_buffers) is another form of cache in Postgres. Thomas Munro proposed unifying the SLRUs and buffer cache mechanisms.

However, as of Postgres 17 and the newly released 18 (released September 25, 2025), SLRUs are still their own distinct type of cache.

What types of data are stored in SLRUs?

What type of data is tracked in SLRUs?

Transactions are a core concept for relational databases like Postgres. Transactions are abbreviated “Xact,” and Xacts are one of the types of data stored in SLRU

[...]

PostgreSQL: "UPDATE … RETURNING" made even better
Posted by Hans-Juergen Schoenig in Cybertec on 2025-09-25 at 06:00

PostgreSQL 18 offers a couple of new features such as asynchronous I/O (aio), improvements around indexing, and a lot more. However, there is one feature that has really caught my attention. While it might go unnoticed by many users, it is really powerful and, to me, somehow represents everything I like about PostgreSQL.

The improvement I want to focus on today is related to the way the "RETURNING-clause" works in PostgreSQL 18.

Using RETURNING-clauses

Before we dive into the new functionality, we have to create a simple test table containing a single row of data:

test=# CREATE TABLE t_test AS 
        SELECT 1 AS id, 'Hans' AS name;
SELECT 1

The way RETURNING works is that it allows us to see what a DML statement has changed. 

Here is an example:

test=# UPDATE t_test 
        SET   id = 2 
        WHERE id = 1 
        RETURNING *;
 id | name 
----+------
  2 | Hans
(1 row)

UPDATE 1

The statement finds our row and modifies the ID by changing it from 1 to 2. The RETURNING-clause at the end can be used to fetch whatever columns we want from the new rows that have been written to the table. By placing a star, we will get the entire tuple, but we can of course simply list single columns or even expressions (depending on our needs).

This feature has existed for many years now and has been adopted by a fair number of applications.

RETURNING in PostgreSQL 18 and beyond

What is new in PostgreSQL is support for the NEW and OLD keywords, which some users might already know from PostgreSQL trigger functions. What is the purpose of those two keywords? Well, NEW represents the new row as it is sent to the table by our UPDATE statement and OLD represents the row as it has been before the modification. 

The following listing shows how the new syntax can be applied:

test=# UPDATE t_test 
        SET   id = 3 
        WHERE id = 2 
        RETURNING NEW.*;
 id | name 
----+------
  3 | Hans
(1 row)

UPDATE 1

Essentially, the example does the same as the previous one - we modify one

[...]

PostgreSQL 18: part 5 or CommitFest 2025-03
Posted by Pavel Luzanov in Postgres Professional on 2025-09-25 at 00:00

September 25th marks the release of PostgreSQL 18. This article covers the March CommitFest and concludes the series covering the new features of the upcoming update. This article turned out quite large, as the last March CommitFest is traditionally the biggest and richest in new features.

You can find previous reviews of PostgreSQL 18 CommitFests here: 2024-07, 2024-09, 2024-11, 2025-01.

Client and Server Applications

  • pg_dump[all]/pg_restore: dump and restore statistics
  • Collect statistics after server upgrade
  • pg_upgrade --swap: migrate directories to a new cluster
  • pg_combinebackup --link: hard links instead of copying files
  • pg_dump[all], pg_restore: --no-policies
  • pg_createsubscriber: enable two_phase for all subscriptions
  • pg_createsubscriber: remove publications on subscriber
  • pg_createsubscriber: create subscriptions for every database on the publication server
  • psql: pipeline mode
  • psql: current connection information
  • psql: set the default time interval for \watch
  • psql: \dx displays the default extension version

Monitoring

  • NUMA: monitoring tools for non-uniform memory access architectures
  • pg_stat_get_backend_wal: WAL statistics for a specific process
  • EXPLAIN: actual rows shown to two decimal places
  • EXPLAIN: an interface for adding more options to the command
  • Log failed lock attempts
  • Log session time-to-connect
  • log_line_prefix: local server IP address
  • pg_stat_statements: normalize commands with lists of constants in IN
  • Additional WAL buffer overflow monitoring tools
  • Track vacuum and analyze delays

[Auto]vacuum and Analysis

  • vacuum_truncate: manage the truncation of empty pages at the end of a table
  • More frequent autovacuuming of dead tuples in large tables
  • Autovacuum triggers sooner after new rows are inserted
  • Eager freezing to amortize aggressive vacuum

Performance

  • Asynchronous input/output
  • io_combine_limit: increase maximum allowed limit to 1MB
[...]

Tuning AIO in PostgreSQL 18
Posted by Tomas Vondra on 2025-09-24 at 10:00

PostgreSQL 18 was stamped earlier this week, and as usual there are a lot of improvements. One of the big architectural changes is asynchronous I/O (AIO), allowing asynchronous scheduling of I/O, giving the database more control and better utilizing the storage.

I’m not going to explain how AIO works, or present detailed benchmark results. There have been multiple really good blog posts about that. There’s also a great talk from pgconf.dev 2025 about AIO, and a recent “Talking Postgres” podcast episode with Andres, discussing various aspects of the whole project. I highly suggest reading / watching those.

I want to share a couple of suggestions on how to tune AIO in Postgres 18, and explain some inherent (but not immediately obvious) trade-offs and limitations.
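For orientation, a few of the relevant settings can be inspected directly from SQL (GUC names per the PostgreSQL 18 documentation; the ALTER SYSTEM value below is purely illustrative, and io_method itself can only be changed with a server restart):

SHOW io_method;        -- 'worker' by default; 'io_uring' and 'sync' are the alternatives
SHOW io_workers;       -- size of the I/O worker pool used when io_method = 'worker'
SHOW io_combine_limit; -- largest I/O Postgres will merge into a single request

ALTER SYSTEM SET io_workers = 8;   -- illustrative value only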

Parallel Apply of Large Transactions
Posted by Amit Kapila in Fujitsu on 2025-09-24 at 09:39
Logical replication in PostgreSQL has steadily evolved since its introduction in version 10. In a previous blog post, I discussed how PostgreSQL 14 introduced streaming of large transactions. PostgreSQL 16 took this further by enabling parallel apply of large transactions via a non-default subscription option. Now, with PostgreSQL 18, parallel apply is the default behavior—marking a significant milestone in replication performance and scalability.

In this post, we’ll explore how parallel apply works for large transactions and touch upon future plans for small transactions.

Enabling Parallel Apply
From PostgreSQL 18 onwards, parallel apply is enabled by default. To explicitly configure it, users can set the streaming parameter to parallel when creating a subscription:

postgres=# CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub WITH (streaming = parallel);
NOTICE:  created replication slot "sub" on publisher
CREATE SUBSCRIPTION
postgres=# SELECT subname, substream FROM pg_subscription;
 subname | substream
---------+-----------
 sub     | p
(1 row)

You can control the degree of parallelism using the max_parallel_apply_workers_per_subscription parameter. By default, this is set to 2, allowing two large transactions to be applied in parallel per subscription.
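A minimal sketch of raising that limit (the value is illustrative; like the other subscription worker settings, it is assumed here to take effect after a configuration reload):

ALTER SYSTEM SET max_parallel_apply_workers_per_subscription = 4;
SELECT pg_reload_conf();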

How Parallel Apply Works

Parallel apply significantly improves the efficiency of large transaction handling by not waiting for the COMMIT message from the publisher before applying changes. Previously, the subscriber would write all streamed data to temporary files and apply them only after receiving the commit record.




Architecture Overview

When the first stream of a large transaction arrives, the leader apply worker (LA) assigns a parallel apply worker (PA)—if available. The LA sends changes to the PA via a dedicated

[...]

PgPedia Week, 2025-09-14
Posted by Ian Barwick on 2025-09-23 at 21:31
PostgreSQL 19 changes this week

  • random(min, max): date and timestamp variants added
  • log_lock_waits now set to on by default
  • ERRCODE_DATA_CORRUPTED error codes will be emitted if VM corruption is discovered during vacuum operations

PostgreSQL 18 articles

  • Get Excited About Postgres 18 (2025-09-12) - Elizabeth Garrett Christensen / Crunchy Data
  • 3 Features I am Looking Forward to in PostgreSQL 18 (2025-09-09) - Umair Shahid / Stormatics

more...

Changes to NOT NULL in Postgres 18
Posted by Álvaro Herrera in EDB on 2025-09-23 at 19:49

After a very long development period, we finally completed the project to rework NOT NULL constraints in PostgreSQL.  This has long been a desire of the Postgres development community, and we finally pulled it off for version 18, which has made me very happy.

What changed?

The most visible user change is that NOT NULL constraints now have names. These names are visible in psql under the \d+ command. For instance, looking at the aircraft table in the postgres_air database, we can see something like this:

=# \d+ aircraft
                                       Table "postgres_air.aircraft"
  Column
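The excerpt is cut off here; a minimal way to see the newly named constraints for yourself is to query the catalog, where NOT NULL constraints now appear with contype 'n' (a sketch against any Postgres 18 database):

SELECT conrelid::regclass AS table_name, conname
FROM pg_constraint
WHERE contype = 'n'
ORDER BY 1, 2;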

Postgres’ Original Project Goals: The Creators Totally Nailed It
Posted by Elizabeth Garrett Christensen in Crunchy Data on 2025-09-23 at 13:00

I had a chance last week to sit down and read the original academic paper announcing Postgres as a platform and the original design goals from 1986. I was just awestruck at the forethought - and how the original project goals laid the foundation for the database that seems to be taking over the world right now.

The PostgreSQL creators totally nailed it. They laid out a flexible framework for a variety of business use cases that would eventually become the most popular database 30 years later.

The paper outlines 6 project goals:

  1. better support for complex objects for a growing world of business and engineering use cases

  2. provide user extendibility for data types, operators and access methods

  3. provide facilities for active databases like alerters and triggers

  4. simplify process for crash recovery

  5. take advantage of upgraded hardware

  6. utilize Codd’s relational model

Let's look at all of them in reference to modern features of Postgres.

1) Objects and data types for a growing world of business and engineering use cases

Postgres has a rich and flexible set of native data types that are designed to meet a vast array of business use cases, from simple record-keeping to complex data analysis.

Numeric Types like SMALLINT and INTEGER are used for whole numbers, while BIGINT might be used for a user's unique ID or primary keys. Arbitrary-precision types like NUMERIC and DECIMAL are used where exact precision is critical, especially for money in Postgres. Floating-Point Types like REAL or DOUBLE PRECISION can be used for scientific or engineering calculations where absolute precision isn't as important as the range of values. You also have your UUID (indexable UUIDs in Postgres 18) for distributed systems and secure URLs.

Character Types like VARCHAR(n) or CHAR(n) store variable-length text up to a specified maximum length (n) and only use as much storage as needed for the actual text.

Date/Time

[...]

PostgreSQL at the Zoo – My PGDay Lowlands 2025 Recap
Posted by Cornelia Biacsics in Cybertec on 2025-09-23 at 06:00

Summer is slowly fading, and that means one thing: the PostgreSQL conference season in Europe is officially back. After PGDay Austria on September 4 and PGDay UK on September 9, the next stop was PGDay Lowlands in Rotterdam on September 12. Three conferences in such a short time is a clear sign that the community is back in full swing after the summer break. And honestly—it felt great to be part of it again.

The Journey to Rotterdam

My conference adventure began a little earlier than the official start. I first traveled from Austria to Belgium to visit our partner Zebanza together with Jan Karremans. We met at their new office at Piros (in Everberg) for a strategic alignment session. It’s always rewarding to step out of the online bubble and sit together in person—discussing strategy, exchanging ideas, and building stronger ties.

From there, our journey continued to Amsterdam for a customer meeting, followed by another strategy session with our partner Nibble IT.

The plan sounded simple: hop on one train, work a bit along the way, and arrive fresh and prepared. But the Dutch railway system had other ideas. The direct connection was canceled, and we ended up taking four trains instead of one.

What could have been a stressful detour actually turned into a positive surprise. It gave me more time to enjoy the ride, watch the changing landscapes of Belgium and the Netherlands, and reflect on the upcoming days. Sometimes, detours are exactly what you need.

PGDay Lowlands Warmup Meetup

The official start of PGDay Lowlands happened the evening before the main event, with the warmup meetup. It combined short talks, pizza, and drinks—a casual setting that perfectly set the stage for the main conference.

For me, this was the first moment to reconnect with familiar faces from the PostgreSQL community, meet new ones, and exchange ideas in a relaxed atmosphere.

Jan Karremans also gave a Lightning Talk during the meetup, introducing the Open Alliance for PostgreSQL Education. The anno

[...]

Understanding PostgreSQL WAL and optimizing it with a dedicated disk
Posted by warda bibi in Stormatics on 2025-09-22 at 10:23

If you manage a PostgreSQL database with heavy write activity, one of the most important components to understand is the Write-Ahead Log (WAL). WAL is the foundation of PostgreSQL’s durability and crash recovery as it records every change before it’s applied to the main data files. But because WAL writes are synchronous and frequent, they can also become a serious performance bottleneck when they share the same disk with regular data I/O.

In this guide, we’ll first break down what WAL is and how it works internally, then walk through the exact steps to move pg_wal to a dedicated disk. You’ll also learn how to validate the setup and roll back safely if needed.

What Exactly is the WAL?

PostgreSQL operates on a simple principle:

No modification to a data file can happen until the logical change describing it has been permanently recorded in the WAL.

WAL is a sequential record of all changes made to the database. It ensures that a record of every change exists on stable storage before the actual data pages on disk are updated.
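A quick way to see this log from SQL is to ask for the current WAL insert position and the segment file it lives in:

SELECT pg_current_wal_lsn() AS current_lsn,
       pg_walfile_name(pg_current_wal_lsn()) AS wal_segment;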

How WAL works internally

When we run an insert query, the new row is first placed into the appropriate table page within PostgreSQL's shared memory area (the buffer pool). This memory page is now "dirty" because it differs from the version on disk.

Before this dirty page can be flushed back to its permanent location on disk, the database engine creates a WAL record. This isn’t the SQL command itself, but a low-level, binary representation of the exact change made. Each WAL record contains:

  • Transaction ID – Which transaction made the change
  • Page information – Which database page was modified
  • Redo information – How to reconstruct the change
  • Undo information – How to reverse the change (for rollbacks)
  • CRC checksum – For detecting corruption

This WAL record is first written to a small, fast area in memory called the WAL Buff

[...]

pgsql_tweaks Version 1.0.1 Released
Posted by Stefanie Janine Stölting on 2025-09-21 at 22:00

pgsql_tweaks is a bundle of functions and views for PostgreSQL

The source code is available on Codeberg.

The extension is also available on PGXN.

The extension is also available through the PostgreSQL rpm packages.

Release of pgsql_tweaks 1.0.1

This release corrects some wrong links in the help file and on PGXN; no source code was changed.

Before releasing 1.0.0, it was not possible to check the links with a link checker, as the pages had not yet been published. As a result, some links pointed to a documentation structure that was only correct at the beginning of writing the documentation.
