Latest Blog Posts

All Your GUCs in a Row: autovacuum_vacuum_insert_scale_factor and autovacuum_vacuum_insert_threshold
Posted by Christophe Pettus in pgExperts on 2026-05-08 at 01:00
PostgreSQL 13 added insert-triggered autovacuum to solve a critical problem: append-only tables never vacuumed, breaking index-only scans and delaying tuple…

The Maintainer Is Not the Owner
Posted by Christophe Pettus in pgExperts on 2026-05-07 at 17:30
When a maintainer rewrites a project with AI and changes its license, they've crossed a line.

pgEdge Control Plane Adds Supporting Services and a Preview of systemd Support
Posted by Antony Pegg in pgEdge on 2026-05-07 at 17:20

Most Postgres management tools ask you to pick a lane. You can manage databases, or you can manage the services around them. You can run in containers, or you can run on bare metal. You get one deployment model, one operational surface, one set of assumptions about how your infrastructure works.

The pgEdge Control Plane just added two features that refuse to pick a lane: Supporting Services and systemd Support. Together, they push the Control Plane into territory that, as far as we can tell, nobody else in the Postgres world is covering. Supporting Services is fully available, while systemd support is currently a Preview feature.

Supporting Services: More Than Just Postgres

Here's the thing about enterprise Postgres in 2026: the database is only part of the story. Your AI agents need an MCP server to talk to the data, your applications need a REST API to query it, and your knowledge base needs a RAG server to index and retrieve from it. These services aren't optional extras; they're what make the database useful in production.

Until now, you managed those services separately: different deployment pipelines, different configuration, different credentials, different monitoring. The database lived in one world and the services that depended on it lived in another, even though they're fundamentally coupled. When the database moves, the services need to follow; when credentials rotate, every connected service needs to know about it; and when you scale out, everything needs to come along for the ride.

Supporting Services in the Control Plane fixes this by treating the database and its surrounding services as a single declarative unit. You add an array to the same JSON spec you already use for your database, and the Control Plane handles deployment, credential provisioning, health checking, and lifecycle management for everything together. That's a two-node distributed database with Spock multi-master replication, an MCP server for AI agent access on the US node, and PostgREST instances on both nodes for REST API [...]

Eight Bytes Is the Easy Part
Posted by Christophe Pettus in pgExperts on 2026-05-07 at 15:00
PostgreSQL 19 expands MultiXactOffset to 64 bits, eliminating a real outage failure mode. So when do regular transaction IDs get the same treatment?

You have a Patroni leader election. You are only halfway to PostgreSQL high availability.
Posted by Umair Shahid in Stormatics on 2026-05-07 at 10:40

A PostgreSQL primary loses power at 2am. Writes resume in under thirty seconds. The on-call engineer reads the alert in the morning, sees that the cluster healed itself, and goes back to coffee. That is the outcome PostgreSQL high availability is supposed to deliver.

A working Patroni cluster, on its own, gets you partway there. The leader election runs. A standby gets promoted. The cluster state in etcd stays consistent. Then the application keeps trying to reach an IP address that now points at the wrong node, the old primary needs a manual rejoin, and the on-call engineer is on a conference bridge instead of in bed.

I have seen this pattern enough times to call it the default. The cluster does its job. The application waits on a human. The runbook comes out. RTO passes the SLA. Everyone agrees afterward that “we should look at HA more seriously.”

The arithmetic of recovery time

The case for automation is mostly arithmetic.

When the cluster heals itself, the RTO clock starts at the failure detection and stops at the application’s first successful write. With Patroni’s TTL set to 30 seconds, a routing layer that follows promotion within another second or two, and an application that retries with backoff, the whole sequence finishes in under a minute. Often well under.

Bring a human into the loop, and a different clock starts. The monitoring system needs to detect the failure, group it into an alert, and deliver it to the on-call engineer’s pager. That alone is often 30 to 60 seconds. The engineer needs to wake up, find a laptop, log into the bastion, and load enough context to know what is happening. Even for a sharp on-call engineer ready at the keyboard, that is 5 to 10 minutes of best-case effort. Then comes the investigation: which node failed, what state the cluster is in, what is safe to do next. That is another 5 to 30 minutes depending on how clean the runbook is and how confident the engineer f

[...]
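The figures quoted above make the comparison concrete. As a rough worked sketch (illustrative round numbers taken from the paragraphs above, not measurements):

```
Automated recovery:
  failure detection (Patroni TTL)       ~30 s
  routing layer follows promotion       ~1-2 s
  application retries with backoff      ~seconds
  total                                 well under 1 minute

Human-in-the-loop recovery:
  alert detection and paging            30-60 s
  wake up, log in, load context         5-10 min
  investigate and decide                5-30 min
  total                                 tens of minutes before the
                                        first recovery command runs
```

The two paths differ by more than an order of magnitude, which is the whole case for automating the parts around the leader election, not just the election itself.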

PG DATA 2026: The talks I am most excited about. Part 4 (the last one!)
Posted by Henrietta Dombrovskaya on 2026-05-07 at 02:08

That’s the last post of the series about the talks at the upcoming PG DATA 2026 conference, covering the remaining Friday talks.

Part 1

Part 2

Part 3

First, I wanted to mention two more talks presented by PG DATA organizers: Comparing Apples to Oranges with Postgres’ Type System by Dian Fay and Master Upgrading PostgreSQL, Using Real World stories and examples by Pat Wright. Dian’s talk is about Postgres types, and I can’t say enough how much I love the ability to create new types! Probably even more than I love Postgres extensions! Pat’s talk is about real-life upgrade stories, and although we know way too well that our own upgrades will present us with our unique challenges, it’s still worth learning from other people’s experience 🙂

Yet another real-life story is Apoorv Garg’s Electric SQL: Local-first Architecture. Building mobile applications is not something I am familiar with, but it looks like developers face the familiar problems of reliability, performance, and security, which become especially challenging when the network can disappear at any moment.

Egor Tarasenko’s presentation, Streamlining Data Ingestion and Transformation with Trino + dbt, addresses a well-known problem: handling DDL changes in the source when they are not promptly communicated to the streaming process. I’ve seen multiple solutions to this problem, but none of them appeared to be perfect, so I’m very interested to hear Egor’s perspective.

Several presentations will address understanding and monitoring query execution. First is Alfredo Rodriguez’s presentation How to understand EXPLAIN without dying in the attempt. I remember Alfredo presenting at PG Day Chicago for the first time, and I know he is happy to be back with his by now well-known presentation. Then comes Mohsin Ejaz’s “Why your PostgreSQL tuning guide might be wrong (and what to do about it),” in which he shares DBTune’s perspective. And finally, Postgres plan monitoring and management in practice by Lukas Fittl. I am a great admirer of Lu

[...]

All Your GUCs in a Row: autovacuum_naptime, autovacuum_vacuum_cost_delay, autovacuum_vacuum_cost_limit
Posted by Christophe Pettus in pgExperts on 2026-05-07 at 01:00
Three autovacuum parameters control how often PostgreSQL vacuums, how hard it works, and how long it pauses.

Christophe’s Seven Rules of Disaster Response
Posted by Christophe Pettus in pgExperts on 2026-05-06 at 23:00
When your database catches fire, panic is optional. Learn seven battle-tested rules that turn chaos into a coordinated response.

MultiXact Members at 64 Bits: One Less Wraparound to Worry About
Posted by Christophe Pettus in pgExperts on 2026-05-06 at 19:00
PostgreSQL 19 eliminates the 32-bit MultiXactOffset ceiling that has crashed high-concurrency FK-heavy clusters at 3 a.m.

What a Data Lake Actually Is (and why you probably don’t need one)
Posted by Christophe Pettus in pgExperts on 2026-05-06 at 15:00
Most organizations that build data lakes don't need them.

I Built Three GitHub Codespaces Walkthroughs for Our Products. Would You Use Them?
Posted by Antony Pegg in pgEdge on 2026-05-06 at 09:18

I need your feedback to either convince Marketing that I’m a genius and they should put these GitHub Codespaces Walkthroughs on our website, or to tell me I need to keep looking for different ways to make Quickstarts easier.

Bi-directional logical replication is not a simple thing. It's a genuinely complicated problem, and getting it right across multiple nodes in a distributed PostgreSQL cluster is hard. That's what makes what we do at pgEdge special: we've done the hard engineering so you don't have to. Multi-master replication, conflict resolution, failover, all of it wrapped up so you can have this capability without needing to be a rocket scientist.

But there's still a gap between "this product exists" and "I've actually tried it," and that gap is almost always the setup. You need multiple Postgres instances, a replication extension configured between them, and enough infrastructure to actually prove it's working. By the time you've got all of that running on your laptop, you've burned an afternoon and you haven't learned anything about distributed Postgres yet. You've learned about Docker networking.

I wanted to see if I could close that gap. I’ve created three GitHub Codespaces walkthroughs, each targeting a different pgEdge product, each designed to take you from zero to a running environment without installing a single thing on your machine. The issue is that I have no idea whether developers would actually find them useful until I get some data - and that is where you, dear Reader, can help me out, just by trying them and letting me know.

Why Codespaces

GitHub Codespaces gives you a full Linux development environment in a browser tab, backed by a container running on GitHub's infrastructure. The free tier gives individual developers 120 core-hours per month (so 60 hours on a 2-core machine, or 30 on a 4-core), which is more than enough to run through all three of these walkthroughs multiple times. For us, the appeal was simple: if you can click a link, you can be inside a working environment in about 60 se[...]

Nordic Cool Meets Parisian Chic Vlog: Two PGDays, One Week
Posted by Pavlo Golub in Cybertec on 2026-05-06 at 07:28

Two conferences. Two cities. Two completely different personalities. And me, somewhere in the middle, trying to keep up. 😄

First stop Helsinki, March 24. Nordic PGDay 2026. The Finns are punctual, focused, and deadly serious about PostgreSQL. Talks start on time. Coffee is strong. Silence is not awkward, it is just how things are. I loved every minute of it.

Two days later Paris, March 26. pgDay Paris, the 10th edition! Same elephant, completely different atmosphere. People arrive fashionably late, conversations go long, and somehow everything still works out beautifully. C'est la vie. 

Same community, same passion for open source, but such different energy. If you ever wondered whether PostgreSQL people have a cultural identity — yes, they do. And it depends heavily on latitude. I grabbed my camera to capture both! 😅

 

The post Nordic Cool Meets Parisian Chic Vlog: Two PGDays, One Week appeared first on CYBERTEC PostgreSQL | Services & Support.

All Your GUCs in a Row: autovacuum_multixact_freeze_max_age
Posted by Christophe Pettus in pgExperts on 2026-05-06 at 01:00
Prevent MultiXact ID wraparound by controlling when autovacuum freezes old locks.

Managed Postgres, Examined: Amazon Aurora PostgreSQL
Posted by Christophe Pettus in pgExperts on 2026-05-05 at 13:00
Aurora PostgreSQL separates compute and storage, replicating redo records across three Availability Zones.

How are committers selected?
Posted by Tomas Vondra on 2026-05-05 at 10:00

At a couple of recent conferences, I got to describe the process Postgres uses to select new committers/maintainers. Usually the audience was users and developers of Postgres, but in some cases the process was unclear even to experienced Postgres contributors.

The official docs are rather brief, and don’t explain various important details. Let me explain how I understand the informal process, who’s responsible for what, and so on.

This post is not meant to give you advice on how to become a committer, that’s a far more subjective question. Perhaps in some future post, not sure yet.

CYBERTEC's contributions to PostgreSQL 19
Posted by Christoph Berg in Cybertec on 2026-05-05 at 09:40

The window for new features in PostgreSQL 19 has closed with the Commitfest PG19-Final on April 9th. 182 patches were committed in this commitfest alone (plus more in the preceding ones). No new features are being accepted for PostgreSQL 20 yet; the git branches for 19 and 20 will likely be split off in June. Currently the focus of the PostgreSQL community is on stabilizing PostgreSQL 19 so it is ready for release at the end of summer. If everything goes well, it will be released in September 2026.

Time to have a look at what the CYBERTEC people have been doing during the PostgreSQL 19 cycle since the PG 18 branch was split off in June 2025.

The big change: REPACK CONCURRENTLY by Antonin Houska

One of the most popular CYBERTEC open-source PostgreSQL projects is pg_squeeze, written by Antonin Houska. Like PostgreSQL's built-in VACUUM FULL command and other projects like pg_repack, it lets users compact bloated PostgreSQL tables by rewriting them into fresh tables, reclaiming the wasted storage space caused by DELETE and UPDATE operations. The downside of VACUUM FULL is that it requires an access-exclusive lock on the table, so it cannot be used while the database is being accessed by users. In contrast, pg_squeeze and pg_repack perform the operation online. They even support write operations while the table is copied over, duplicating writes to the copy. pg_repack does that the traditional way, by creating a set of triggers on the table. CYBERTEC's pg_squeeze uses modern PostgreSQL mechanisms, setting up logical replication between the old and the new table for the duration of the operation. Both methods still need an access-exclusive lock at the end of the operation to swap the new table into the place of the old one, but that is a quick, constant-time operation.

Some time ago, PostgreSQL committer Álvaro Herrera approached Antonin asking if CYBERTEC would be willing to donate pg_squeeze for inclusion into PostgreSQL itself. We were of course happy to support the idea. Antonin and Álvaro put i

[...]

Contributions for week 17, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-05-05 at 06:02

Gülçin Yıldırım Jelinek organized the Prague PostgreSQL Meetup on 27 April 2026. Artjoms Iskovs and Andreas Scherbaum presented at the Meetup.

Cornelia Biacsics organized the PostgreSQL User Group Vienna Meetup #2 on 28th April, 2026. Bernd Reiß and Sahil Sharma spoke at the event.

Sydney PostgreSQL User Group met on April 29, 2026, organized by Rajni Baliyan and Roneel Kumar.

Speakers:

  • Diksha Sharma
  • Roneel Kumar
  • Shadab Mohammad

PGDay Armenia happened on April 30, 2026, organized by Emma Saroyan and Sarah Conway.

Call for Papers Committee:

  • Emma Saroyan (Voting Chair)
  • Boriss Mejías
  • Derk van Veen
  • Dian Fay
  • Floor Drees
  • Gülçin Yıldırım Jelínek
  • Ilya Kosmodemiansky
  • Laurenz Albe
  • Teresa Lopes
  • Vik Fearing

Speakers:

  • Robert Treat
  • Varik Matevosyan
  • Xavier Fischer
  • Vlada Pogozhelskaya
  • Dalto Curvelano
  • Vik Fearing
  • Ruslan Senchukov
  • Alicja Kucharczyk
  • Bruce Momjian

Community Blog Posts

All Your GUCs in a Row: autovacuum_max_workers
Posted by Christophe Pettus in pgExperts on 2026-05-05 at 01:00
Raising autovacuum_max_workers above 3 won't speed up vacuum unless you also increase autovacuum_vacuum_cost_limit—the I/O budget is divided among workers, not…

When Open Source Becomes Infrastructure: The pgBackRest Lesson
Posted by Vibhor Kumar on 2026-05-04 at 15:44

The recent archival of pgBackRest has created an important and necessary conversation in the PostgreSQL community, not only about one project, one maintainer, or one repository, but about how we think about open-source software once it becomes part of critical enterprise infrastructure.

For many PostgreSQL users, pgBackRest has never been just another utility. It has been part of the operational backbone of PostgreSQL environments, especially for backup, restore, archive management, recovery planning, and disaster recovery readiness. The pgBackRest site describes the project as a reliable backup and restore solution for PostgreSQL that can scale to large databases and workloads, which helps explain why many organizations came to rely on it in serious production environments.  

That is why this moment matters, and it is also why the conversation deserves care.

The pgBackRest website now carries a notice of obsolescence stating that pgBackRest is no longer being maintained and asking anyone who forks the project to select a new name. The notice also explains that the project had been a passion project for thirteen years, supported for much of that time by corporate sponsorship, late nights, weekends, and contributions from others in the community.  

When something like this happens, it is natural for people to ask a difficult question: if an open-source project can be archived by its original maintainer, was it truly open source?

My answer is yes, but I think that answer is only the beginning of the conversation.

From a licensing perspective, pgBackRest remains open source. The repository uses the MIT License, which grants broad permission to use, copy, modify, merge, publish, distribute, sublicense, and sell copies of the software, provided the copyright and permission notices are preserved.  

So, legally and structurally, pgBackRest remains open source. The code can be forked, the work can continue, and the community can create a successor if there is enough will, capability, f

[...]

Failover Slots, Two Years On
Posted by Christophe Pettus in pgExperts on 2026-05-04 at 15:00
PostgreSQL 19 finally makes logical replication and physical standbys work together safely.

Potential Consequences of Using Postgres as a Job Queue
Posted by Richard Yen on 2026-05-04 at 06:00

This post was originally published on the Microsoft Tech Community Blog.

Introduction

At small scale, using Postgres as a job queue is totally fine, and I’d even say it’s the right call. Fewer moving parts, one less system to manage, ACID guarantees on your jobs. What’s not to love?

The problem is that “small scale” has a ceiling, and the ceiling is lower than most people expect. When you’ve got thousands of concurrent workers hammering a jobs table with SELECT ... FOR UPDATE SKIP LOCKED, things start to behave in ways that aren’t obvious from the application layer. CPU usage creeps up, vacuum sometimes can’t keep up, and in the wait event stats you start seeing ominous entries like LWLock:MultiXactSLRU stacking up across many backends.

This pattern has tripped up teams more than a few times, and it usually plays out the same way: everything works fine in dev and staging, then goes off a cliff in production once the concurrency gets real. So let’s dig into why this happens, and what the alternatives look like.


The Typical Pattern

When using Postgres as a job queue, the standard approach looks something like this:

CREATE TABLE job_queue (
    id         bigserial PRIMARY KEY,
    status     text NOT NULL DEFAULT 'pending',
    payload    jsonb NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    locked_by  text,
    locked_at  timestamptz
);

CREATE INDEX idx_job_queue_status ON job_queue (status) WHERE status = 'pending';

Workers grab jobs with:

UPDATE job_queue
   SET status = 'processing',
       locked_by = 'worker-42',
       locked_at = now()
 WHERE id = (
     SELECT id FROM job_queue
      WHERE status = 'pending'
      ORDER BY created_at
      LIMIT 1
        FOR UPDATE SKIP LOCKED
 )
 RETURNING *;

And then mark them done:

UPDATE job_queue SET status = 'completed' WHERE id = $1;

Some users may DELETE the row entirely. Either way, the lifecycle is: insert, lock-and-update, update-or-delete. Repeated thousands of times per second

[...]

PgQue: Two Snapshots and a Diff
Posted by Christophe Pettus in pgExperts on 2026-05-04 at 03:00
PgQue's zero-mutation queue algorithm eliminates the "queue death spiral" by replacing UPDATE-and-DELETE with snapshot diffing.

All Your GUCs in a Row: autovacuum_freeze_max_age
Posted by Christophe Pettus in pgExperts on 2026-05-04 at 01:00
PostgreSQL wraps transaction IDs every two billion transactions—autovacuum_freeze_max_age stops your database from crashing when it does.

PG DATA 2026. The talks I am most excited about. Part 3
Posted by Henrietta Dombrovskaya on 2026-05-03 at 23:09

After Part 1 and Part 2, here comes the Friday schedule! I hope that on the second day of the conference, I will have more time to attend different talks and actually stay and listen!

My absolutely-most-anticipated Friday talk is Paul Jungwirth’s Migrating to a Temporal Schema. I hope I do not need to explain why. It has been more than ten years since I first tried to implement an asserted versioning model in Postgres, and I took in all the endless possibilities that opened up when you incorporate time into Postgres. I’ve been closely watching Paul’s work for several years, and in 2024, I asked him to present at the Chicago PostgreSQL User Group. That was a blast, but then I really wanted him to give a talk on temporal tables at any Postgres conference, preferably in Chicago :). I am super-excited that temporal features are making their way into Postgres Core, slowly but surely, and waiting for this talk like for no other!

Another speaker whom I encouraged to apply is Denis Magda. I have been following his work for several years, and I really appreciate his contribution to optimizing applications’ interaction with Postgres. Needless to say, I love his book Just Use Postgres! In fact, the talk that Denis will present at PG DATA is just about that: Using modern Postgres capabilities for hybrid search encourages app developers to use Postgres native capabilities in place of “specialized” third-party tools.

I am also happy that Varun Dhawan’s talk was finally accepted for presentation in Chicago! His talk Using Postgres to locate the best coffee near you! demonstrates the versatility of Postgres and presents some non-trivial use cases.

As much as I love seeing new faces at PG DATA, I really appreciate the well-known speakers who often come to Chicago and consistently provide the highest-quality content to conference attendees. We want to bring the world’s best speakers to our local audience, and I am very grateful to all of those who help us to achieve this goal.

This “Gold standard list” inclu

[...]

All Your GUCs in a Row: autovacuum_analyze_scale_factor and autovacuum_analyze_threshold
Posted by Christophe Pettus in pgExperts on 2026-05-03 at 01:00
Autovacuum's ANALYZE threshold formula combines a fixed floor and a percentage of table size.

wal_sender_shutdown_timeout: Now Actually a Timeout
Posted by Christophe Pettus in pgExperts on 2026-05-02 at 18:30
If you have ever run pg_ctl stop -m fast on a primary and watched it hang well past wal_sender_shutdown_timeout, you have met a bug that has been sitting in walsender.c for years. As of commit c0b24b3 on master (Fujii Masao, May 1, reported by Andres Freund via FreeBSD CI), it is fixed. PostgreSQ…

All Your GUCs in a Row: autovacuum
Posted by Christophe Pettus in pgExperts on 2026-05-02 at 01:00
Disable autovacuum and PostgreSQL will cheerfully show you every failure mode in its playbook, from table bloat to transaction ID wraparound.

Two Hundred and Twelve Things
Posted by Christophe Pettus in pgExperts on 2026-05-01 at 15:00
PostgreSQL 19 is an admin-and-monitoring release with 212 items: worker-managed AIO, smarter planner joins, faster diagnostics, and a C11 requirement.

pgxbackup: Continuity Support for pgBackRest
Posted by Christophe Pettus in pgExperts on 2026-05-01 at 13:00
PGX is stepping in to maintain pgBackRest as pgxbackup, ensuring critical fixes and PostgreSQL compatibility for the industry-standard backup tool.

It Depends: Using Session Variables in Postgres
Posted by Shaun Thomas in pgEdge on 2026-05-01 at 05:36

There's been a kind of persistent myth regarding Postgres since I first started using it seriously over 20 years ago: "Postgres doesn't support user variables." This hasn't really been true since version 8.0 way back in 2005. Part of this stems from the fact it doesn't do things the same way as other common database engines.

Why don't we spend a little time exploring the functionality that time forgot?

What Everyone Else Is Doing

Before I delve into the Postgres approach, let's take a look at the competition. If anyone wants to switch to Postgres (as they should), they'll bring along plenty of assumptions.

Let's start with MySQL, the formerly undisputed database king of the LAMP stack. MySQL session variables merely prefix any name with @ to assign a value. Simple, right? It's even possible to use them directly in queries. We don't have to get into the finer minutiae here, as the MySQL documentation on user-defined variables does that job splendidly. The point is that some users expect this level of compatibility and balk when it's missing.

When it comes to SQL Server, things are very similar to MySQL, though perhaps a bit more structured. Once again, the SQL Server documentation on variables is pretty clear about how these work. The primary caveat here is that these are limited to the current batch, making them somewhat tedious to work with in some cases.

The picture for Oracle is a bit different. Oracle calls them substitution variables, and prefixes them with & rather than @. This is also closer to a macro system than a true variable; the SQL*Plus or SQLcl clients substitute the values prior to sending statements to the server. It's not something other drivers or clients can use unless they have added it themselves for compatibility purposes.
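The inline examples did not survive in this excerpt, so here is a minimal sketch of the three styles side by side, against a hypothetical orders table (standard MySQL, SQL Server, and SQL*Plus syntax):

```sql
-- MySQL: @-prefixed session variables, usable directly in queries
SET @uid = 42;
SELECT * FROM orders WHERE user_id = @uid;

-- SQL Server: DECLAREd variables, scoped to the current batch
DECLARE @uid int = 42;
SELECT * FROM orders WHERE user_id = @uid;

-- Oracle (SQL*Plus / SQLcl): &-substitution, expanded client-side
-- before the statement is ever sent to the server
DEFINE uid = 42
SELECT * FROM orders WHERE user_id = &uid;
```

Note that only the first two are server-side variables; the Oracle form is pure text substitution in the client.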

Postgres Has Entered the Chat

So where does Postgres fit into all of this?

If Oracle's & substitution is what you're accustomed to, Postgres actually has a direct equivalent. The psql client supports \set for defining client-side variables. The psql tool has supported these practically sinc[...]
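The excerpt is cut off above; as a minimal sketch of what the post is describing, psql's client-side variables are interpolated with a colon prefix, and Postgres also offers true server-side session variables via custom GUCs (the myapp prefix below is an arbitrary placeholder):

```sql
-- Client-side: psql's \set, substituted as :name before the query is sent
\set uid 42
SELECT :uid AS user_id;

-- Server-side: a custom GUC behaves like a session variable from any client
SET myapp.uid = '42';
SELECT current_setting('myapp.uid');  -- returns '42', scoped to this session
```

Unlike Oracle's substitution, the SET/current_setting form lives on the server, so it works from any driver, not just psql.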


Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.