Latest Blog Posts

September PUG recording
Posted by Henrietta Dombrovskaya on 2025-09-17 at 18:55

I am glad we had an option to replay this talk from PG Day Chicago one more time! If you didn’t have a chance to join us, here is the recording – enjoy!

Waiting for PostgreSQL 19 – Add date and timestamp variants of random(min, max).
Posted by Hubert 'depesz' Lubaczewski on 2025-09-17 at 15:18
On 9th of September 2025, Dean Rasheed committed patch: Add date and timestamp variants of random(min, max).

This adds 3 new variants of the random() function:

random(min date, max date) returns date
random(min timestamp, max timestamp) returns timestamp
random(min timestamptz, max timestamptz) returns timestamptz

Each returns a random value x in the … Continue reading "Waiting for PostgreSQL 19 – Add date and timestamp variants of random(min, max)."
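Based on the signatures above, usage on a PostgreSQL 19 development build containing this commit would look something like this (dates chosen arbitrarily for illustration):

```sql
-- Requires a PostgreSQL 19 development build with the commit above.
SELECT random(date '2025-01-01', date '2025-12-31');         -- a random date within 2025
SELECT random(timestamp '2025-01-01 00:00:00',
              timestamp '2025-12-31 23:59:59');              -- a random timestamp
SELECT random(timestamptz '2025-01-01 00:00:00+00',
              timestamptz '2025-12-31 23:59:59+00');         -- a random timestamptz
```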

2025 PostGIS & GEOS Release
Posted by Paul Ramsey in Crunchy Data on 2025-09-17 at 12:00

I am excited to announce PostGIS 3.6 and GEOS 3.14.

The GEOS computational geometry library provides much of the functionality of the PostGIS spatial extension to PostgreSQL, and both are the open source focus of the (Crunchy Data) Snowflake PostGIS team.

Each year we work hard to ensure a release before the autumn PostgreSQL release, so that the latest and greatest PostGIS and GEOS are ready to be packaged with the latest PostgreSQL. All the critical issues are closed and the software is tested and ready to go.

The 2025 release includes:

  • Automated cleaning of dirty polygonal coverages!
  • Many new 3D functions from the CGAL library are exposed in SQL.
  • Enhanced logging of GDAL raster access for easier debugging and development.
  • Simplified handling of PostgreSQL interrupts for better reliability.
  • Numerous fixes for performance and bugs in GEOS and PostGIS.

The GEOS 3.14 release notes and PostGIS 3.6 release notes provide more detail on the other changes over the past year.

What is Coverage Cleaning?

A “coverage” is a set of polygons that uniquely tile an area. There are lots of coverages in real life data. States are a coverage. Counties are a coverage. Parcels are a coverage. For any coverage, every point is either in one-and-only-one member of the coverage, or not in the coverage. No area should be covered twice.

A “classic” problem in spatial data management is “coverage cleaning” – repairing situations where there are gaps and/or overlaps in a polygonal coverage.

Clean coverages are very useful. If you know you have a clean coverage, you can very quickly merge neighboring polygons, or simplify adjacent edges.
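As an example of what a clean coverage buys you, the coverage functions available since PostGIS 3.4 can do exactly those two operations; a sketch, assuming a hypothetical parcels(county_id, geom) table that forms a clean coverage:

```sql
-- Fast merge of neighboring polygons (aggregate, PostGIS >= 3.4):
SELECT county_id, ST_CoverageUnion(geom) AS merged
FROM parcels
GROUP BY county_id;

-- Simplify shared edges consistently, without introducing gaps or overlaps
-- (window function; 10.0 is an illustrative tolerance in layer units):
SELECT ST_CoverageSimplify(geom, 10.0) OVER () AS simplified
FROM parcels;
```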

What has been missing is any way to automatically repair invalid coverages – coverages with overlapping polygons, or with small gaps between polygons. End users have had to export their dirty data and run it through external tools like Mapshaper to get clean coverages.

With the new coverage cleaning functionality w

[...]

Making Postgres scale to zero with CNPG
Posted by Esther Minano in Xata on 2025-09-17 at 10:15
How we built activity-aware Postgres clusters that hibernate automatically and save resources

Contributions for the week 37
Posted by Floor Drees in postgres-contrib.org on 2025-09-17 at 09:32

Miles Richardson presented the WarehousePG project (an open source Greenplum fork) at the Apache Iceberg™ Europe Community Meetup in London, September 8. Watch the recording: youtu.be/lz6w9W1Ubps?si=upETJvoKD_zuHL0R

PGDay UK took place, September 9, at the Cavendish Conference Center.

Organizers:

  • Chris Ellis
  • Devrim Gunduz
  • Dave Page

Talk Selection Committee:

  • Ayse Bilge Ince
  • Dave Pitts
  • Gülçin Yıldırım Jelinek
  • Alastair Turner (chair, voting)

Volunteers:

  • Dave Pitts (room host)
  • Floor Drees (Code of Conduct team, room host)
  • Vik Fearing (registration)
  • Magnus Hagander (registration)

Speakers:

  • Bruce Momjian - Future Postgres Challenges
  • Andrew Farries – Postgres schema migrations using the expand/contract pattern
  • Michael Banck – PostgreSQL Performance Tuning
  • Marta Palandri - Serving Literary Knowledge as Data: Building and Documenting DH APIs with PostgreSQL
  • Ellyne Phneah – Explaining PostgreSQL Like I’m Five (or 85)
  • Jan Karremans – PostgreSQL Certification – The PostgreSQL Way
  • Mehmet Yılmaz – Adaptive Partitioning: Balancing OLTP & OLAP with Citus
  • Jimmy Angelakos – Row-Level Security sucks. Can we make it usable?

The PostgreSQL UG NL meetup took place September 11, at the Lunatech offices in Rotterdam. Stefan Fercot and Chris Ellis delivered talks on PgBouncer and the (mis)use of some Postgres features respectively, and there were lightning talks by Jan Karremans (on the Open Alliance for PostgreSQL Education), Josef Machytka (on the credativ-pg-migrator project), and Kaarel Moppel (on pgbadger alternative pgweasel).

PGDay Lowlands took place September 12 at the Rotterdam Blijdorp Zoo.

Organizers:

  • Boriss Mejias
  • Derk van Veen
  • Floor Drees
  • Nico Rikken
  • Teresa Lopes
  • Sarah Conway
  • Stacy Raspopina

Talk Selection Committee:

  • Teresa Lopes (chair, voting)
  • Carole Arnaud
  • Chris Ellis
  • Gerard Zuidweg
  • Emma Saroyan

Volu

[...]

Reconsidering the interface
Posted by Ants Aasma in Cybertec on 2025-09-17 at 06:00

Recently a great presentation, “1000x: The Power of an Interface for Performance” by Joran Dirk Greef of TigerBeetle, made the rounds. If I may summarize, the gist of the presentation was that the correct programming model can mean many orders of magnitude of performance difference. As the presentation did not explore this, I wanted to see how far we can get by adjusting our programming style on boring old relational databases.

As my aim is not to engage in competitive benchmarking, I will not try to reproduce the exact benchmark used in the talk. Rather, I will use our trusty old pgbench, which implements the workload described in TPC-B. Not at all coincidentally, this is almost exactly Jim Gray’s DebitCredit workload referenced in the talk. Here is the benchmark script used:

\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES 
     (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;

The important tunable parameter here is the scale factor. This determines the size of the database - about 17MB per increment - but also the number of branches. The general wisdom is that one shouldn’t benchmark the database with a low scale factor, because then you get bad performance due to contention. But bad performance due to contention on hot rows is exactly what happens in real world transaction systems. So instead of increasing the scale factor, I’m going to keep it at 10 to have something equivalent to 10% contention, and will look into increasing performance using other means.
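For reference, a pgbench invocation matching this setup might look like the following; the client and thread counts are illustrative, and the script file name tpcb.sql is hypothetical:

```shell
pgbench -i -s 10 bench                            # initialize at scale factor 10 (~170MB)
pgbench -n -c 16 -j 8 -T 60 -f tpcb.sql bench     # run the script above for 60 seconds
```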

The benchmark was done using PostgreSQL 18 running on Ryzen 9 9900X and a Samsung 990 Pro NVMe drive. The only tuning

[...]

PgPedia Week, 2025-09-07
Posted by Ian Barwick on 2025-09-16 at 20:15
PostgreSQL 19 changes this week

  • New GUC debug_print_raw_parse
  • Option max_retention_duration added for subscriptions
  • COPY: hint added for COPY TO when a WHERE clause was provided, noting that COPY (SELECT ... WHERE ...) TO can be used instead

PostgreSQL 18 changes this week

PostgreSQL 18 RC1 has been released ( announcement ). 

more...

How to handle "database is not accepting commands"
Posted by Laurenz Albe in Cybertec on 2025-09-16 at 06:00

Fuzzies (small furry animals) are nice, but make bad DBAs: a fuzzy is sitting in a deck chair, saying that it got the error message "database is not accepting commands" and thought it was time to take off
© Laurenz Albe 2025

If you ever get the error message “database is not accepting commands”, you are dangerously close to transaction ID wraparound. Most PostgreSQL users understand the principle behind transaction ID wraparound, but I recently realized that even many PostgreSQL power users have a wrong idea of how to fix the problem. So I decided to write about it in some more detail.

How do you end up with “database is not accepting commands”?

If you end up with this error, your application will have downtime while you manually repair the problem. In this state, you can still run queries, but you cannot perform any more data modifications. Few people ever get that far, because PostgreSQL has several lines of defense before it has to take this last, invasive measure:

  • if a table contains live rows older than autovacuum_freeze_max_age transactions (200 million by default), PostgreSQL will launch an anti-wraparound autovacuum worker
  • if a table contains live rows older than vacuum_failsafe_age transactions (1.6 billion by default), PostgreSQL will launch an emergency anti-wraparound autovacuum worker that skips the index cleanup step and runs as fast as it can
  • 40 million transactions before transaction ID wraparound, you will get warnings in the log

Only if none of these safeties can prevent the problem will PostgreSQL stop data modifications.
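To see how far along those defenses you are, the standard check is the transaction ID age per database:

```sql
-- Databases closest to wraparound first; compare the age against
-- autovacuum_freeze_max_age (200 million) and vacuum_failsafe_age (1.6 billion).
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```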

There are a few ways to prevent PostgreSQL from fixing the problem by itself:

  • keep a database transaction open forever
  • keep a prepared transaction around without committing it or rolling it back
  • keep an orphaned replication slot with the standby server having hot_standby_feedback enabled
  • have data corruption that makes VACUUM fail
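Each of these culprits can be spotted from the system catalogs; a quick triage might look like this (the one-hour threshold is just an example):

```sql
SELECT pid, state, xact_start                -- long-open transactions
FROM pg_stat_activity
WHERE xact_start < now() - interval '1 hour';

SELECT gid, prepared, owner                  -- forgotten prepared transactions
FROM pg_prepared_xacts;

SELECT slot_name, active, xmin               -- orphaned replication slots
FROM pg_replication_slots;
```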

What is the proper measure against “database is not accepting commands”?

The documentation describes how to fix the problem:

In this condition, any transactions already in progress can continue, but only read-only transactions can be started. Operations that modify data

[...]

pgsql_tweaks 1.0.0 Released
Posted by Stefanie Janine on 2025-09-15 at 22:00

pgsql_tweaks is a bundle of functions and views for PostgreSQL

The source code is available on Codeberg.
You can install the whole package, or just copy what is needed from the source code.

The extension is also available on PGXN.

Version 1.0

I decided to make this the 1.0 Version. The First commit is from 2017-08-11, the extension is now more than eight years old.

News

The project started as a personal repository. When I left GitHub, see Leaving GitHub and GitHub is History for details, I decided to move it to a project on its own on Codeberg.

Website

The website on pgsql-tweaks.org has been setup first with the HTML help page of the project.

Now it is a real website realised with Jekyll. Future changes of pgsql_tweaks will be blogged on pgsql-tweaks.org/blog.
The source code is part of the project on Codeberg.

Also the documentation has been changed from a single Markdown file to a website, also realised with Jekyll on rtfm.pgsql-tweaks.org. RTFM stands for Read The Fine Manual.
The source code is part of the project on Codeberg.

Contribution

This move also opens up possibilities to contribute to the project. The documentation now has detailed information about how to contribute.

Extension Changes

Thanks to Elodie Jex the extension has a shiny logo.

![The pgsql_tweaks logo](https://pgsql-tweaks.org/assets/img/pgsql_tweaks-logo.png)

Created in Schema pgsql_tweaks

Instead of the current default schema, the extension is now created in its own schema. This solves conflicts with other extensions, as has been the case in the past with pgtap, see pgTAP Issue 340 for details.
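After the change, installing and calling the extension schema-qualified looks roughly like this (the function shown is just an illustrative example from the bundle):

```sql
CREATE EXTENSION pgsql_tweaks;              -- objects now land in schema pgsql_tweaks
SELECT pgsql_tweaks.is_date('2025-09-15');  -- illustrative function from the package
```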

Documentation for View pg_unused_indexes

The view has not been documented in previous versions.

New Features

The new features are blogged on pgsql_tweaks Version 1 Released!.

PgPedia Week, 2025-08-31
Posted by Ian Barwick on 2025-09-15 at 10:14

The PostgreSQL code base has been around for almost 3 decades, and as recent commit 710e6c43 shows, there is still some legacy cruft from the initial import/conversion in need of cleanup:

Remove unneeded casts of BufferGetPage() result

BufferGetPage() already returns type Page, so casting it to Page doesn't achieve anything. A sizable number of call sites does this casting; remove that. This was already done inconsistently in the code in the first import in 1996 (but didn't exist in the pre-1995 code), and it was then apparently just copied around.

This is a harmless, cosmetic cleanup, but I thought it was an interesting reminder of the longevity of the code.

PostgreSQL 19 changes this week

Main item of note this week is that the C requirement has been raised from C99 to C11 (commit  f5e0186f ).

PostgreSQL 18 changes this week

The release schedule has been tentatively announced :

18rc1: 2025-09-04
18.0: 2025-09-25

Note that these dates assume no unexpected issues are found with the current code.

more...

PostgreSQL maintenance without superuser
Posted by Radim Marek on 2025-09-13 at 20:45

How many people/services have superuser access to your PostgreSQL cluster(s)? Did you ever ask why your software engineers might need it? Or your BI team? Why do those use cases require the same privileges as someone who can drop your databases?

The answer isn't that these operations are inherently dangerous - it's that PostgreSQL historically offered limited options for operational access, or simply that not enough people are aware of the options. So the common practice is to either grant basic permissions or hand over the keys to the kingdom.

PostgreSQL's built-in predefined roles solve this problem by providing purpose-built privileges for common maintenance tasks. Instead of granting superuser access for routine operations, you can delegate specific capabilities: monitoring teams get comprehensive observability access, backup services get data reading capabilities, and maintenance scripts get precisely the permissions they need, nothing more.

What are Predefined Roles?

PostgreSQL's built-in administrative roles are purpose-built permission sets that solve the superuser dilemma for common maintenance tasks. Out of the box, there are 15 predefined roles that provide granular access to specific operational capabilities without requiring full superuser privileges.

While you can view their list and descriptions in the official documentation, in this article we will explore them a bit more thoroughly and at the same time look into the system catalogs to understand them better. The individual roles can be grouped by their functionality, and most of them are quite easy to grasp, ranging from simple monitoring access to powerful filesystem operations that require careful consideration.
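In practice, delegation is a plain GRANT of the predefined role, and the full list can be read straight from the catalog (role names for the grantees are hypothetical):

```sql
GRANT pg_monitor TO monitoring_user;        -- observability without superuser
GRANT pg_read_all_data TO bi_readonly;      -- PostgreSQL 14+

-- list all predefined roles:
SELECT rolname FROM pg_roles WHERE rolname LIKE 'pg\_%';
```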

Data Access Role

  • pg_database_owner - Database-specific ownership (special case)
  • pg_read_all_data - Read access to all tables, views, sequences
  • pg_write_all_data - Write access to all tables, views, sequences

Monitoring & Observability

  • pg_monitor - Which is actually monitor
[...]

Hacking Workshop for October 2025
Posted by Robert Haas in EDB on 2025-09-12 at 13:25

Next month, I'll be hosting 2 or 3 discussions of Thomas Munro's talk, Investigating Multithreaded PostgreSQL, given at 2025.pgconf.dev (talk description here). If you're interested in joining us, please sign up using this form and I will send you an invite to one of the sessions. Thanks to Thomas for agreeing to attend the sessions. As usual, nobody is too inexperienced to join us, and nobody is too experienced. We have everyone from total newcomers to interested committers.

Get Excited About Postgres 18
Posted by Elizabeth Garrett Christensen in Crunchy Data on 2025-09-12 at 12:00

Postgres 18 will be released in just a couple weeks! Here’s some details on the most important and exciting features.

Asynchronous I/O

Postgres 18 is adding asynchronous I/O. This means faster reads for many use cases. This is also part of a bigger series of performance improvements planned for future Postgres, part of which may be multi-threading. Expect to see more on this in coming versions.

What is async I/O?

When data isn’t in the shared memory buffers already, Postgres reads from disk, and I/O is needed to retrieve data. Synchronous I/O means that each individual request to the disk is waited on for completion before moving on to something else. For busy databases with a lot of activity, this can be a bottleneck.

Postgres 18 will introduce asynchronous I/O, allowing workers to optimize idle time and improve system throughput by batching reads. Currently, Postgres relies on the operating system for intelligent I/O handling, expecting OS or storage read-ahead for sequential scans and using features like Linux's posix_fadvise for other read types like Bitmap Index Scans. Moving this work into the database with asynchronous I/O will provide a more predictable and better-performing method for batching operations at the database level. Additionally, a new system view, pg_aios, will be available to provide data about the asynchronous I/O system.
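On a PostgreSQL 18 instance, the new machinery is visible through the relevant setting and the view mentioned above:

```sql
SHOW io_method;          -- 'worker' (the default), 'io_uring' (Linux), or 'sync'
SELECT * FROM pg_aios;   -- in-flight asynchronous I/O handles
```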

Postgres writes will continue to be synchronous - since this is needed for ACID compliance.

If async I/O seems confusing, think of it like ordering food at a restaurant. In a synchronous model, you would place your order and stand at the counter, waiting, until your food is ready before you can do anything else. In an asynchronous model, you place your order, receive a buzzer, and are free to go back to your table and chat with friends until the buzzer goes off, signaling that your food is ready to be picked up.

Async I/O will affect:

  • sequential scans
  • bitmap heap scans (following the bitmap index scan)
  • some maintenance opera
[...]

The Cost of TDE and Checksums in PGEE
Posted by Christoph Berg in Cybertec on 2025-09-12 at 06:00

It's been a while since the last performance check of Transparent Data Encryption (TDE) in Cybertec's PGEE distribution - that was in 2016. Of course, the question is still interesting, so I did some benchmarks.

Since the difference is really small between running without any extras, with data checksums turned on, and with both encryption and checksums turned on, we need to pick a configuration that will stress-test these features the most. So in the spirit of making PostgreSQL deliberately run slow, I went with only 1MB of shared_buffers with a pgbench workload of scale factor 50. The 770MB of database size will easily fit into RAM. However, having such a small buffer cache setting will cause a lot of cache misses with pages re-read from the OS disk cache, checksums checked, and the page decrypted again. To further increase the effect, I ran pgbench --skip-some-updates so the smaller, in-cache-anyway pgbench tables are not touched. Overall, this yields a pretty consistent buffer cache hit rate of only 82.8%.

Here are the PGEE 17.6 tps (transactions per second) numbers averaged over a few 1-minute 3-client pgbench runs for different combinations of data checksums on/off, TDE off, and the various supported key bit lengths:

           no checksums          data checksums
no TDE     2455.6  (100.00 %)    2449.7  (99.76 %)
128 bits   2440.9  (99.40 %)     2443.3  (99.50 %)
192 bits   2439.6  (99.35 %)     2446.1  (99.61 %)
[...]

A Trip To Vienna With Surprises
Posted by Christoph Berg in Cybertec on 2025-09-11 at 05:43

My trip to pgday.at started Wednesday at the airport in Düsseldorf. I was there on time, and the plane started with an estimated flight time of about 90 minutes. About half an hour into the flight, the captain announced that we would be landing in 30 minutes - in Düsseldorf, because of some unspecified technical problems. Three hours after the original departure time, the plane made another attempt, and we made it to Vienna.

On the plane I had already met Dirk Krautschick, who had the great honor of bringing Slonik (in the form of a big extra bag) to the conference, and we took a taxi to the hotel. In the taxi, the next surprise happened: Hans-Jürgen Schönig unfortunately couldn't make it to the conference, and his talks had to be replaced. I had submitted a talk to the conference, but it was not accepted, and not even queued on the reserve list. But two speakers on the reserve list had cancelled, and another was already giving a talk in parallel to the slot that had to be filled, so Pavlo messaged me asking if I could give the talk - well, of course I could. I didn't have any specific plans for the evening yet, but suddenly I was a speaker, so I joined the folks going to the speakers dinner at the Wiener Grill Haus two corners from the hotel. It was a very nice evening, chatting with a lot of folks from the PostgreSQL community that I had not seen for a while.

Thursday was the conference day. The hotel was a short walk from the venue, the Apothekertrakt in Vienna's Schloss Schönbrunn. The courtyard was already filled with visitors registering for the conference. Since I originally didn't have a talk scheduled, I had signed up to volunteer for a shift as room host. We got our badge and swag bag, and I changed into the "crew" T-shirt.

The opening and sponsor keynotes took place in the main room, the Orangerie. We were over 100 people in the room, but apparently still not enough to really fill it, so the acoustics with some echo made it a bit difficult to understand everything. I hope that part ca

[...]

pgstream v0.8.1: hstore transformer, roles snapshotting, CLI improvements and more
Posted by Ahmet Gedemenli in Xata on 2025-09-10 at 14:15
Learn how pgstream v0.8.1 transforms hstore data and improves snapshot experience with roles snapshotting and excludedtables option

My experience at PGIbz 2024 by Jesús Espino
Posted by Álvaro Hernández in Fundación PostgreSQL on 2025-09-09 at 14:14

This article has been published with explicit permission from Jesús Espino. It's a copy of a post originally published on his LinkedIn account on September 16, 2024.

This has been my first time talking at a Postgres conference, and it has been amazing. I must admit that I felt really small there, surrounded by speakers from projects like Omnigres, Timescale, OnGres, or Yugabyte. Still, I think there was value in my talk about understanding the inner workings of a SQL query inside Postgres.

One important thing to notice is that Ibiza is a great place for a conference. It is a gorgeous environment, very well connected internationally, and with a very relaxed vibe. It sounds like Ibiza is for a party, but I think that is a huge misrepresentation of what Ibiza is.

But let's jump into the conference itself. It was a relatively small conference regarding people—I would say less than 100 (I don't know the exact numbers). What was amazing was having more opportunity to talk with almost everybody. Also, the conference was a single-track conference with pros and cons, but I have to admit that I enjoyed almost every talk, so my main concern with single-track conferences is gone.

The first day

We started the conference with a quick introduction to the event by Álvaro Hernández Tortosa and Bart Farrell (who was also the MC and photographer for the conference).

The first day of the conference started with Tania Ash and Andrei Zaichikov sharing their experience with massive migrations to PostgreSQL in big regulated organizations. They talked about the methodology they used there (Manada). They also discussed the technical problem and the organizational part and how important it is to tackle it all together.

The second one was from Mayuresh B., who discussed the possible tricky situations when migrating from Oracle to PostgreSQL. For example, Oracle RAC is not equivalent to Patroni. Or the fact that Oracle has some subquery caching that doesn't exist in PostgreSQL can catch you off-guard after wha

[...]

3 Features I am Looking Forward to in PostgreSQL 18
Posted by Umair Shahid in Stormatics on 2025-09-09 at 09:15

It is that time of the year again. The first release candidate of PostgreSQL 18 is out, and things look promising. We should expect General Availability in the next 2-4 weeks. 

Exciting times! 

Over the past many years and as many releases, the PostgreSQL community has done a phenomenal job of being disciplined about the annual release process. And we have done so averaging 150+ new features with each release!

For the upcoming v18, here are the top three features I am most excited about:

#1 – Asynchronous I/O

PostgreSQL 18 introduces a significant under‑the‑hood change with its Asynchronous I/O (AIO) subsystem, which fundamentally alters how the database performs disk reads.

What asynchronous I/O does

Historically, PostgreSQL issued synchronous disk reads: each backend process would call the operating system, wait for the data to arrive, and then continue processing. This “one book at a time” model left CPUs idle whenever storage was slow (especially on network‑attached disks), limiting throughput. The new subsystem lets a backend queue multiple read requests at once, allowing other work to continue while data is being fetched. When the requested blocks are ready, PostgreSQL copies them directly into shared buffers, eliminating reliance on the kernel’s readahead heuristics.

How it works

1. io_method – A new server setting chooses the implementation:

  • sync replicates the old synchronous behaviour.
  • worker uses dedicated I/O worker processes. The main backend enqueues read requests and continues execution while these workers interact with the kernel. You can configure the number of workers via io_workers.
  • io_uring (Linux only) leverages the kernel’s high‑performance io_uring API to submit and complete I/O requests without separate worker processes.
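A postgresql.conf sketch for these settings (values illustrative; io_workers defaults to 3):

```
io_method = 'worker'   # or 'io_uring' on Linux; 'sync' restores the old behaviour
io_workers = 3         # only consulted when io_method = 'worker'
```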

2. Tuning and monitoring – New variables io_combine_limit and io_max_combine_limit determine how many adjacent

[...]

Can Collations Be Used Over citext?
Posted by Umut TEKIN in Cybertec on 2025-09-09 at 04:00

Introduction

Recently, I read Laurenz Albe's blog about case insensitive string search. He recommended case insensitive collations, saying "it won't be any more expensive (and maybe more intuitive) than the current techniques of using upper() or the citext extension are today". In other words, a custom collation can be used instead of citext to achieve case insensitivity with a lower performance penalty. Today we will be testing it.

Test Setup

CREATE COLLATION case_insensitive_accent_sensitive (
    provider = icu,
    locale = 'und-u-ks-level2',
    deterministic = false
);

CREATE TABLE demo1 (
    word TEXT COLLATE case_insensitive_accent_sensitive
);

We created the collation as case insensitive but accent sensitive, because citext works the same way.

create extension citext;

CREATE TABLE demo2 (
    word citext
);

INSERT INTO demo1 VALUES ('apple'), ('APPLE'), ('Åpple'), ('Apple');

INSERT INTO demo2 VALUES ('apple'), ('APPLE'), ('Åpple'), ('Apple');

Then duplicate all these records until we have ~32m rows for both tables.

insert into demo1 select * from demo1;
insert into demo1 select * from demo1;
. . .

select count(*) from demo2;
count
33554432
(1 row)

select count(*) from demo1;
count
33554432
(1 row)

create index x on demo1(word);
create index y on demo2(word);

update demo1 set word = 'applex' where ctid = '(0,1)';
update demo2 set word = 'applex' where ctid = '(0,1)';

Comparison For Different Operators

"=" Operator

The following results show the average execution time after executing them 20 times:

explain(analyze) select * from demo1 where word  = 'applex';
                                                   QUERY PLAN                                                    
-----------------------------------------------------------------------------------------------------------------
 Index Only Scan using x on demo1  (cost=0.56..4.58 rows=1 width=6) (actual time=0.092..0.096 rows=1.00 loops=1)
   Index Cond: (word = 'applex'::text)
   Heap Fetches: 1
   Index Searches: 1
   Buffers: sh
[...]

pgEdge goes Open Source
Posted by Dave Page in pgEdge on 2025-09-08 at 17:15

In November last year, after nearly two decades at my previous gig, I came to the conclusion that I didn’t want to work at what seemed to be rapidly becoming an AI-focused company and moved to pgEdge, where the focus is well and truly on distributed PostgreSQL and Postgres generally. Distributed databases (and particularly Postgres of course) have always been a passion of mine – even being a key topic of my master’s dissertation many years ago.

Moving to pgEdge was a breath of fresh air. Not only did I get to work with some outstanding engineers and other folks on Postgres, but a good number of them were friends and colleagues that I’d worked with in the past. I’ve since had the privilege of hiring even more colleagues from the Postgres world, and look forward to expanding the team even further with more fantastic engineers from the PostgreSQL and wider database communities.

There was a wrinkle in my ideal view of how things should be though - the key components of pgEdge were “source available” and not Open Source. That means the source code to our replication engine known as Spock and key extensions such as Snowflake (which provides cluster-wide unique sequence values) and Lolor (which enables logical replication of large objects) had a proprietary licence – known as the pgEdge Community License – which allowed you to view and modify the source code, but limited how you could actually use it. Well, I’m pleased to be able to say that that is no longer the case.
All the core components of pgEdge Distributed Postgres, along with any other pgEdge repositories that previously used the pgEdge Community License, have now been re-licenced under the permissive PostgreSQL License, as approved by the Open Source Initiative!

We’re proud to be able to make this change to support Open Source software and contribute to the PostgreSQL ecosystem, and I’m looking forward to seeing us continue to expand our contributions as much as we can. So, if you want to try out multimaster distributed Postgres, and get involved with the development

[...]

PostgreSQL Case-Insensitive Search: Handling LIKE with Nondeterministic Collations
Posted by Deepak Mahto on 2025-09-06 at 08:43

Case-insensitive search is one of the most common issues I encounter when helping customers migrate from Oracle or SQL Server to PostgreSQL. Unlike Oracle (with NLS_SORT) or SQL Server (with case-insensitive collations), PostgreSQL does not natively support case-insensitive search. Developers typically rely on workarounds such as:

  • Wrapping comparisons with LOWER() or UPPER()
  • Leveraging the citext data type (not recommended)
  • Leveraging ICU-based nondeterministic collations

Why is citext not recommended? See the reference: https://www.postgresql.org/docs/current/citext.html

In this post, I’ll focus on nondeterministic collations and their behavior with LIKE or ILIKE operators.

NonDeterministic Collations

In PostgreSQL, a nondeterministic collation lets text comparisons ignore case/accents, enabling flexible searches like case-insensitive equality.

Creating a sample collation to support case-insensitive search:

CREATE COLLATION collate_ci (provider = icu, deterministic = false, locale = 'en-US-u-ks-level2');

Let us create a sample table with a column using the collate_ci collation.

create table testcollation(col1 text collate collate_ci); 
insert into testcollation values ('a');
insert into testcollation values ('A');

select * from testcollation where col1 = 'a';
 col1 
------
 a
 A
(2 rows)

Using a nondeterministic collation helps us implement case-insensitive comparisons with the equality operator. But let's see how it works with the LIKE or ILIKE operators for wildcard-based searches.


→ Handling Wildcard Operator Failure (LIKE)

Using nondeterministic collations helps us achieve case-insensitive search, but when used with wildcard operators (LIKE or ILIKE) it fails, as this is not supported up to PostgreSQL 17.
postgres=> select * from testcollation where col1 like 'a%';
ERROR:  nondeterministic collations are not supported for LIKE

PostgreSQL 17 does not support LIKE/ILIK
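As a sketch of one common workaround (assuming the testcollation table above), the pattern match can be allowed by overriding the column with a deterministic collation and using ILIKE for case folding:

```sql
-- Sketch of a workaround: override the nondeterministic collation with a
-- deterministic one ("C") so the pattern match is permitted, then use
-- ILIKE for case-insensitive matching (ASCII case folding only).
SELECT * FROM testcollation
WHERE col1 COLLATE "C" ILIKE 'a%';
-- Matches both 'a' and 'A'
```

Note that with the "C" collation the case folding is ASCII-only, so this sketch does not fully replicate ICU-aware case insensitivity for non-ASCII text.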

[...]

Contributions for the week 36 (2025-09-01 - 2025-09-06)
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-09-06 at 08:25

PGDay Austria took place on September 4 in Vienna.

Organised by

  • Patricia Horvath
  • Ilya Kosmodemiansky
  • Hans-Jürgen Schönig
  • Cornelia Biacsics

CfP Committee

  • Rafia Sabih
  • Oleksii Vasiliev
  • Ryan Booz
  • Pavlo Golub (non voting member)

Speakers

  • Marko Coha
  • Jan Karremans
  • Tanmay Sinha
  • Álvaro Herrera
  • Teresa Lopes
  • Peter Hofer
  • Dirk Krautschick
  • Ants Aasma
  • Federico Campoli
  • Emre Baransel
  • Utku Demir
  • Gabriele Quaresima
  • Gülçin Yıldırım Jelinek
  • Christoph Berg
  • Teresa Lopes
  • Derk van Veen
  • Mayuresh Bagayatkar
  • Patrick Lauer
  • Ryan Booz
  • Floor Drees

Lightning Talk Speakers

  • Pavlo Golub
  • Federico Campoli
  • Sovenath Shaw
  • Klaus Aschenbrenner

A "TPC-C"-like "extension-pack" for pgbench
Posted by Kaarel Moppel on 2025-09-05 at 21:00
TPC-C is supposedly the most objective performance measurement of OLTP database systems… and I’ve used it a fair bit over the years as well… yet, running it on Postgres is sadly not exactly as smooth as things typically are with Postgres 😔 At least compared with the wonderful...

Postgres Ibiza 2025: October 15-17th
Posted by Álvaro Hernández in Fundación PostgreSQL on 2025-09-05 at 14:01

Postgres Ibiza 2025: October 15-17th

Postgres Ibiza 2025 is back in October. A three-day event split into:

  • Conference: 2 days, October 15th and 16th.
  • Unconference: 1 day, October 17th, following the spirit of the best open spaces.

The event will be held in the same conference venue as the last editions, the spectacular Palacio de Congresos de Ibiza which happens to be a few steps away from many hotels… and the beach!

Postgres Ibiza 2025 builds on the success of the last editions (2024, 2023 and 2019, before the pandemic) and brings a refreshing, open and diverse Postgres conference, where all ideas can be shared, the future of Postgres discussed, and stronger connections made in a truly unique environment.

Call For Papers

Call For Papers is still open, until September 10th. The topic selection is quite open, but here are a few suggested themes to consider:

  • Core Innovations: Cutting-edge advancements in Postgres core development and key technologies.
  • Creative Extensions: New and innovative extensions that enhance Postgres.
  • Containers & Kubernetes: Best practices for running Postgres in containerized environments.
  • Compatible Forks: Insights into fully Postgres-compatible variants and forks.
  • Cloud Databases: Exploring the benefits of cloud databases that support Postgres.
  • Protocol Compatibility: Databases that utilize Postgres-compatible protocols.
  • Innovative Projects: Any unique and novel projects related to Postgres.
  • Database Comparisons: Demonstrate what other databases do better and how they compare to Postgres.

If you have any questions about the CFP or simply want to get in touch with the Committee, please contact us. If you are unsure about submitting a talk, it’s your first time, or just need some help, we will be happy to assist and provide guidance.

Call for Sponsors

Postgres Ibiza is organized by the non-profit Fundación PostgreSQL, and is a PostgreSQL Community Event. Every sponsorship or donation is key for th

[...]

Best Practices for Achieving High Availability in PostgreSQL with Patroni and Pgpool
Posted by semab tariq in Stormatics on 2025-09-04 at 17:09

I recently completed a training session for one of our customers on best practices for achieving HA clusters with Patroni and Pgpool in PostgreSQL. During the sessions, many points were discussed, but I’d like to highlight a few that should be carefully considered when designing HA clusters.

1. Use HTTPS for Patroni and ETCD Communication

It’s important to secure communication between data nodes and ETCD nodes to prevent man-in-the-middle attacks. Using HTTPS is recommended for this purpose. You can generate certificates and configure them on all nodes to enable secure HTTPS communication.
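As a sketch, a self-signed CA and a per-node certificate can be generated with openssl (the common names and file names below are placeholders, not from the original post):

```shell
# Create a self-signed CA certificate valid for 10 years
openssl req -x509 -newkey rsa:4096 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=patroni-ca"

# Create a key and certificate signing request for one node
openssl req -newkey rsa:4096 -nodes \
  -keyout node1.key -out node1.csr -subj "/CN=node1"

# Sign the node certificate with the CA
openssl x509 -req -in node1.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out node1.crt -days 3650
```

The resulting CA and node certificates can then be referenced from the etcd and Patroni configuration on each node.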

2. Use ETCD3 in Patroni.yml File

ETCD is a distributed and reliable key-value store that holds critical data for a distributed system, including the current state of the cluster and details about the primary node. In the patroni.yml file, we must specify the location of the ETCD node. Since ETCD version 2 is no longer supported, we recommend using ETCD version 3, which is more stable, production-ready, and improves overall reliability.
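A minimal sketch of the corresponding patroni.yml fragment, using the etcd3 section for the v3 API (host addresses and certificate paths are placeholders):

```yaml
# patroni.yml (fragment): point Patroni at etcd via the v3 API over HTTPS
etcd3:
  hosts:
    - 10.0.0.11:2379
    - 10.0.0.12:2379
    - 10.0.0.13:2379
  protocol: https
  cacert: /etc/patroni/ca.crt
  cert: /etc/patroni/patroni.crt
  key: /etc/patroni/patroni.key
```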

3. Use scram-sha-256 Authentication method for pg_hba.conf file

Using the SCRAM-SHA-256 authentication method in the pg_hba.conf file is highly recommended, as it provides stronger security compared to MD5. SCRAM-SHA-256 protects against brute-force and replay attacks by using salted and iterated password hashing, making stored credentials much harder to crack.
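A minimal pg_hba.conf sketch (the network ranges and role names are placeholders); note that password_encryption should also be set to scram-sha-256 in postgresql.conf so that new passwords are hashed accordingly:

```
# TYPE    DATABASE     USER        ADDRESS         METHOD
host      all          all         10.0.0.0/24     scram-sha-256
hostssl   replication  replicator  10.0.0.0/24     scram-sha-256
```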

4. Run the Latest – 1 Version of PostgreSQL

Running the latest minus one version of PostgreSQL is generally recommended for production environments. This ensures that you benefit from recent performance improvements, security patches, and stability enhancements while avoiding the risks that may come with very new major releases.

5. Should We Install All Software on Data Nodes or Separate Nodes?

There are pros and cons to both approaches, and the choice depends on

[...]

PGConf.EU 2025 - PostgreSQL Conference Europe 2025 Schedule Published
Posted by Karen Jex in PostgreSQL Europe on 2025-09-04 at 15:33

The programme for PGConf.EU 2025, taking place on 21–24 October in Riga, is now live!

Discover the lineup of world-class PostgreSQL speakers and the exciting topics that await you on the schedule for this year.

This year’s conference also features a Community Events Day on Tuesday, with limited spaces available.

We look forward to seeing you in Riga in October!

New Presentation
Posted by Bruce Momjian in EDB on 2025-09-04 at 15:15

I just gave a new presentation at PGConf.Brazil titled Three Key Attributes of Postgres. It is similar to my existing The Postgres Trajectory talk, but with more of a business and global focus. It also has a psql output slide full of puns at the beginning, with hints to the puns at the end. I will be presenting my Databases in the AI Trenches talk tomorrow.

FOSSY 2025 and RAGtime with Postgres
Posted by Jimmy Angelakos on 2025-09-04 at 13:57

Obligatory photo from FOSSY 2025

I've just returned from the rapidly growing Free and Open Source Yearly conference, or FOSSY 2025, which took place from July 31st - August 3rd, 2025, in Portland, Oregon. Organized by the incredible team at the Software Freedom Conservancy, the event was an awesome gathering of minds dedicated to the principles of free and open-source software.

I gladly took the opportunity to connect with the community again, and remind myself of the passion and dedication that fuels FOSS. The work that the Software Freedom Conservancy does is important for the health and sustainability of the ecosystem, and it was inspiring to be surrounded by people who share that vision.

PostgreSQL @ FOSSY 2025

PostgreSQL was also present with a dedicated booth (thanks to PgUS), and of course Postgres-related talks. I was thrilled to contribute my own talk, about using Postgres for AI work:

Exploration: CNPG Kubectl Plugin
Posted by Umut TEKIN in Cybertec on 2025-09-04 at 04:30

Introduction

We have explored how to create a cluster, take backups, connect to the cluster, and run psql commands in our CNPG series. However, one might feel overwhelmed by those day-to-day operations. That is why CNPG provides a kubectl plugin. CloudNativePG's plugin enriches kubectl with a set of PostgreSQL-focused commands, making it easier to inspect clusters, trigger backups, promote a new instance, run pgbench, and run psql commands without leaving the existing terminal. Even though it is a pretty simple and straightforward topic, it is important for the completeness of our CNPG series.

Plugin Installation

There are different ways to install the plugin, but I found using the install script easiest:

curl -sSfL \
  https://github.com/cloudnative-pg/cloudnative-pg/raw/main/hack/install-cnpg-plugin.sh | \
  sudo sh -s -- -b /usr/local/bin
cloudnative-pg/cloudnative-pg info checking GitHub for latest tag
cloudnative-pg/cloudnative-pg info found version: 1.27.0 for v1.27.0/linux/x86_64
cloudnative-pg/cloudnative-pg info installed /usr/local/bin/kubectl-cnpg

Commands

The plugin provides a variety of commands. "--help" is useful for exploring the available commands. For example:

kubectl cnpg --help

If help is needed for a specific command:

kubectl cnpg promote --help

Install

This command is used to generate a YAML manifest for installing the operator. In this way, we can modify the default settings of the operator, such as the number of replicas and the installation namespace.

kubectl cnpg install generate -n cnpg-system --replicas 3 --watch-namespace "$my_namespace_2_watch" > install_operator.yaml

Status

The status command shows us the current status of the respective cluster:

kubectl cnpg status cluster-example-backup
Cluster Summary
Name                 cnpg-system/cluster-example-backup
System ID:           7545128324982542354
PostgreSQL Image:    ghcr.io/cloudnative-pg/postgresql:17.5
Primary instance:    cluster-example-backup-1
Primary s
[...]

From DBA to DB Agents
Posted by Gülçin Yıldırım Jelínek in Xata on 2025-09-03 at 15:30
Discover how a PostgreSQL DBA’s decade of expertise evolved into designing Xata’s AI-powered Postgres agent, automating monitoring and observability.

Top posters

Number of posts in the past two months

Top teams

Number of posts in the past two months

Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.