Postgres Ibiza 2025 is back in October, a three-day event split into:
The event will be held in the same conference venue as the last editions, the spectacular Palacio de Congresos de Ibiza, which happens to be a few steps away from many hotels… and the beach!
Postgres Ibiza 2025 builds on the success of the last editions (2024, 2023 and 2019, before the pandemic) and brings a refreshing, open and diverse Postgres conference, where all ideas can be shared, the future of Postgres can be discussed, and stronger connections can be made in a truly unique environment.
Call For Papers is still open, until September 10th. The topic selection is quite open, but here are a few suggested themes to consider:
If you have any questions about the CFP or simply want to get in touch with the Committee, please contact us. If you are unsure about submitting a talk, it’s your first time, or just need some help, we will be happy to assist and provide guidance.
Postgres Ibiza is organized by the non-profit Fundación PostgreSQL, and is a PostgreSQL Community Event. Every sponsorship or donation is key for th
[...]The programme for PGConf.EU 2025, taking place on 21–24 October in Riga, is now live!
Discover the lineup of world-class PostgreSQL speakers and the exciting topics that await you on this year’s schedule.
This year’s conference also features a Community Events Day on Tuesday, with limited spaces available.
We look forward to seeing you in Riga in October!
I just gave a new presentation at PGConf.Brazil titled Three Key Attributes of Postgres. It is similar to my existing The Postgres Trajectory talk, but with more of a business and global focus. It also has a psql output slide full of puns at the beginning, with hints to the puns at the end. I will be presenting my Databases in the AI Trenches talk tomorrow.
I've just returned from the rapidly growing Free and Open Source Yearly conference, or FOSSY 2025, which took place from July 31st - August 3rd, 2025, in Portland, Oregon. Organized by the incredible team at the Software Freedom Conservancy, the event was an awesome gathering of minds dedicated to the principles of free and open-source software.
I gladly took the opportunity to connect with the community again, and remind myself of the passion and dedication that fuels FOSS. The work that the Software Freedom Conservancy does is important for the health and sustainability of the ecosystem, and it was inspiring to be surrounded by people who share that vision.
PostgreSQL was also present with a dedicated booth (thanks to PgUS), and of course Postgres-related talks. I was thrilled to contribute my own talk about using Postgres for AI work: using the pgvector extension to store embeddings and implement powerful, context-aware Retrieval-Augmented Generation (RAG) pipelines, transforming the database into a core component of an intelligent advisor stack.
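As a rough illustration of the idea (not code from the talk; the table and column names are made up), storing and querying embeddings with pgvector looks roughly like this:

-- hypothetical sketch: a documents table with an embedding column
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    content   text NOT NULL,
    embedding vector(1536)   -- dimension depends on the embedding model
);

-- retrieve the 5 documents closest to a query embedding
-- (:query_embedding is supplied by the application)
SELECT content
FROM documents
ORDER BY embedding <=> :query_embedding
LIMIT 5;

The retrieved rows are then fed to the language model as context, which is what makes the pipeline "retrieval-augmented".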
We have explored how to create a cluster, take backups, connect to the cluster, and run psql commands in our CNPG series. However, one might feel overwhelmed by those day-to-day operations. That is why CNPG provides a kubectl plugin. CloudNativePG's plugin enriches kubectl with a set of PostgreSQL-focused commands, making it easier to inspect clusters, trigger backups, promote a new instance, run pgbench, and run psql commands without leaving your existing terminal. Even though it is a pretty simple and straightforward topic, it is important for the completeness of our CNPG series.
There are different ways to install the plugin, but I found using the install script easiest:
curl -sSfL \
  https://github.com/cloudnative-pg/cloudnative-pg/raw/main/hack/install-cnpg-plugin.sh | \
  sudo sh -s -- -b /usr/local/bin

cloudnative-pg/cloudnative-pg info checking GitHub for latest tag
cloudnative-pg/cloudnative-pg info found version: 1.27.0 for v1.27.0/linux/x86_64
cloudnative-pg/cloudnative-pg info installed /usr/local/bin/kubectl-cnpg
The plugin provides a variety of commands. "--help" is useful for exploring the available commands. For example:
kubectl cnpg --help
If help is needed for a specific command:
kubectl cnpg promote --help
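As a concrete illustration (the cluster and instance names here are hypothetical), promoting a specific replica to primary looks like this:

kubectl cnpg promote cluster-example cluster-example-2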
This command is used to generate a YAML manifest for installing the operator. This way, we can modify the operator's default settings, such as the number of replicas and the installation namespace.
kubectl cnpg install generate -n cnpg-system --replicas 3 --watch-namespace "$my_namespace_2_watch" > install_operator.yaml
The status command shows us the current status of the respective cluster:
kubectl cnpg status cluster-example-backup

Cluster Summary
Name                cnpg-system/cluster-example-backup
System ID:          7545128324982542354
PostgreSQL Image:   ghcr.io/cloudnative-pg/postgresql:17.5
Primary instance:   cluster-example-backup-1
Primary s[...]
I recently helped a customer with a slow query. Eventually, an ANALYZE on a partitioned table was enough to fix the problem. This came as a surprise to the customer, since autovacuum was enabled. So I decided to write an article on how PostgreSQL collects partitioned table statistics and how they affect PostgreSQL's estimates.
I am talking about optimizer statistics as defined in my article about preserving statistics during an upgrade. These are the data that the query optimizer uses to estimate the cost of execution plans. You can view the optimizer statistics for table columns in the system view pg_stats.
In PostgreSQL, a partitioned table is a logical construct, somewhat similar to a view. A partitioned table does not hold any data itself. The data reside in the partitions of the partitioned table. Inserting a row into a partitioned table actually inserts the row into the appropriate partition. In PostgreSQL, partitions are ordinary tables that just have a “side job” as a partition of a partitioned table.
When I talk about partitioned table statistics, I mean the optimizer statistics for the partitioned table itself, as opposed to the optimizer statistics of its partitions.
To demonstrate statistics on a partitioned table, I am going to use a simple example:
CREATE TABLE tab (
    pk integer NOT NULL
) PARTITION BY LIST ((pk % 2));

CREATE UNLOGGED TABLE tab_even PARTITION OF tab FOR VALUES IN (0);
CREATE UNLOGGED TABLE tab_odd PARTITION OF tab FOR VALUES IN (1);
I used unlogged tables for better performance, because we don't need crash recovery. Each of the partitions will receive half of the rows I INSERT into the table:
INSERT INTO tab SELECT * FROM generate_series(1, 1000000);
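For reference, and anticipating the point of this article, here is a minimal sketch (my addition, not from the original text) of collecting and inspecting the partitioned table's own statistics manually:

-- collect optimizer statistics for the partitioned table and its partitions
ANALYZE tab;

-- inspect the column statistics stored for the partitioned table itself
SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'tab';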
With the default configuration, the autovacuum launcher will sleep for up to a minute before becoming active. Then it will p
[...]The PostGIS Team is pleased to release PostGIS 3.6.0! Best Served with PostgreSQL 18 Beta3 and recently released GEOS 3.14.0.
This version requires PostgreSQL 12 - 18beta3, GEOS 3.8 or higher, and Proj 6.1+. To take advantage of all features, GEOS 3.14+ is needed. To take advantage of all SFCGAL features, SFCGAL 2.2.0+ is needed.
Cheat Sheets:
This release includes bug fixes since PostGIS 3.5.3 and new features.
Are you in the Pacific Northwest?
Want to learn more about topics related to this blog? At 3:15pm on Thursday, November 13, at KubeCon Atlanta, I’ll be speaking with Leonardo Cecchi about distributed systems theory applied to standard open source postgres cluster reconfigurations.
Jepsen is a testing framework for distributed systems that verifies safety guarantees by subjecting clusters to faults (e.g., network partitions, crashes, and failovers) and checking for consistency violations. In this lab exercise, we will use Jepsen to illustrate data loss when synchronous replication is disabled and cluster reconfigurations occur involving postgres failover.
Postgres synchronous replication ensures that transactions are only committed once changes are written to both the primary and a synchronous standby, which is crucial for preventing data loss during automated failovers by guaranteeing no acknowledged transaction is lost if the primary crashes.
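For context, a minimal sketch of the standard postgresql.conf settings involved (the standby names here are hypothetical):

# a commit only succeeds once WAL is flushed on the primary and on
# at least one of the listed synchronous standbys
synchronous_commit = on
synchronous_standby_names = 'FIRST 1 (standby1, standby2)'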
In addition to crashes, synchronous replication is (and always has been) the only protection against data loss during network partitions or split brains with Postgres – across all Postgres HA frameworks. There was a little confusion recently around Patroni’s “failsafe” feature. Failsafe is a useful feature but it does not prevent data loss during network partitions – Patroni relies on postgres sync replication to prevent data loss during network partitions, sim
[...]Just recently, we announced the production-ready release of pg_tde, bringing open source Transparent Data Encryption (TDE) to PostgreSQL. Now, I may have spoiled the fun a little with the title, but take a look at the word puzzle below—can you guess the announcement? Bear with me… and my sense of humor, which might be a bit too dry for some :)
At the time of the TDE GA release, one of the most common questions was:
In the previous post, I discussed GSoC, my road to acceptance at PostgreSQL, and my project deliverables.
Today, I will share my experience with the PostgreSQL community, and the work I did.
The first phase of GSoC is community bonding, where the mentee gets to know their organization's community.
For Postgres, I started searching for the communication channels used by contributors. I found out about the Postgres mailing lists, which unfortunately I hadn’t been involved in much until now, but I also discovered the PostgreSQL Hacking Discord channel and the Hacking Workshops they host every month to study and discuss PostgreSQL topics, which have been a great learning resource for me ever since.
Clearly it's summer time in the Northern Hemisphere and we haven't been as punctual with our recognition posts! We'll resume a more regular schedule, promise!
On August 28, Peter Zaitsev and Emma Saroyan organised PG Armenia x Percona University.
Speakers:
* Peter Zaitsev
* Andrey Borodin
* Emma Saroyan
* Konstantin Trushin
* Dmitrii Kochetov
* Alexey Palazhchenko
* Alex Demidoff
* Eugene Klimov
* Kim Saroyan
* Alexander Zaitsev
* Aya Guseinova
* Piotr Kalmukhyan
Hyderabad Postgres Days 2025 took place on August 21-22.
Organisers:
* Ameen Abbas
* Hari Kiran P
* Rajesh Madiwale

CfP Committee:
* Deepak Maht
* Gayathri Varadarajan
* HariKrishna B
* Jobin Augustine
* Pavlo Golub

Workshops:
* Jobin Augustine
* Bikash Chandra
* Sashikanth Pattanaik
* Neeta Goel
* Ramanathan Natarajan
* Ameen Abbas
* Rajesh Madiwale
* Padma MP
* Pranay Shree

Speakers:
* Akshat Jaimini
* Shreya Aithal
* Aswini Kumar Tummak
* Sai Krishna Namburu
* Sivasankar Prasad
* Kattamuri Kowshiq
* Ragesh Hajela
* Chiranmoy Bhattacharya
* InduTeja Aligeti
* Bhavani Akundi
* Kshetrapaldesi Tutika
* Yogesh Jain
* Nikhil Hegde
* Vikas Gupta
* Rajesh Madiwale
* Aryan Sharma
* Lahari Giddi
* Keerthi Seetha
* Amit Jain
* Shameer Bhupathy
* Jobin Augustine
On August 21, Emma Saroyan organised the Armenian PostgreSQL User Group.
Speakers:
* Andrey Borodin
* Maksim Dubakin
On August 14, Emma Saroyan and Vladimir Poghosyan organised and spoke at the Armenia PostgreSQL User Group Meetup.
Postgres Woman India Virtual Summit happened on August 6th, 2025.
Organised by:
* Hari Kiran
* Aarti Nadekar
* Kalyani Madipadga
* Praveena Sivasankar

Speakers:
* Emma Saroyan
* Tarunikaa Kumar
* Taticherla Begum
* Lahiri Giddi
* Keerthi Seetha
The PostgreSQL development group released the second beta version of PostgreSQL 18 in July; the GA version is expected later in 2025 (around the September/October timeframe). The PostgreSQL development group and its community are very dedicated, ensuring several minor releases during the year and a major release every year.
Row-level security (RLS) is a great feature. It allows restricting access to rows by applying filters defined by a policy. It’s a tool useful for cases when the data set can’t be split into separate databases.
Sadly, using RLS may be quite cumbersome. RLS requires some sort of “trusted context” for the RLS policies. The policies need to filter using data the user can’t change. If the filter uses some sort of “tenant ID”, and the user can change it to an arbitrary value, that would break the RLS concept.
This is why solutions like using GUCs are flawed: the access control for GUCs is very limited. The traditional solution is to use roles, which derives its trust from authentication.
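To make the flaw concrete, here is a minimal sketch (my illustration, not from the post; the table and GUC names are made up) of an RLS policy keyed on a GUC:

-- hypothetical multi-tenant table protected by RLS
CREATE TABLE accounts (
    id        integer PRIMARY KEY,
    tenant_id text NOT NULL
);

ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;

-- the policy filters rows by a "tenant ID" read from a GUC
CREATE POLICY tenant_isolation ON accounts
    USING (tenant_id = current_setting('app.tenant_id'));

-- the problem: nothing stops a user from running
--   SET app.tenant_id = 'some-other-tenant';
-- so the GUC does not provide a trusted context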
It occurred to me that it should be possible to build a trusted context on cryptography, independently of authentication. I’ll explain the basic idea and discuss a couple of interesting variations. I’ve also published an experimental extension, jwt_context, implementing this using JWT.
I’m interested in all kinds of feedback. Is it a good idea to use JWT this way, as a basis for RLS context? Did I miss some fundamental issue? Are there interesting improvements?
The PostGIS Team is pleased to release PostGIS 3.6.0rc2! Best Served with PostgreSQL 18 Beta3 and recently released GEOS 3.14.0.
This version requires PostgreSQL 12 - 18beta3, GEOS 3.8 or higher, and Proj 6.1+. To take advantage of all features, GEOS 3.14+ is needed. To take advantage of all SFCGAL features, SFCGAL 2.2.0+ is needed.
Cheat Sheets:
This release is a release candidate of a major release; it includes bug fixes since PostGIS 3.5.3 and new features.
The middle of August tends to be a quieter time in the PostgreSQL development cycle, so there's not much in the way of new developments to report.
This week's main item of interest is the quarterly release of PostgreSQL minor version updates.
PostgreSQL 19 changes this week

PostgreSQL 18 beta3 was released this week.
This is the second part of the series I started on Learning PostgreSQL from AI and JSON exploration: Part 1. For this second part, I decided to try gpt-oss, the 14GB model that was just released in the past week. My first impression: "When will this AI shut up about its thinking process?". Phi4, on the other hand, gave a bit of a narrative about how it came up with answers, but it did not write a novel about it, and it also seemed more personable, using "My and I" instead of a distanced "User wants to". However, gpt-oss did become less chatty with a bit of coaxing. So anyway, personality-wise I prefer Phi4. That is not to say that gpt-oss doesn't have some wow factor. It created perfect SQL each time; I did not need to correct it. One of the other neat features of this model, which I haven't had time to explore, is the ability to have it do browser searches and other agentic-like stuff.
Again, for this size of model, the speed of answering with my crappy graphics card and processor was not bad, but it was noticeably slower than Phi4. Given the impressive accuracy, I didn't mind too much, and I suspect it can be improved by tweaking some of its settings.
Continue reading "Learning PostgreSQL from AI and JSON exploration: Part 2"

In our CNPG series, we have explained how to create a PostgreSQL cluster and how to customize it. However, most of the time we don't have the luxury of starting a database from scratch. We might have already started a project without high availability, and we might be looking for an easy-to-manage setup in a cloud-native environment. In this case, CNPG is the answer, but how do we migrate our existing cluster to CNPG? In today's article, we will explain that.
There are a couple of ways to migrate an existing cluster to CNPG: pg_basebackup, import, and logical replication. We will examine the pg_basebackup and import methods today. Depending on the use case or the requirements, we might need to use a different method.
pg_basebackup, as usual, lets us create a physical copy of a source cluster. Because we are creating a physical copy of the existing cluster, the limitations are a bit stricter than for the import method. The requirements for pg_basebackup are as follows:
As of now, CNPG version v1.27 has only one limitation for pg_basebackup: snapshot copy. In order to start creating the physical copy, pg_basebackup takes a snapshot of the source cluster. Concurrently, it also streams WAL segments to the target cluster from the beginning of the process until the end of the backup. Once this process is completed, the cluster diverges from the source and starts on a new timeline. That is why all write operations on the source must be stopped before starting the migration process.
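As a rough sketch of what this can look like in practice (the cluster name, host, and secret here are hypothetical; the bootstrap and externalClusters stanzas follow the CNPG API):

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-migrated
spec:
  instances: 3
  storage:
    size: 10Gi
  # bootstrap the new cluster as a physical copy of an external source
  bootstrap:
    pg_basebackup:
      source: source-cluster
  # connection details for the existing cluster we migrate from
  externalClusters:
    - name: source-cluster
      connectionParameters:
        host: source-db.example.com
        user: streaming_replica
        dbname: postgres
      password:
        name: source-cluster-credentials
        key: password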
Note: this limitation only applies when the ta
[...]When I started working with Postgres (or databases in general) 25 years ago, storage systems looked very different. All storage was “spinning rust” - rotational disks with various interfaces (SATA/SAS/…) and speeds (7.2K/10k/15k/…). The spindle speed was the main performance-determining feature, and everyone knew what IOPS and bandwidth to expect from a disk. The general behavior was pretty much the same.
With SSDs it’s more complicated. The interface may be the same, but the hardware “inside” the device can be very different. There’s nothing like the “spindle speed”, a single feature determining fundamental behavior. The flash memory is subject to various limits, but manufacturers may (and do) make different tradeoffs (much more cache, more spare space, etc.). And the hardware changes a lot over time too.
While working on the index prefetching patch, I ran into a couple of weird differences between “very similar” queries. We speculated it might be due to how SSDs handle the different I/O patterns. I did testing on my SSD devices, and there definitely are some very surprising differences in behavior, contradicting (reasonable) expectations. Let’s look at the results, and how they can result in strange query timings.
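For readers who want to probe their own devices the same way, here is a minimal sketch of a fio run for one such I/O pattern (the file path and parameter values are illustrative, not taken from the post):

# random 8kB reads, direct I/O, queue depth 32, against a test file
fio --name=randread --filename=/mnt/ssd/testfile --size=8G \
    --rw=randread --bs=8k --iodepth=32 --direct=1 \
    --ioengine=libaio --runtime=60 --time_based

Varying --rw (randread vs. read) and --bs exposes exactly the kind of pattern-dependent behavior discussed here.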
A modern-day Postgres instance creates robust and comprehensive logs for nearly every facet of database and query behavior. While Postgres logs are the go-to place for finding and debugging critical errors, they are also a key tool in application performance monitoring.
Today let’s get set up with logging for Postgres - starting with the basics of what to log, how to log what you want, and as a reward for your hard work - how to use these to monitor and improve performance. The Postgres docs on logs are excellent, so please consult those for the most up-to-date and comprehensive configurations. This blog reads between the lines a bit beyond the docs to offer some practical advice and settings. As always, your mileage may vary.
Let’s get into it and talk about:
WAL note: This post is limited to the server’s message and error logging - not the transaction Write Ahead Log (WAL). While that is a log, the purpose of WAL is to log all data and schema changes for backups, disaster recovery, and replication streaming.
First thing, out of the box Postgres just sends logs to the terminal. To turn on sending things to log files, turn on the logging collector.
logging_collector = on
The log message formatting is determined by the log_destination parameter, which can be set to one or more of: stderr, csvlog, jsonlog, and syslog. stderr is the default. Use commas to separate values when using more than one logging destination:

-- setting multiple log destinations
log_destination = 'stderr,jsonlog'

If logging_collector = 'on', then stderr, csvlog, and jsonlog logging will go to files in the directory specified by log_directory, with csvlog and jsonlog requiring that the logging collector be enabled.

There are many reasons why logs may be written in multiple file fo
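To round this out, a minimal sketch of a complete file-logging setup (all parameters are standard postgresql.conf settings; the values are illustrative):

logging_collector = on             # collect stderr/csvlog/jsonlog into files
log_destination = 'stderr,jsonlog'
log_directory = 'log'              # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
log_rotation_age = 1d              # rotate daily ...
log_rotation_size = 100MB          # ... or when a file reaches 100MB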
Number of posts in the past two months
Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.