Latest Blog Posts

FOSDEM PGDay 2026 - Call for papers and registration open
Posted by Magnus Hagander in PostgreSQL Europe on 2025-11-26 at 11:52

The call for papers is now open for FOSDEM PGDay 2026. The CfP will run until December 15, 2025 at 11:59 CET. We will aim to notify speakers by December 17th, 2025 and publish the schedule before Christmas.

FOSDEM PGDay 2026 is a one-day conference that will be held ahead of the main FOSDEM event in Brussels, Belgium, on Friday, January 30th, 2026. This will be a PostgreSQL-focused event. This year, FOSDEM PGDay will feature two tracks of talks! This conference day carries a registration fee of EUR 75 and will be held at the Brussels Marriott Hotel. Registration is required to attend, and since we have a limited number of seats available for this event, we urge everybody to register as soon as possible once registration opens.

PostgreSQL Europe will not have its usual devroom, but we invite all community members to visit the PostgreSQL booth and the Databases devroom at the main FOSDEM conference. None of the main FOSDEM events require registration.

For full details about the conference, venue and hotel, see the event website.

We also have a special negotiated room rate with the Brussels Marriott Hotel. For details, see the venue page.

We hope to see you in Brussels!

pgstream v0.9.0: Better schema replication, snapshots and cloud support
Posted by Esther Minano in Xata on 2025-11-26 at 09:00
Bringing connection retries, anonymizer features, memory improvements and solid community input.

KubeCon NA Atlanta 2025: a recap and CloudNativePG’s path to CNCF Incubation
Posted by Gabriele Bartolini in EDB on 2025-11-26 at 06:26

This blog post recaps my eventful participation in KubeCon + CloudNativeCon North America 2025 in Atlanta, highlighting the key decision by maintainers to formally apply for CNCF Incubation for the CloudNativePG operator. I had the pleasure of delivering three presentations: a CNPG Lightning Talk focused on community contribution; a deep-dive with Yoshiyuki Tabata on implementing modern PostgreSQL authorisation using Keycloak and OAuth for robust database security; and a session with Jeremy Schneider introducing the new quorum-based consistency feature in CNPG 1.28 for safer cluster reconfigurations. Links to the videos of all three talks are shared within the full article.

ParadeDB 0.20.0: Simpler and Faster
Posted by Philippe Noël in ParadeDB on 2025-11-26 at 00:00
Introducing search aggregation, V2 API as default, and performance improvements that eliminate the complexity between search and analytics in a single Postgres-native system.

Announcing Release 20 of the PostgreSQL Build Farm Client
Posted by Andrew Dunstan in EDB on 2025-11-25 at 13:46

 Features:

  • New branches_to_build keyword UP_TO_ to allow building only certain older branches. e.g. UP_TO_REL_16_STABLE
  • Collect pg_config.h for meson builds
  • Handle new --restrict-key requirement for dump testing
  • Trim collection of log files
    Some files were collected multiple times, and some large log files were collected even when they were of little value because the test had succeeded. Reducing this will alleviate disk space pressure on the server.
  • Minor bug fixes.

Because of the reduced log collection, buildfarm owners are highly encouraged to upgrade to the latest release.

The release can be obtained from GitHub or the Buildfarm Server.

Super fast aggregations in PostgreSQL 19
Posted by Hans-Juergen Schoenig in Cybertec on 2025-11-25 at 05:40

PostgreSQL 18 has just been born and we are already talking about the blessings of PostgreSQL 19 and beyond? Well, yes, and there is a good reason for it.

Recently, an important series of changes has been committed, which some of our folks (and many others around the world) have been working on for many, many years and which has the power to seriously speed up aggregations for basically everyone out there. What is even better is that you don't have to change your code: there is no need to adjust parameters or anything of that sort. You can simply run your code as it is and enjoy the benefits of this important improvement.

Aggregating data in PostgreSQL

To understand why this improvement in the query optimizer is so important, we first have to understand how PostgreSQL has handled grouping up to now. The simplest rule was: "Join first, aggregate later". What does that mean? Consider the following example:

SELECT   j.gender_name, count(*)
FROM     person AS p, gender AS j
WHERE    p.gender_id = j.gender_id
GROUP BY j.gender_name;

Let us assume that we only store a handful of genders but millions of people. The way everything before PostgreSQL 19 handles this type of query is as follows. Keep in mind that the example is of course highly simplified to make it easy to understand:

  • Read every entry in the person table
  • For every entry, look up the gender_name and add to the count in the desired group
  • Display the result

What is fundamentally wrong with this approach? Actually nothing. This is how most systems would handle this type of operation. However, there is an inefficiency here: In a large table, millions of people might be female - what the system does is to look up the name for each gender_id again and again. In case almost every id is different, this is fine. However, if there are only a handful of different ids, the operation takes way too long and becomes pretty repetitive.
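
To make the optimization concrete, here is a minimal hand-written sketch of the "aggregate first, join later" idea applied to the query above; this is only an illustration of the transformation, not the actual plan PostgreSQL 19 produces:

SELECT   j.gender_name, sum(sub.cnt) AS count
FROM     (SELECT   gender_id, count(*) AS cnt
          FROM     person            -- aggregate the big table first
          GROUP BY gender_id) AS sub
JOIN     gender AS j ON j.gender_id = sub.gender_id
GROUP BY j.gender_name;              -- each gender_id is looked up only once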

The breakthrough: Aggregate first - join later

The big breakthrough in Postgr

[...]

PostgreSQL Conference Japan 2025
Posted by Andreas Scherbaum on 2025-11-24 at 22:00
Back in July I submitted talks to the annual PostgreSQL Conference Japan 2025 (PostgreSQL カンファレンス 2025). This year is the 20th year of the conference, I was told. That is even longer running than our “own” PostgreSQL Conference Europe. To my surprise, one of my submitted talks was accepted. Which made me plan a trip to Japan and South Korea, to combine the conference

PostgreSQL Contributor Story: Akshay Joshi
Posted by Floor Drees in EDB on 2025-11-24 at 14:08
Earlier this year we started a program ("Developer U") to help colleagues who show promise for PostgreSQL Development to become contributors. Meet one of the participants: Akshay Joshi, a Principal Engineer and Engineering Manager at EDB.

PgPedia Week, 2025-11-23
Posted by Ian Barwick on 2025-11-24 at 13:30
PostgreSQL 19 changes this week:

  • COPY ... TO is now used for the initial synchronization of partitioned tables
  • pg_dependencies type output format converted to a more user-readable JSON array of objects
  • pg_ndistinct type output format converted to a more user-readable JSON array of objects
  • pg_stat_subscription_stats: new column seq_sync_error_count renamed to sync_seq_error_count; column sync_error_count renamed to sync_table_error_count

PostgreSQL 18 articles:

  • Postgres 18: Skip Scan - Breaking Free from the Left-Most Index Limitation (2025-11-18) - Ahsan Hadi / pgEdge
  • Highlights of PostgreSQL 18 (2025-08-28) - Ahsan Hadi / pgEdge

more...

PostgreSQL Berlin November 2025 Meetup
Posted by Andreas Scherbaum on 2025-11-23 at 22:00
On November 13th, 2025, we had the PostgreSQL November Meetup in Berlin. Redgate Software hosted us again, this time in a larger meeting room at The Social Hub Berlin. We had two speakers, and very tasty food. The event space is at Alexanderstraße 40 in Berlin Mitte, just five minutes away from Alexanderplatz. It is an event and gym venue, with several meeting rooms of various sizes, an office area, and a gym.

Contributions for week 47, 2025
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-11-23 at 20:22

Seven PostgreSQL contributors completed this year's Google Summer of Code:

Contributors:

  • Ahmad Gouda
  • Ashutosh Sh
  • Bassam Adnan
  • Gaurav Patidar
  • Mankirat Singh
  • Mohab Yasser
  • Tejas Tyagi

Mentors:

  • Akshat Jaimini
  • Pavlo Golub
  • Haoran Zhang
  • Saurav Pal
  • Jesper Pedersen
  • Rajiv Harlalka
  • David Wheeler
  • Shahryar Soltanpour
  • Luca Ferrari

Community Blog Posts:

Community Meetups:

Postgres Bangalore (PGBLR) met on November 15th, 2025

Speakers:

  • Srinath Reddy Sadipiralla
  • Nitin Jadhav
  • Narayana Jayaram and Sushmitha Mendon

Organisers:

  • Amit Kapila
  • Kuntal Ghosh
  • Sivji Kumar Jha
  • Vigneshwaran

Volunteers:

  • Jitendra Kumar
  • Nisha Moond
  • Shlok Kumar Kyal
  • Vasudev Sharma

Mumbai PostgreSQL UserGroup met on November 20, 2025, organised by:

  • Aarti Nadekar
  • Deepak Mahto
  • Sovenath Shaw

Speakers:

  • Trushar Borse
  • Jignesh Shah
  • Suraj Kharage
  • Rushikesh Ghantewar
  • Aniket Jadhav
  • Ajit Gadge
  • Jessica Shelar

On November 21, 2025, Laura Minen & Martín Marqués organised the Postgres Meetup en Valencia Noviembre 2025.

PostgreSQL Buildfarm Members: A status update
Posted by Robins Tharakan on 2025-11-22 at 22:29
The PostgreSQL Buildfarm is a global network of machines that continuously test PostgreSQL across a wide range of operating systems, architectures, compilers, and branches. Over the past few years, I have created and maintained several buildfarm members, each with its own quirks and strengths. In this post, I'll share a status update on the following animals: alligator, dodo,

Two Database Videos
Posted by Bruce Momjian in EDB on 2025-11-22 at 20:15

In the past few months, two new database videos were released on YouTube. From Hollerith to Stonebraker, from IMS to Postgres, the first video does a great job of walking through the history of databases in a way that I have never seen before. It focuses on the forces that drove database technology forward, like the U.S. Census, airline reservation requirements, and planet-scale data storage. I have been in the database industry for so long, I sometimes forget how and why we got here. This video will be helpful to show to your friends who don't understand the purpose of database technology.

The second video focuses on SQL and its history as breaking away from data models that required programmers to traverse fixed paths to access data, and SQL's ubiquity as the universal data-access language.

Cleanup Time
Posted by Bruce Momjian in EDB on 2025-11-22 at 16:45

With 3k slides and 64 presentations (and two more pending), the Presentations section of my website was getting cluttered. (Okay, I guess it has been cluttered for a while.) I have restructured the sections, adding two, removing one, and moving superseded presentations to Old. Hopefully this will help website visitors more easily find useful information.

PostGIS Performance: Data Sampling
Posted by Paul Ramsey in Crunchy Data on 2025-11-21 at 13:00

One of the temptations database users face, when presented with a huge table of interesting data, is to run queries that interrogate every record. Got a billion measurements? What’s the average of that?!

One way to find out is to just calculate the average.

SELECT avg(value) FROM mytable;

For a billion records, that could take a while!

Fortunately, the “Law of Large Numbers” is here to bail us out, stating that the average of a sample approaches the average of the population, as the sample size grows. And amazingly, the sample does not even have to be particularly large to be quite close.

Here’s a table of 10M values, randomly generated from a normal distribution. We know the average is zero. What will a sample of 10K values tell us it is?

CREATE TABLE normal AS
  SELECT random_normal(0,1) AS values
    FROM generate_series(1,10000000);

We can take a sample using a sort, or using the random() function, but both of those techniques first scan the whole table, which is exactly what we want to avoid.
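
As a hedged illustration of that point (using a hypothetical measurements table with a numeric value column), a random()-based sample looks like this; the filter still has to be evaluated for every row, so the whole table is read anyway:

-- Keep roughly 0.1% of the rows; every row is still visited so the
-- filter can be checked, which means no I/O is saved.
SELECT avg(value)
  FROM measurements
 WHERE random() < 0.001;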

Instead, we can use the PostgreSQL TABLESAMPLE feature, to get a quick sample of the pages in the table and an estimate of the average.

SELECT avg(values)
  FROM normal TABLESAMPLE SYSTEM (1);

I get an answer – 0.0031, very close to the population average – and it takes just 43 milliseconds.

Can this work with spatial? For the right data, it can. Imagine you had a table that had one point in it for every person in Canada (36 million of them) and you wanted to find out how many people lived in Toronto (or this red circle around Toronto).


SELECT count(*)
  FROM census_people
  JOIN yyz
    ON ST_Intersects(yyz.geom, census_people.geom);

The answer is 5,010,266, and it takes 7.2 seconds to return. What if we take a 10% sample?

SELECT count(*)
  FROM census_people TABLESAMPLE SYSTEM (10)
  JOIN yyz
    ON ST_Intersects(yyz.geom, census_people.geom);

The sample is 10%, and the answer comes back as 508,292 (near one tenth of our actual measurement) in 2.2 seconds. What abou

[...]
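
One natural way to use such a sample is to scale the count back up by the sampling fraction; a minimal sketch of that idea (not necessarily the exact approach the full post takes):

-- Multiply the sampled count by 10 to compensate for the 10% sample.
SELECT count(*) * 10 AS estimated_people
  FROM census_people TABLESAMPLE SYSTEM (10)
  JOIN yyz
    ON ST_Intersects(yyz.geom, census_people.geom);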

Performing Standby Datacentre Promotions of a Patroni Cluster
Posted by Robert Bernier in Percona on 2025-11-20 at 13:15
In a standard multi-node Postgres replication cluster managed by Patroni, all failovers are executed automatically. However, this is not the case for inter-datacentre failovers, when for instance a standby datacentre must take over from a failed primary. The following describes the mechanisms required to perform such a procedure when the case arises. […]

Prairie PostgreSQL User Group November Meetup
Posted by Henrietta Dombrovskaya on 2025-11-20 at 03:23

On Tuesday, we had our last meetup of 2025, and we want to thank everyone who attended and who supported us during 2025. And of course, very special thanks to Jay Miller, who presented at both our first and last meetups! Everyone enjoyed the presentation, as well as the pre- and post-conversations and pizza! Here are the presentation slides (Jay, thank you for sharing!):

Prairie Postgres' mission is to promote Postgres best practices to application developers who use Postgres as their backend and to address their needs, so this talk could not be more relevant! Everyone had a great time, and the event recording will be available shortly.

Our next meetup is tentatively planned for January 13, but watch for announcements! Check us out at our official website, LinkedIn, and the meetup page!

Happy Holidays!

Announcing the inaugural PostgreSQL Edinburgh meetup
Posted by Jimmy Angelakos on 2025-11-19 at 14:00

The beautiful Old College building at the University of Edinburgh. Photo by LWYang from USA (CC BY 2.0).

I'm thrilled to announce that the PostgreSQL Edinburgh meetup is finally here! 🐘

We are launching our new PostgreSQL Edinburgh community with an inaugural event on Thursday, December 11th. We'll be gathering at the University of Edinburgh's beautiful Old College building to talk tech, eat pizza, and get to know each other.

Whether you're a seasoned DBA, an app developer just getting started with databases, or simply curious about how databases and AI & vector search tie in, this meetup is for you. All levels are welcome!

The Agenda

Here's the schedule for our first outing, featuring two great technical talks.

  • 18:00: Doors Open, Pizza & Networking 🍕
  • 18:55: Introductions & Community Announcements
  • 19:00: PostgreSQL Tips & Tricks (For App Devs) - Chris Ellis (Nexteam)
  • 19:40: Break
  • 20:00: RAGtime with Postgres: AI Power with pgvector and Retrieval-Augmented Generation - Jimmy Angelakos (pgEdge)
  • 20:45: Event End

The Talks

PostgreSQL Tips & Tricks (For App Devs)
Chris Ellis will take us through a look at use cases he's run into over the years. Postgres has a huge range of features—maybe too many—but making use of them can reduce application complexity significantly. Chris will cover solutions for event scheduling, task execution, searching, geolocation, and handling unknown data.

RAGtime with Postgres
I'll be presenting my own talk on Retrieval-Augmented Generation (RAG). As a Postgres nerd (and definitely not an AI expert), I'll be explaining in simple terms how to dip your toes into AI using our favourite database. We'll look at how to use pgvector to store embeddings, connect them with LLMs, and build intelligent apps without relying on expensive external services.

Join Us!

This event is completely free, but registration is required so we can get the numbers right for the food & refreshments!

Register

[...]

Data Retention Policy Implementation – How and Why
Posted by Jobin Augustine in Percona on 2025-11-19 at 13:54
Is your PostgreSQL database feeling sluggish? Are SQL statements taking more time than in earlier days? Are you experiencing performance cliffs (unexpected, sudden drops in performance)? Are backups taking a long time to complete? Are you getting a bigger bill for storage? Are standby rebuilds and development refreshes becoming a herculean task? Are the auditors […]

Postgres 18: Skip Scan - Breaking Free from the Left-Most Index Limitation
Posted by Ahsan Hadi in pgEdge on 2025-11-18 at 06:35

Postgres 18, released on September 25, 2025, introduces an exciting set of performance improvements and new capabilities. Postgres has grown remarkably over the years, and with each major release has become a more robust, reliable, and responsive database for both mission-critical and non-mission-critical enterprise applications. I’ve been writing about these enhancements since the release, and today I’m diving into two more features that are especially useful in real-world applications. I previously blogged about a major new performance feature, the Asynchronous I/O (AIO) sub-system. AIO boosts I/O throughput during sequential scans, bitmap heap scans, and VACUUM operations, providing a performance boost for essentially everyone who uses Postgres. On Linux (with io_uring), this can offer 2–3× performance improvements by overlapping disk access with processing. Please see my blog for more details: https://www.pgedge.com/blog/highlights-of-postgresql-18. This week I'll add to my new feature series with a discussion of two more features from the latest release that focus on improving everyday performance and developer efficiency. Both the enhanced RETURNING clause and the Skip Scan optimization represent the kind of improvements developers use every day—these features make queries faster, code cleaner, and applications simpler without any schema redesign or complex tuning. I have picked these features from among the other exciting features due to their impact on the performance and optimization required by the application developer.

  • RETURNING clause: Now you can access both OLD and NEW row values in INSERT, UPDATE, DELETE, and MERGE statements — this is perfect for auditing, API responses, and ETL workflows. This feature reduces round trips, ensures atomicity, and keeps your SQL self-contained and elegant.
  • Skip Scan: This optimization allows Postgres to use multicolumn B-tree indexes even when leading columns aren’t filtered, unlocking major performance boosts for real-world analytical queries and reports — all without adding new indexes
[...]
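
To make the two features described above concrete, here is a hedged sketch using hypothetical accounts and orders tables; the RETURNING syntax follows PostgreSQL 18's support for old/new values, and the index illustrates the kind of multicolumn B-tree a skip scan can exploit:

-- Enhanced RETURNING: read pre- and post-update values in one statement.
UPDATE accounts
   SET balance = balance - 100
 WHERE id = 1
RETURNING old.balance AS balance_before,
          new.balance AS balance_after;

-- Skip Scan: an index on (region, created_at) can now help a query that
-- filters only on created_at, provided region has few distinct values.
CREATE INDEX orders_region_created_idx ON orders (region, created_at);

SELECT count(*)
  FROM orders
 WHERE created_at >= '2025-01-01';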

Storing products, prices and orders in PostgreSQL
Posted by Hans-Juergen Schoenig in Cybertec on 2025-11-18 at 06:30

When creating data models in PostgreSQL (or any other powerful relational database), several common problems can arise that hinder effective database design. One frequent issue is the incorrect normalization of tables, leading to redundant data, inefficient querying capabilities, or even simply plain wrong semantics. 

This post addresses one of the most common mistakes made by people new to the subject: over-optimistic normalization. Developers who rely too heavily on AI-generated data models are especially prone to this issue.

Creating a simple data model

To describe what often goes wrong, we try to store something simple: Products and sales. Consider the following data model including some sample data.

CREATE TABLE product (
    product_number text PRIMARY KEY,
    product_name   text NOT NULL,
    product_price  numeric(10, 2) NOT NULL
);

INSERT INTO product VALUES
    ('PROD001', 'Bread',   '3.50'),
    ('PROD002', 'Sausage', '6.59'),
    ('PROD003', 'Cheese',  '5.79')
;

CREATE TABLE sales (
    sales_time     timestamptz NOT NULL,
    product_number text REFERENCES product(product_number)
                        ON UPDATE CASCADE
                        ON DELETE CASCADE
                        NOT NULL,
    units          int NOT NULL
);

INSERT INTO sales VALUES
    ('2025-09-01 14:30', 'PROD001', 8),
    ('2025-09-02 19:46', 'PROD002', 5),
    ('2025-09-14 07:32', 'PROD003', 7)
;

The model contains a product table with a unique identifier for each product as well as a name and a price. What is worth noting here is that we use numeric (not float4 or float8) to store the price. This is important to avoid floating point errors during calculations.
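
As a quick illustration of that point (the literals are chosen purely for demonstration), binary floating point cannot represent 0.1 or 0.2 exactly, while numeric keeps the decimal arithmetic exact:

-- float8 accumulates binary rounding error; numeric stays exact.
SELECT 0.1::float8  + 0.2::float8  AS float_sum,    -- typically 0.30000000000000004
       0.1::numeric + 0.2::numeric AS numeric_sum;  -- exactly 0.3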

Let us focus our attention on the other table. The first column contains a timestamp - this is fine, because we definitely want to know when a sale happened. 

However, the next column is already a big problem: we reference the first table (product) and mark the foreign key as ON UPDATE CASCADE ON DELETE CASCADE.

To understand why this is wrong, we have to understand what it does first: The UPDATE option means that, in case the p

[...]

Third Place
Posted by Bruce Momjian in EDB on 2025-11-17 at 23:30

I recently watched a video about Starbucks and how their third place concept made them successful, and how their abandonment of it has caused problems. I started to realize that Postgres conferences also function as a "third place" — not home, not work, but somewhere people gather to see old faces, new people, and to talk. Our conferences seem to meet most of the third place criteria. Sometimes I have asked in conference hallway tracks, "What if we did this forever?" I guess that would be more like a third place since its location would be stable and always accessible, unlike conferences. Anyway, this gave me new insight into the attraction of conferences.

On a related note, the recent Riga conference had a panel discussion titled, "How to Work with Other Postgres People." The title looked interesting but I didn't read the description, and I was surprised by the content. The panelists were transparent about their struggles in dealing with others, and that encouraged the audience to also be open and sharing. For me, it was more like a church prayer meeting than a conference talk, and I know the attendees were deeply impacted by their involvement. I told the panelists they should have this kind of talk at every conference.

Why Application Developers Using AI Is Great For DBA Job Security
Posted by Mayur B. on 2025-11-17 at 14:09

Everyone’s freaking out about AI taking their jobs. Meanwhile, I’m a DBA sitting in the corner thinking: “If programmers of the future are AI agents, companies will need 100x more human DBAs to clean up the mess in production.”

This rant blogpost is my attempt to explain why.

LLMs: Optimizing for the Next Token, Not for Reality

Let’s start with the core problem:

LLMs don’t optimize for truth.
They optimize for “what word statistically looks good next?”

Give a model the text:

“Postgres DBAs love”

It might happily continue with:

“Oracle”

From there, it feeds its own output back in and keeps predicting the next token again and again. At no point does it pause and say: “Wait, does this actually represent reality?”

It has no built-in concept of reality or verification. That’s your job.

As long as this is the operating model, hallucinations are not a bug, they are a feature. They're what you get when you combine probability with confidence and zero shame. We're basically wiring a very polite, very confident intern to production and asking it architectural questions.

What could go wrong?

“Hallucinations” Is Just a Fancy Word for “We Noticed the Lie”

I liked one line from an MIT Tech Review piece: “It’s all hallucination, but we just call it that when we notice.”

Most of the time, when the answer is “good enough”, we call it “AI magic”.
When it’s wrong in a way we understand, suddenly it becomes “hallucination”.

From a former physics student's perspective, this is just non-zero probability in action. We already live with quantum tunneling, and in theory there's a very, very small but finite probability that your laptop falls through the table while you read this paragraph. So the

[...]

PgPedia Week, 2025-11-16
Posted by Ian Barwick on 2025-11-17 at 12:13

This year's final round of minor releases arrived on schedule last week, with PostgreSQL 13 receiving its final update.

PostgreSQL 19 articles:

  • Waiting for PostgreSQL 19 – Sequence synchronization in logical replication. (2025-11-11) - Hubert 'depesz' Lubaczewski

PostgreSQL 18 articles:

  • How to use UUIDv7 in Python, Django and PostgreSQL (2025-11-14) - Paolo Melchiorre
  • PostgreSQL 18: More performance with index skip scans (2025-11-11) - Hans-Jürgen Schönig / CYBERTEC
  • PostgreSQL 18 - Temporal Constraints (2025-11-06) - Vinicius Negrisolo / Hashrocket
  • PostgreSQL 18 - VIRTUAL Generated Columns (2025-11-04) - Vinicius Negrisolo / Hashrocket
  • PostgreSQL 18 - Track What Changed By Your UPSERTs (2025-10-30) - Vinicius Negrisolo / Hashrocket
  • PostgreSQL 18's UUIDv7: Faster and Secure Time-Ordered IDs (2025-10-28)

more...

PostgreSQL License: Free to Use, Enterprise-Ready, and Cost-Efficient in Production
Posted by Umair Shahid in Stormatics on 2025-11-17 at 09:55

Do you need a PostgreSQL license for critical production use?
Short answer: No. 

The open-source PostgreSQL database is free to download, use, modify, and distribute. There are no per-CPU, per-core, per-socket, or per-instance license fees.

What you do need is a realistic plan for operational costs and expertise, the parts that make PostgreSQL truly production-grade. Many teams search for “PostgreSQL license” while budgeting for a new system or replacing proprietary databases. They want to know whether PostgreSQL is free like a hobby project or free like a platform you can trust with revenue. It is the latter – enterprise-reliable and secure, provided you run it with the right architecture and operational discipline.

What “Free” Really Means

PostgreSQL uses a permissive, BSD-style license. You can use it commercially without paying a vendor for database licenses.

You may still pay for:

  • Cloud compute and storage (IaaS or managed services)
  • People and time (internal team or external experts for design, operations, and incident response)
  • Optional support subscriptions (from vendors or consultancies) and complementary tooling

Think of it this way: license savings shift the budget toward reliability, security, and performance; the outcomes the business actually cares about.

The Realistic TCO Checklist

Let’s break down where your budget should actually go when deploying PostgreSQL at scale. Use this list to plan your total cost of ownership (TCO):

  • Compute & storage: VM or bare-metal sizing, disks (NVMe/SAN), snapshots, backups
  • Networking: Private links, cross-AZ/region data transfer for HA/DR, load balancers
  • Observability: Metrics, logs, traces, long-term retention (Prometheus, Grafana, ELK/OpenSearch, or vendor equivalents)
[...]

Google Summer of Code 2025 - A Journey of Growth and Achievement!
Posted by Pavlo Golub in Cybertec on 2025-11-17 at 06:00

What a journey! I’m very happy to announce that all seven Google Summer of Code 2025 contributors successfully passed their final evaluations and made great contributions to the PostgreSQL community! 🎉

Back in May, I welcomed these talented people to our community. Now, six months later, I’m proud to celebrate not just the code they wrote, but the journey they made and the community members they’ve become.

A Year Like No Other

This year’s GSoC was something special. We received 213 applications for the PostgreSQL organization alone—the highest number in our 21 years of participating in GSoC! But with this big number came challenges: a lot of spam and AI-generated applications made the selection process very difficult. We requested 9 project slots, but received only 7 because of program-wide limitations.

But from those 213 applications, we found seven great people who achieved a 100% success rate. Not a single contributor failed to complete their project. Think about that for a moment. 💯

The Human Story Behind the Code

What I like most about this year’s group isn’t just the great technical work—though there was a lot of it. It’s the personal growth, the determination, and the way each contributor became part of something bigger than themselves.

Let me introduce you to our seven successful contributors and their remarkable journeys:

Ashutosh Sharma worked on the difficult task of implementing incremental backup for PostgreSQL 13-16 in pgmoneta. When the WAL parser gave unexpected errors, he didn’t just file a bug report—he added debug prints to PostgreSQL’s source code itself to understand the exact byte offsets. That’s the kind of determination that makes a true contributor.

Mohab Yasser created pgmoneta-walfilter, a tool for precise WAL stream manipulation. His words say something important: “This project provided deep exposure to PostgreSQL internals… Reading through PostgreSQL source code to understand WAL record formats was challenging but instructive.” This is what GSoC

[...]

pgBackRest TLS server mode for a primary-standby setup with a repository host
Posted by Stefan Fercot in Data Egret on 2025-11-16 at 23:00

The TLS server provides an alternative to using SSH for protocol connections to remote hosts.

In this demo setup, the repository host is named backup-srv, and the two PostgreSQL nodes participating in Streaming Replication are pg1-srv and pg2-srv. All nodes run on AlmaLinux 10.

If you’re familiar with Vagrant, here is a simple Vagrantfile that provisions three virtual machines using these names:

# Vagrantfile
Vagrant.configure(2) do |config|
    config.vm.box = 'almalinux/10'
    config.vm.provider 'libvirt' do |lv|
        lv.cpus = 1
        lv.memory = 1024
    end

    # share the default vagrant folder
    config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false

    nodes  = 'backup-srv', 'pg1-srv', 'pg2-srv'
    nodes.each do |node|
        config.vm.define node do |conf|
            conf.vm.hostname = node
        end
    end
end

Installation

On all servers, begin by configuring the PGDG repositories:

sudo dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-10-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Then install PostgreSQL on pg1-srv and pg2-srv:

sudo dnf install -y postgresql18-server

Create a basic PostgreSQL instance on pg1-srv:

sudo /usr/pgsql-18/bin/postgresql-18-setup initdb
sudo systemctl enable postgresql-18
sudo systemctl start postgresql-18

Install pgBackRest on every host and confirm the version:

sudo dnf install -y epel-release
sudo dnf config-manager --enable crb
sudo dnf install -y pgbackrest
$ pgbackrest version
pgBackRest 2.57.0

Create a dedicated user on the repository host

The pgbackrest user will own the backups and WAL archive repository. Although any user can own the repository, it’s best not to use postgres to avoid confusion.

Create the user and repository directory on backup-srv:

sudo mkdir -p /backup_space
sudo mkdir -p /var/log/pgbackrest

sudo groupadd pgbackrest
sudo useradd -d /backup_space -M -g pgbackrest pgbackrest

sudo chown -R pgbackrest: /backup_space
sudo ch
[...]

KubeCon 2025: Bookmarks on Memory and Postgres
Posted by Jeremy Schneider on 2025-11-16 at 22:55

Just got home from KubeCon.

One of my big goals for the trip was to make some progress in a few areas of postgres and kubernetes – primarily around allowing more flexible use of the linux page cache and avoiding OOM kills with less hardware overprovisioning. When I look at Postgres on Kubernetes, I think there are idle resources (both memory and CPU) on the table with the current Postgres deployment models that generally use guaranteed QoS.

Ultimately this is about cost savings. I think we can still run more databases on less hardware without compromising the availability and reliability of our database services.

The trip was a success, because I came home with lots of reading material and homework!

Putting a few bookmarks here, mostly for myself to come back to later:

[...]

Contributions for week 46, 2025
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-11-16 at 21:31

Berlin PostgreSQL Meetup on Thursday, November 13 2025 organised by Sergey Dudoladov and Andreas Scherbaum

Speaker: Josef Machytka

New York City PostgreSQL Meetup on Wednesday, November 12, 2025.

Speaker: Bruce Momjian Organised by Mialolai “Mila” Zhou, Jonathan Katz and Chirag Dave. And Mila Zhou wrote a blog post about it

San Francisco Bay Area PostgreSQL Meetup Group, met on Tuesday November 11 2025, organised by Stacey Haysler, Christophe Pettus & Katharine Saar

Speaker: Kacey Holston

Pascal Scherbaum made & published a video about the Community Board at PGConf.eu 2025.

New Community - Blog Posts:

PostGIS Performance: Intersection Predicates and Overlays
Posted by Paul Ramsey in Crunchy Data on 2025-11-14 at 13:00

In this series, we talk about the many different ways you can speed up PostGIS. A common geospatial operation is to clip out a collection of smaller shapes that are contained within a larger shape. Today let's review the most efficient ways to query for things inside something else.


Frequently the smaller shapes are clipped where they cross the boundary, using the ST_Intersection function.


The naive SQL is a simple spatial join on ST_Intersects.

SELECT ST_Intersection(polygon.geom, p.geom) AS geom
  FROM parcels p
  JOIN polygon
    ON ST_Intersects(polygon.geom, p.geom);

When run on the small test area shown in the pictures, the query takes about 14ms. That’s fast, but the problem is small, and larger operations will be slower.

There is a simple way to speed up the query that takes advantage of the fact that boolean spatial predicates are faster than spatial overlay operations.

What?

  • “Boolean spatial predicates” are functions like ST_Intersects and ST_Contains. They take in two geometries and return “true” or “false” for whether the geometries pass the named test.
  • “Spatial overlay operations” are functions like ST_Intersection or ST_Difference that take in two geometries, and generate a new geometry based on the named rule.

Predicates are faster because their tests often allow for logical short circuits (once you find any two edges that intersect, you know the geometries intersect) and because they can make use of the prepared geometry optimizations to cache and index edges between function calls.

The speed-up for spatial overlay simply observes that, for most overlays there is a large set of features that can be added to the result set unchanged – the features that are fully contained in the clipping shape. We can identify them using ST_Contains.


Similarly, there is a smaller set of features that cross the border, and thus do need to be clipped. These are features that ST_Intersects but are not ST_Contains.
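
A minimal sketch of how those two cases might be combined, reusing the parcels and polygon tables from the query above; fully contained parcels are passed through unchanged, and only the border-crossing parcels are clipped:

-- Parcels fully inside the clip shape: keep their geometry as-is.
SELECT p.geom
  FROM parcels p
  JOIN polygon
    ON ST_Contains(polygon.geom, p.geom)
UNION ALL
-- Parcels crossing the boundary: clip them with ST_Intersection.
SELECT ST_Intersection(polygon.geom, p.geom) AS geom
  FROM parcels p
  JOIN polygon
    ON ST_Intersects(polygon.geom, p.geom)
   AND NOT ST_Contains(polygon.geom, p.geom);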


The higher performance function u

[...]
