Latest Blog Posts

How to Build Postgres with Zig: A Drop-In GCC/Clang Alternative
Posted by Álvaro Hernández in OnGres on 2025-05-23 at 14:52

What’s Zig and Zig’s C/C++ compiler?

In case you are not familiar with it, Zig is a programming language. Among other characteristics, Zig prides itself on being a low-level / systems programming language with great interoperability with C and C++. Unlike other comparable languages like Rust, it uses explicit memory allocation and freeing (though it adds cool features like the defer keyword), so it has a mental model closer to that of C, the language Postgres is written in. This also makes Zig a very interesting language for developing Postgres extensions, and there’s the pgzx framework to help with that.

But other than the extensions, what’s Zig bringing to Postgres and what’s this post really about? Zig’s compiler. It’s quite an advanced piece of technology that, apart from compiling Zig code, can also compile C/C++ code, and does so really well. There’s a mind-blowing blog post from Andrew Kelley, creator of Zig, that I’d recommend reading, about using Zig as a C/C++ compiler, in which he claims it is a powerful drop-in replacement for GCC/Clang.

zig cc, the command line for Zig’s C compiler, is included with the Zig distribution, which is by itself a self-contained, small downloadable package (41-50MiB on Linux, depending on the architecture). zig cc “supports the same options as Clang, which, in turn, supports the same options as GCC”, making it a drop-in replacement. To achieve this level of compatibility, zig cc uses LLVM behind the scenes (it’s technically a layer on top of an LLVM frontend). As a curiosity, Andrew’s post details how it’s possible that Zig’s distribution is (significantly) smaller than even LLVM binaries!

So if it is a drop-in replacement, building Postgres with zig cc should be easy, right? Let’s give it a try.

Building Postgres with zig cc

It turns out to be quite straightforward.

First we need to download Zig. Zig is statically linked ("Zig’s Linux tarballs are fully statically linked, and therefore work correctly on all Linux distributions.").

Download a

[...]
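In outline, the whole process boils down to something like the following sketch (the Zig version number and paths here are illustrative, not the post’s exact commands):

```shell
# Download and unpack a Zig release (statically linked, works on any distro).
curl -LO https://ziglang.org/download/0.14.0/zig-linux-x86_64-0.14.0.tar.xz
tar xf zig-linux-x86_64-0.14.0.tar.xz
export PATH="$PWD/zig-linux-x86_64-0.14.0:$PATH"

# Point the Postgres build at zig cc and build as usual.
./configure CC="zig cc"
make -j"$(nproc)"
```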

Big Problems From Big IN lists with Ruby on Rails and PostgreSQL
Posted by Andrew Atkinson on 2025-05-23 at 14:30

Introduction

If you’ve created web apps with relational databases and ORMs like Active Record (part of Ruby on Rails), you’ve probably experienced database performance problems after a certain size of data and query volume.

In this post, we’re going to look at a specific type of problematic query pattern that’s somewhat common.

We’ll refer to this pattern as “Big INs,” which are queries with an IN clause that has a big list of values. As data grows, the length of the list of values will grow. These queries tend to perform poorly for big lists, causing user experience problems or even partial outages.

We’ll dig into the origins of this pattern, why the performance of it is poor, and explore some alternatives that you can use in your projects.

IN clauses with a big list of values

The technical term for the list of values is a parenthesized list of scalar expressions.

For example in the SQL query below, the IN clause portion is WHERE author_id IN (1,2,3) and the list of scalar expressions is (1,2,3).

SELECT * FROM books
WHERE author_id IN (1, 2, 3);

The purpose of this clause is to perform filtering. Looking at a query execution plan in Postgres, we’ll see something like this fragment below:

Filter: (author_id = ANY ('{1,2,3}'::integer[]))

This of course filters the full set of books down to ones that match on author_id.

Filtering is a typical database operation. Why are these slow?

Parsing, planning, and executing

Remember that our queries are parsed, planned, and executed. A big list of values is treated as a set of constants, which don’t have associated statistics.

Queries with big lists of values take more time to parse and use more memory.

Without pre-collected table statistics for planning decisions, PostgreSQL is more likely to mis-estimate cardinality and row selectivity.

This can mean the planner chooses a sequential scan over an index scan, causing a big slowdown.
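One commonly used mitigation (my illustration here; the post goes on to explore its own alternatives) is to bind the values as a single array parameter, so the query text stays constant no matter how many ids are passed, or to join against the unnested array, which gives the planner rows to estimate:

```sql
-- Instead of interpolating thousands of literals into the query text:
SELECT * FROM books WHERE author_id IN (1, 2, 3);

-- Bind one array parameter; the statement can be prepared and cached:
SELECT * FROM books WHERE author_id = ANY($1::int[]);

-- Or join against the unnested array:
SELECT b.*
FROM books b
JOIN unnest($1::int[]) AS t(author_id) USING (author_id);
```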

How do we create this pattern?

Creating this pattern directly

In Active Record

[...]

Collation Torture Test versus Debian
Posted by Jeremy Schneider on 2025-05-23 at 05:37

Collation torture test results are finally finished and uploaded for Debian.

https://github.com/ardentperf/glibc-unicode-sorting

The test did not pick up any changes in en_US sort order for either Bullseye or Bookworm 🎉

Buster has glibc 2.28 so it shows lots of changes – as expected.

The Postgres wiki had claimed that Jessie (8) to Stretch (9) upgrades were safe. This is false if the database contains non-English characters from many scripts (even with the en_US locale). I just now tweaked the wording on that wiki page. I don’t think this is new info; I think it’s the same change that showed up in the Ubuntu tables under glibc 2.21 (Ubuntu 15.04).

FYI – the changelist for Stretch (9) does contain some pure-ASCII words like “3b3”, but when you drill down to the diff, you see that it’s only moving a few lines relative to other strings with non-English characters:

@@ -13768521,42 +13768215,40 @@ $$.33
༬B༬
3B༬
3B-༬
-3b3
3B༣
3B-༣
+3B٣
+3B-٣
+3b3
3B3

In the process of adding Debian support to the scripts, I also fixed a few bugs. I’d been running the scripts from a Mac but now I’m running them from a Ubuntu laptop and there were a few minor syntax things that needed updating for running on Linux – even though, ironically, when I first started building these scripts it was on another Linux before I switched to Mac. I also added a file size sanity check, to catch if the sorted string-list file was only partly downloaded from the remote machine running some old OS (… realizing this MAY have wasted about an hour of my evening yesterday …)

The code that sorts the file on the remote instance is pretty efficient. It does the sort in two stages and the first stage is heavily parallelized to utilize whatever CPU is available. Lately I’ve mostly used c6i.4xlarge instances and I typically only need to run them for 15-20 minutes to get the data and then I terminate them. The diffs and table generation run locally. On my poor old laptop, the diff for buster ran at 100% cpu and 10°C hotter than the idle co

[...]
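The two-stage, parallelized sort described above can be sketched like this (my illustration, not the repo’s actual script):

```shell
# Stage 1: split the string list into chunks and sort them in parallel
# under the locale being tested; Stage 2: a cheap single-pass merge.
printf '%s\n' banana Apple cherry apple Banana > strings.txt  # toy input
LOC=C   # swap in the locale under test, e.g. en_US.UTF-8
split -n l/2 strings.txt chunk.
for c in chunk.*; do LC_ALL="$LOC" sort "$c" -o "$c.sorted" & done
wait
LC_ALL="$LOC" sort -m chunk.*.sorted > strings.sorted
LC_ALL="$LOC" sort -c strings.sorted && echo "sorted OK"
rm -f chunk.*
```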

Community Summit at PGConf.dev
Posted by Henrietta Dombrovskaya on 2025-05-23 at 02:58

Last week, I presented at PGConf.dev for the first time and participated in a community summit for the first time. The idea was pitched by Teresa Giacomini, and this is how the event was described in the program:

Community building, support, and maintenance are all critical to the future of Postgres. There are many types and layers to community building from events, podcasts, & meetups to extracurricular fun like chess & karaoke; recognition & rewards to Postgres booths at non-Postgres conferences; getting started in smaller communities to wrangling a global one.

In this 3-hour summit we will:

  • Have short presentations from the hosts on different aspects of community
  • Perform a short exercise to gather the group’s thoughts on some key questions:
    • What does community mean?
    • How do we make it easier for people to get involved?
    • What community initiatives already exist? What’s missing? How can we improve them?
  • Break into smaller groups to tackle areas the group believes are most important
  • Report out to the larger group from each small group
  • Each group adds their results to the PostgreSQL Wiki
  • Determine a way for us to track our progress moving forward

Pre-work: We will gather some interest prior to the summit on topics for discussion.

Due to the interactive nature of the summit, participation is limited to 60 people. Participants should be committed to building, supporting, or maintaining a community in some way, and be ready to leave the summit with concrete action items to move the Postgres community forward. While the hosts of this summit are from the US & Europe, we hope that folks from less established Postgres communities will join us.

Pat Wright and Andreas Scherbaum were the other two organizers. We started by asking the conference organizers to email the conference participants a questionnaire with a list of topics they would be interested in discussing. Then, we analyzed

[...]

Postgres Extensions Day 2025 Kicks Off with a Successful Debut in Montréal
Posted by cary huang in Hornetlabs Technology on 2025-05-22 at 18:48

Introduction

PostgreSQL Extension Day 2025 made its successful debut on May 12, 2025, just one day before the start of pgconf.dev 2025. This focused one-day event brought together the community around a single theme: the PostgreSQL extension ecosystem. From innovative ideas and development insights to discussions on safer extension building and delivery, the day was all about “everything extensions.”

The conference featured 14 tightly packed 25-minute talks, making for a long but highly productive day. For those unable to attend in person, the event was also live-streamed on YouTube.

Thanks to the hard work of the organizers and volunteers, PostgreSQL Extension Day 2025 turned out to be a great success. In this blog, I’ll walk through some of the key highlights and takeaways from this event.

Conference Highlights

Community and Volunteer Driven

Since this was the first-ever pgext.day conference, organized by Yurii Rashkovski, there was plenty of room for things to go sideways. Fortunately, a small but dedicated team—including Grant Zhou, Sweta Vooda, Charis Charalampidi, and myself—volunteered to support Yurii with setting up the live streaming and recording equipment early in the morning. Together, we handled the camera setup, microphones, projector, and streaming rig, and quickly got up to speed on how to operate the entire system before the event began.

I have to say, by the time the conference started, I felt surprisingly confident running the live streaming, camera work, and digital recording gear—a fun learning experience in itself!

Social

The social aspect of a conference is just as important as the sessions themselves—it’s where connections are made, ideas are exchanged, and the community truly comes alive. At pgext.day 2025, we had the chance to enjoy dinner together both before and after the conference, giving everyone time to relax, share

[...]

Adventures in Extension Packaging
Posted by David Wheeler in Tembo on 2025-05-22 at 17:31

I gave a presentation at PGConf.dev last week, Adventures in Extension Packaging. It summarizes stuff I learned in the past year in developing the PGXN Meta v2 RFC, re-packaging all of the extensions on pgt.dev, and experimenting with the CloudNativePG community’s proposal to mount extension OCI images in immutable PostgreSQL containers.

Turns out a ton of work and experimentation remains to be done.

I’ll post the link to the video once it goes up, but in the meantime, here are the slides:

Previous work covers the first half of the talk, including:

The rest of the talk encompasses newer work. Read on for details.

Automated Packaging Challenges

Back in December I took over maintenance of the Trunk registry, a.k.a., pgt.dev, refactoring and upgrading all 200+ extensions and adding Postgres 17 builds. This experience opened my eyes to the wide variety of extension build patterns and configurations, even when supporting a single OS (Ubuntu 22.04 “Jammy”). Some examples:

  • pglogical requires an extra make param to build on PostgreSQL 17: make -C LDFLAGS_EX="-L/usr/lib/postgresql/17/lib"
  • Some pgrx extensions require additional params, for example:
  • pljava needs a pointer to libjvm: mvn clean install -Dpljava.libjvmdefault=/usr/lib/x86_64-linux-gnu/libjvm.so
  • plrust needs files to be moved arou
[...]

Best Practices for TimescaleDB Massive Delete Operations
Posted by semab tariq in Stormatics on 2025-05-22 at 13:30

Welcome to the second part of our TimescaleDB best practices series! In the first part, we explored how to perform massive backfill operations efficiently, sharing techniques to optimize performance and avoid common pitfalls. If you haven’t had a chance to read the first part yet, you can check it out using this link

In today’s blog, we will discuss another crucial aspect of time-series data management: massive delete operations.

As your data grows over time, older records often lose their relevance but continue to occupy valuable disk space, potentially increasing storage costs and degrading performance if not managed well.

Let’s walk through some strategies to clean up or downsample aged data in TimescaleDB, helping you maintain a lean, efficient, and cost-effective database.

Prerequisites for Massive Delete Operations

Here are a few important steps to follow when performing a large-scale delete in production, to ensure we are prepared in case something goes wrong.

Tune Autovacuum Settings 

In PostgreSQL, VACUUM is a maintenance process that removes dead tuples, obsolete row versions left behind by UPDATE or DELETE operations. These dead tuples occupy space but are no longer visible to any active transactions. Vacuuming reclaims this space, helping to reduce table bloat and maintain database performance.

The autovacuum feature automates this process by periodically running in the background, ensuring that dead tuples are cleaned up without manual intervention. This is especially important after large delete operations, where a significant number of dead rows can accumulate. If not handled promptly, this can lead to table bloat, increased I/O, and slower query performance.

However, its effectiveness depends heavily on how well it is configured. Without proper tuning, autovacuum may run too infrequently or too slowly, allowing dead tuples to pile up and impact performance.
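As an illustration only, a few of the commonly tuned knobs in postgresql.conf look like this (the values are generic starting points, not necessarily the recommendations the post goes on to give):

```
autovacuum_vacuum_scale_factor = 0.05  # vacuum once ~5% of a table is dead (default 0.2)
autovacuum_vacuum_cost_limit = 2000    # let each worker do more work per round
autovacuum_naptime = 30s               # wake up more often than the 1min default
autovacuum_max_workers = 4             # vacuum more tables concurrently
```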

Here is a list of important autovacuum parameters along with their recommended values th

[...]

pg_dump speed across versions
Posted by Hubert 'depesz' Lubaczewski on 2025-05-22 at 07:44
Got interested recently in speed of pg_dump. Specifically, if, over the years, it has become faster, and if yes, how much. Couple of years ago I was in a position where we needed to run pg_dump, and found some inefficiencies, which got later patched. This was around version 12. So, how does the situation look … Continue reading "pg_dump speed across versions"

pgconf.dev 2025 Wraps Up with Great Success in Montreal
Posted by cary huang in Hornetlabs Technology on 2025-05-21 at 23:31

Introduction

pgconf.dev 2025 just wrapped up in Montreal, Canada, following its successful debut in Vancouver last year—and once again, it delivered a fantastic mix of deep technical content and strong community social activities.

As always, the focus was on both the current state and future direction of PostgreSQL, with over 40 thoughtfully curated technical talks covering performance, storage, extensions, and new features. The week wasn’t just about technical talks though—there were plenty of chances to connect through community events like Meet & Eat, the Social Run, and group dinners, making the experience as social as it was informative.

Montreal brought its own unique charm to the event. With its French-speaking culture, beautiful Old Town, and scenic waterfront, the city felt a little like Europe—laid-back, stylish, and totally different from the west coast vibe of Vancouver. Oh, and the food? Absolutely amazing!

WARNING: long blog post

Conference Highlights

Here are some personal highlights from pgconf.dev 2025, based on my own experience and participation throughout the week. I’ve made an effort to capture key takeaways from the talks I attended, and included photos from the conference to give you a feel for the energy, community, and atmosphere of the event.

Sponsor Swags

At the conference sign-in desk, a colorful array of sponsor swag was neatly displayed alongside the official pgconf.dev T-shirts. From stickers and pens to notebooks, socks, and other branded goodies, the table was a treasure trove for attendees. Everyone was welcome to help themselves and take as many items as they needed — a small but thoughtful way for sponsors to share their appreciation and for participants to bring home a piece of the event. The generous assortment added a lively and welcoming touch to the registration area, setting a positive tone from the moment attendees walked in.

Have you

[...]

Xata Agent v0.3.1: Custom tools via MCP, Ollama integration, support for reasoning models & more
Posted by Gülçin Yıldırım Jelínek in Xata on 2025-05-21 at 12:30
Version 0.3.1 of the open-source Xata Agent adds support for custom MCP servers and tools, introduces Ollama as a local LLM provider, and includes support for reasoning models O1 and O4-mini.

pdot 1.0.0: Exploring Databases Visually, Part III
Posted by Dian Fay on 2025-05-21 at 00:00

In what I can't say isn't a tradition at this point, we're in an odd-numbered year so there's news on the pdot front! Get it here!

The biggest change (and the reason for the big 1-0-0) is simplifying usage: rather than requiring a shell function to plug the graph body into a template for interactive use, pdot now outputs the entire digraph or flowchart markup. The old behavior is still available with the --body flag, but the new default means it's a lot easier to get started -- pdot postgres_air fks | dot -Tpng | wezterm imgcat and go. You only need scripting to do the pipelining for you, or to customize the graph's appearance.

Other notable updates along the way:

  • PGHOST, PGDATABASE, PGUSER, and PGPASSWORD environment variables are honored
  • new policies graph, and many improvements to others especially triggers and function refs
  • usable as a Rust library!

Late last year I also presented at PGConf.EU in Athens, should you be interested.

Data archiving and retention in PostgreSQL. Best practices for large datasets
Posted by DARIA NIKOLAENKO in Data Egret on 2025-05-20 at 17:08

Just over a week ago, I attended PGConf.DE 2025 in Berlin with the rest of the Data Egret team and gave a talk titled “Data Archiving and Retention in PostgreSQL: Best Practices for Large Datasets.” This post is a written version of my talk for those who couldn’t attend.

Below, you’ll find each slide from the talk — along with what was said.

I started by talking about something that happens to almost every Postgres database — the slow, steady growth of data. Whether it’s logs, events, or transactions, old rows pile up, performance suffers, and managing it all becomes tricky. My talk focused on practical ways to archive, retain, and clean up data in PostgreSQL, without breaking queries or causing downtime.

As you can see my work with Postgres focuses a lot on monitoring, performance and automation. I do that at Data Egret, where we help teams run Postgres reliably, both on-prem and in the cloud.

We specialise entirely in Postgres and are deeply involved in the community. We help companies with scaling, migrations, audits, and performance tuning — everything around making Postgres run better.

I was also excited to share that Data Egret is now a part of a new initiative in the Postgres ecosystem: the Open Alliance for PostgreSQL Education. It’s an effort to build open, independent, community-driven certification.

Then I dived into the topic of my talk.

Postgres can handle big tables, but once data starts piling up, it doesn’t always degrade gracefully:

  • queries slow down,
  • VACUUM takes longer,
  • indexes grow,
  • backups get heavier.

And often, you’re keeping old data around for reporting, audits, or just in case. And that’s OKAY. Because the issue isn’t really volume — it’s how we manage it.

This isn’t about discarding data — it’s about managing it wisely. Frequently used, or ‘hot’ data, should remain readily accessible and fast to query, without being archived or moved to storage.
And cold data? Move, c

[...]
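One common retention pattern along these lines (my sketch, not necessarily the talk’s) is range partitioning by time, so cold data is removed by detaching or dropping a partition instead of running a massive DELETE:

```sql
-- Partition by month; retention becomes a metadata operation.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Detach keeps the data as a standalone table (e.g. for archiving),
-- with no row-by-row deletes and no VACUUM debt; then drop when done.
ALTER TABLE events DETACH PARTITION events_2024_01 CONCURRENTLY;
DROP TABLE events_2024_01;
```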

Short alphanumeric pseudo random identifiers in Postgres
Posted by Andrew Atkinson on 2025-05-20 at 16:00

Introduction

In this post, we’ll cover a way to generate short, alphanumeric, pseudo random identifiers using native Postgres tactics.

These identifiers can be used for things like transactions or reservations, where users need to read and share them easily. This approach is an alternative to using long, randomly generated values like UUIDs, which have downsides for usability and performance.

We’ll call the identifier a public_id and store it in a column with that name. Here are some example values:

SELECT public_id
FROM transactions
ORDER BY random()
LIMIT 3;

 public_id
-----------
 0359Y
 08nAS
 096WV

Natural and Surrogate Keys

In database design, we design our schema to use natural and surrogate keys to identify rows.

For our public_id identifier, we’re going to generate it from a conventional surrogate integer primary key called id. We aren’t using natural keys here.

The public_id is intended for use outside the database, while the id integer primary key is used inside the database to be referenced by foreign key columns on other tables.

While public_id is short, which minimizes space and speeds up access, the main reason for it is usability.

With that said, the target for total space consumption was to be fewer bytes than a 16-byte UUID. This was achieved with an integer primary key and this additional 5 character generated value, targeting a smaller database where this provides plenty of unique values now and into the future.

Let’s get into the design details.

Design Properties

Here were the desired design properties:

  • A fixed size, 5 characters in length, regardless of the size of the input integer (and within the range of the integer data type)
  • Fewer bytes of space than a uuid data type
  • An obfuscated, pseudorandom value that is not easily guessable (though not meant to be “secure”)
  • Reversibility back into the original integer
  • Only native Postgres capabilities, no extensions, client web app langu
[...]
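The properties above (fixed 5 characters, reversible, obfuscated but not secure) can be sketched outside the database too. Here is a hypothetical Python illustration, not the post’s actual SQL, using a modular-multiplication permutation over the 62^5 id space:

```python
# Toy domain: ids must be < 62**5 (~916M), slightly smaller than the
# full int4 range the post targets.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
M = 62 ** 5                # number of distinct 5-character strings
A = 1_580_030_173 % M      # odd and not divisible by 31, so coprime with 62**5
A_INV = pow(A, -1, M)      # modular inverse makes the mapping reversible

def encode(n: int) -> str:
    """Obfuscate n and render it as exactly 5 base-62 characters."""
    assert 0 <= n < M
    x = (n * A) % M        # a permutation of [0, M): looks scrambled
    out = []
    for _ in range(5):
        x, r = divmod(x, 62)
        out.append(ALPHABET[r])
    return "".join(reversed(out))

def decode(s: str) -> int:
    """Reverse encode(): parse base-62, then undo the multiplication."""
    x = 0
    for ch in s:
        x = x * 62 + ALPHABET.index(ch)
    return (x * A_INV) % M

print(encode(12345), decode(encode(12345)))
```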

Auto-Release PostgreSQL Extensions on PGXN
Posted by David Wheeler in Tembo on 2025-05-20 at 15:49

I last wrote about auto-releasing PostgreSQL extensions on PGXN back in 2020, but I thought it worthwhile, following my Postgres Extensions Day talk last week, to return again to the basics. With the goal to get as many extensions distributed on PGXN as possible, this post provides step-by-step instructions to help the author of any extension or Postgres utility to quickly and easily publish every release.

TL;DR

  1. Create a PGXN Manager account
  2. Add a META.json file to your project
  3. Add a pgxn-tools powered CI/CD pipeline to publish on tag push
  4. Fully-document your extensions
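As a sketch of step 2, a minimal META.json might look like this (the names here are placeholders; consult the PGXN meta spec for the required fields):

```json
{
   "name": "my_extension",
   "abstract": "A one-line description of what the extension does",
   "version": "0.1.0",
   "maintainer": "Your Name <you@example.com>",
   "license": "postgresql",
   "provides": {
      "my_extension": {
         "file": "sql/my_extension.sql",
         "version": "0.1.0"
      }
   },
   "meta-spec": {
      "version": "1.0.0",
      "url": "https://pgxn.org/meta/spec.txt"
   }
}
```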

Release your extensions on PGXN

PGXN aims to become the de facto source for all open-source PostgreSQL extensions and tools, in order to help users quickly find and learn how to use extensions to meet their needs. Currently, PGXN distributes source releases for around 400 extensions (stats on the about page), a fraction of the ca. 1200 known extensions. Anyone looking for an extension that might exist to solve some problem must rely on search engines to find potential solutions scattered across PGXN, GitHub, GitLab, blogs, social media posts, and more. Without a single trusted source for extensions, and with the proliferation of AI slop in search engine results, finding extensions aside from a few well-known solutions proves a challenge.

By publishing releases and full documentation — all fully indexed by its search index — PGXN aims to be that trusted source. Extension authors provide all the documentation, which PGXN formats for legibility and linking. See, for example, the pgvector docs.

If you want to make it easier for users to find your extensions, to read your documentation — not to mention provide sources for binary packaging systems — publish every release on PGXN.

Here’s how.

Create an Account

Step one: create a PGXN Manager account. The Emai

[...]

pgstream v0.5.0 update
Posted by Esther Minano in Xata on 2025-05-20 at 12:00
Improved user experience with new transformers, YAML configuration, CLI refactoring and table filtering.

Welcome new GSoC 2025 contributors!
Posted by Pavlo Golub in Cybertec on 2025-05-20 at 06:00

I’m pleased to welcome seven new Google Summer of Code 2025 contributors to the Postgres community!

I encourage you to welcome contributors during these first weeks to get them excited and invested in our community. You will meet them on mailing lists, Slack, Discord, and other media.

The table below details information about this year’s project, contributors, and mentors!

Project Title | Contributor | Assigned Mentors
Enhancements to pgwatch v3 RPC integration | Ahmad Gouda | Akshat Jaimini, Pavlo Golub
pgmoneta: Incremental backup for PostgreSQL 13-16 | Ashutosh Sh | Haoran Zhang, Jesper Pedersen
Extension Support for pgexporter | Bassam Adnan | Saurav Pal, Jesper Pedersen
Upgrade pgwatch Grafana dashboards to v11 | Gaurav Patidar | Rajiv Harlalka, Pavlo Golub
ABI Compliance Checker | Mankirat Singh | David Wheeler, Pavlo Golub
pgmoneta: WAL Filtering | Mohab Yasser | Shahryar Soltanpour, Jesper Pedersen
Enhancing Pgagroal Security | Tejas Tyagi | Luca Ferrari, Jesper Pedersen

We expect GSoC contributors to actively participate in the Community Bonding period from May 8th to June 1st. The goal of this period is to prepare contributors to begin their project work effectively on June 2nd. So please help them settle in.

It was an insane start to the year! The GSoC program had the highest number of proposals ever, as well as the highest number of spam and AI-generated applications. Due to the high volume of new organizat

[...]

PgPedia Week, 2025-05-18
Posted by Ian Barwick on 2025-05-19 at 12:43

A very short edition this week...

PostgreSQL 18 changes this week

Following last week's beta1 release, things seem to have been quite quiet on all fronts, which hopefully means people are busy testing and not finding issues. From previous experience, this is the point in the release cycle where I start to review the changes over the past year and work out what I've missed (feedback always welcome!).

PostgreSQL 18 articles

  • Good time to test io_method (for Postgres 18) (2025-05-12) - Tomas Vondra discusses io_method and io_workers

more...

Hacking Workshop for June 2025
Posted by Robert Haas in EDB on 2025-05-19 at 12:00

Next month, I'll be hosting 2 or 3 discussions of Masahiko Sawada's talk, PostgreSQL meets ART - Using Adaptive Radix Tree to speed up vacuuming, from 2024.pgconf.dev. If you're interested in joining us, please sign up using this form and I will send you an invite to one of the sessions.

Read more »

Mini Summit 5 Transcript: Improving the PostgreSQL Extensions Experience in Kubernetes with CloudNativePG
Posted by David Wheeler in Tembo on 2025-05-19 at 03:05

The final PostgreSQL Extension Mini-Summit took place on May 7. Gabriele Bartolini gave an overview of PostgreSQL extension management in CloudNativePG (CNPG). This talk brings together the topics of several previous Mini-Summits — notably Peter Eisentraut on implementing an extension search path — to look at the limitations of extension support in CloudNativePG and the possibilities enabled by the extension search path feature and the Kubernetes 1.33 ImageVolume feature. Check it out:

Or read on for the full transcript with thanks to Floor Drees for putting it together.

Introduction

Floor Drees.

On May 7 we hosted the last of five (5) virtual Mini-Summits that led up to the big one at the Postgres Development Conference (PGConf.Dev), taking place next week, in Montreal, Canada. Gabriele Bartolini, CloudNativePG maintainer, PostgreSQL Contributor, and VP Cloud Native at EDB, joined to talk about improving the Postgres extensions experience in Kubernetes with CloudNativePG.

The organizers:

The stream and the closed captions available for the recording are supported by PGConf.dev and their gold level sponsors, Google, AWS, Huawei, Microsoft, and EDB.

Improving the Postgres extensions experience in Kubernetes with CloudNativePG

Gabriele Bartolini.

Hi everyone. Thanks for this opportunity, and thank you Floor and David for inviting me today.

I normally start every presentation with a question, and this is actually the question that has been hitting me and the other maintainers of CloudNati

[...]

Taming ReorderBufferWrite - Boost Logical Decoding in Postgres
Posted by Robins Tharakan on 2025-05-18 at 00:05
Performance bottlenecks in a Postgres logical replication or Change Data Capture (CDC) stream can be subtle, but one specific wait event, ReorderBufferWrite, often points directly at how your application interacts with the database during these processes. Let's unpack this wait event and see how your application's workload patterns can
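A quick way to check whether a decoding session is hitting this wait event (my illustration, not from the post) is to sample pg_stat_activity while replication is running:

```sql
-- Backends currently spilling the reorder buffer to disk show up
-- with wait_event = 'ReorderBufferWrite' (wait_event_type = 'IO').
SELECT pid, backend_type, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event = 'ReorderBufferWrite';
```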

PostGIS 3.6.0alpha1
Posted by Regina Obe in PostGIS on 2025-05-18 at 00:00

The PostGIS Team is pleased to release PostGIS 3.6.0alpha1! Best Served with PostgreSQL 18 Beta1 and GEOS 3.13.1.

This version requires PostgreSQL 12 - 18beta1, GEOS 3.8 or higher, and Proj 6.1+. To take advantage of all features, GEOS 3.12+ is needed. To take advantage of all SFCGAL features, SFCGAL 2.1.0+ is needed.

3.6.0alpha1

This release is an alpha of a major release; it includes bug fixes since PostGIS 3.5.3 and new features.

PostGIS 3.5.3
Posted by Regina Obe in PostGIS on 2025-05-17 at 00:00

The PostGIS Team is pleased to release PostGIS 3.5.3.

This version requires PostgreSQL 12 - 18beta1, GEOS 3.8 or higher, and Proj 6.1+. To take advantage of all features, GEOS 3.12+ is needed. SFCGAL 1.4+ is needed to enable postgis_sfcgal support. To take advantage of all SFCGAL features, SFCGAL 1.5+ is needed.

3.5.3

This release is a bug fix release that includes bug fixes since PostGIS 3.5.1.

How to Safely Perform Backfill Operations in TimescaleDB
Posted by semab tariq in Stormatics on 2025-05-16 at 10:27

Backfilling data into a TimescaleDB hypertable in production can be very tricky, especially when automated processes like compression policies are involved. From past experience, we have seen that if backfill operations aren’t handled properly, they can interfere with these automated tasks, sometimes causing them to stop working altogether. 

This blog covers a safer and more reliable approach to backfilling hypertables, along with best practices to prevent disruptions to compression and other background processes.

What is a Backfill Operation?

Backfilling means adding old or missing data into the database table after some time has already passed. 

Imagine you are collecting temperature readings every hour, but your system was down for a day and didn’t save any data. Later, you get that missing data from the local storage of the device or cloud storage, and want to put it back in the right hypertable, which is called backfilling. 

In TimescaleDB, this is common with time-series data, but it needs to be done carefully. That’s because TimescaleDB might already be doing things in the background, like compressing old data to save space. If we are not careful, backfilling can mess up these automatic tasks.
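A careful backfill therefore usually brackets the insert with policy and compression management. The function names below are TimescaleDB’s, but the exact workflow and table names are my illustration, not necessarily the post’s steps:

```sql
-- 1. Pause the compression policy so it doesn't fight the backfill:
SELECT alter_job(job_id, scheduled => false)
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';

-- 2. Decompress the chunks covering the backfill window:
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('conditions', newer_than => '2025-05-01'::timestamptz) AS c;

-- 3. Insert the late data, recompress, and re-enable the policy:
INSERT INTO conditions SELECT * FROM staging_conditions;

SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('conditions') AS c;

SELECT alter_job(job_id, scheduled => true)
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
```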


Contributions for the week of 2025-05-05 (Week 19)
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-05-15 at 14:27

An Vercammen, Gregory Gioffredi and Wim Bertels organized PGConf.BE 2025, with help from Vera Demaiter and Aimée Lynn Backiel.

  • Speakers are: Teresa Lopes, Guy Gyles, Emrah Becer, Mayuresh Bagayatkar, Gülçin Yıldırım Jelinek, Boriss Mejías, Gianni Ciolli, Martín Marqués, Stefan Fercot, Dwarka Rao

On May 7, the Postgres Extensions Ecosystem Mini-Summit #4 hosted CloudNativePG maintainer Gabriele Bartolini, who spoke about Improving the PostgreSQL Extensions Experience in Kubernetes with CloudNativePG (recording, transcript).

Andreas Kretschmer, Andreas Scherbaum, Andreas Schmitz, Daniel Westermann, Danilo Endesfelder, Julia Gugel, and Kai Wagner organized PGConf.DE 2025.

[...]

From VACUUM to Deutsche Bahn: The Most Unexpected Moments at PGConf.DE 2025
Posted by Christoph Berg in Cybertec on 2025-05-14 at 13:31

Pre-Conference Travels: Debian & Berlin

The week before this year's PGConf.DE, I attended the Debian MiniDebConf in Hamburg. Adding Berlin to the itinerary made for a busy two weeks on the road. Still, it gave me time to prep for my PGConf talk while catching up on Debian-related discussions.

Conference Kickoff & Venue

We hit the road to Berlin the day before the conference. My colleague Bernd Helmle drove, and after a relaxed six-hour ride, we arrived. The venue was familiar — the same as PGConf.EU 2022 — a hotel with a large conference floor built around a sprawling atrium. While rooms were somewhat dispersed, the layout helped accommodate the crowd of 340 attendees comfortably.

Andreas "ads" Scherbaum kicked off the event with characteristic German punctuality: “We are in Germany, let’s start on time!”

Day 1: Talk Highlights

Practical Lessons & Query Life Cycles

First up, I sat in on Bernd Patolla's session on migrating from Oracle to PostgreSQL. Honestly, I was distracted, as I was still polishing slides for my own talk.

I then partially tuned into Sergey Dudoladov’s session on the life cycle of a query in PostgreSQL — again, nerves and last-minute edits dominated.

However, I fully engaged with András Váczi’s practical insights into running PostgreSQL on Windows. From backup headaches to tool compatibility, it was a reality check on cross-platform database management.

Read András Váczi's talk on running PostgreSQL on Windows here.

Modern VACUUM: My Talk

In my own session, Modern VACUUM, I explored the evolution of PostgreSQL’s VACUUM and autovacuum processes. The talk stemmed from internal CYBERTEC consulting team discussions: VACUUM has changed a lot, and I needed to catch up and share that knowledge.

Highlights from my talk included:

  • Why VACUUM exists
  • Base features from 8.3
  • Key milestones:
    • Visibility map (8.4, 9.6)
    • Parallelism (9.5, 13)
    • Index skipping (12, 14)
    • IO improvements (12)
[...]
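Several of those milestones surface directly in VACUUM's modern option syntax. As a quick illustration (the table name `measurements` is hypothetical; the options are from the PostgreSQL documentation):

```sql
-- Parallel index vacuuming (PostgreSQL 13+), with progress messages
VACUUM (PARALLEL 4, VERBOSE) measurements;

-- INDEX_CLEANUP was added in 12 (ON/OFF) and gained AUTO in 14,
-- letting VACUUM skip index cleanup when few tuples are dead
VACUUM (INDEX_CLEANUP AUTO) measurements;
```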

Xata: Postgres with data branching and PII anonymization
Posted by Tudor Golubenco in Xata on 2025-05-14 at 11:00
Relaunching Xata as "Postgres at scale". A Postgres platform with Copy-on-Write branching, data masking, and separation of storage from compute.

2025 GSOC: Mankirat Singh — ABI Compliance Reporting
Posted by David Wheeler in Tembo on 2025-05-13 at 18:25

I’m pleased to welcome Mankirat Singh to the Postgres community as a 2025 Google Summer of Code contributor. Mankirat will be developing an ABI compliance checker and reporting system to help identify and prevent unintentional ABI changes in future minor Postgres releases. This follows on the heels of the addition of ABI and API guidance in Postgres 18, as well as the ABI-breaking Postgres 17.1 release. What timing!

Please follow Mankirat’s blog as he develops the project this summer, under the mentorship of myself and Pavlo Golub. It should also soon be on Planet PostgreSQL. We’ve also set up the #gsoc2025-abi-compliance-checker channel on the community Slack for ad-hoc discussion. Join us!

Step by Step Guide on Setting Up Physical Streaming Replication in PostgreSQL
Posted by Umair Shahid in Stormatics on 2025-05-13 at 13:37

Physical streaming replication in PostgreSQL allows you to maintain a live copy of your database on a standby server, which continuously receives updates from the primary server’s WAL (Write-Ahead Log). This standby (or hot standby) can handle read-only queries and be quickly promoted to primary in case of failover, providing high availability and disaster recovery.

In this guide, I will walk through provisioning a primary PostgreSQL 16 server and a standby server on Linux, configuring them for streaming replication, and verifying that everything works. I assume you are an experienced engineer familiar with Linux, but new to PostgreSQL replication, so I will keep it friendly and straightforward.

Figure: Real-time data streaming from a primary PostgreSQL server (left) to a standby server (right). The standby constantly applies WAL records received from the primary over a network connection, keeping an up-to-date copy of the database ready for failover.

Step 1: Prepare Two Linux Servers and Install PostgreSQL 16
Before diving into PostgreSQL settings, set up two Linux servers (virtual or physical). One will act as the primary database server, and the other as the standby (read replica). For a smooth replication setup, both servers should be as similar as possible in OS, hardware, and PostgreSQL version. In particular, ensure the following prerequisites:

PostgreSQL 16 is installed on both servers via the official PostgreSQL repositories. Both servers must run the same major PostgreSQL version and architecture (mixing different versions won’t work for physical replication). If you haven’t installed PostgreSQL yet, do so now (e.g., on Ubuntu: sudo apt install postgresql-16, or on RHEL/CentOS: use the PostgreSQL Yum repository). Make sure the PostgreSQL service is running on the primary server.

Network connectivity: The standby must be able to reach the primary on the PostgreSQL port (default 5432). If the servers are in a cloud environment like AWS EC2, configure the security group or firewall

[...]
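On the SQL side, the pieces such a guide typically sets up on the primary look roughly like this; the role name `replicator`, the slot name `standby1`, and the password are hypothetical placeholders, while `pg_create_physical_replication_slot` and `pg_stat_replication` are standard PostgreSQL:

```sql
-- On the primary: a dedicated role the standby will connect as
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';

-- Optional but recommended: a physical replication slot, so the
-- primary retains WAL until the standby has received it
SELECT pg_create_physical_replication_slot('standby1');

-- Once the standby is streaming, verify replication on the primary
SELECT client_addr, state, sync_state FROM pg_stat_replication;
```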

PostgreSQL Conference Germany 2025
Posted by Josef Machytka in credativ on 2025-05-13 at 06:00

PGConf.DE 2025, the 9th Annual PostgreSQL Conference Germany, was held on May 8–9, 2025, at the Marriott Hotel near Potsdamer Platz in Berlin. The event brought together PostgreSQL enthusiasts, developers, DBAs, and industry sponsors for two days of fascinating talks across four parallel tracks. With 347 attendees, it was the biggest edition so far. The whole conference was very well organized; special thanks are due to all the organizers, in particular the main organizer Andreas Scherbaum, for their efforts and hard work.

Our company, credativ GmbH, independent once again, participated as a gold sponsor. credativ's CTO Alexander Wirt, Head of Sales & Marketing Peter Dreuw, and database team leader Tobias Kauder were available for attendees at the credativ booth. Many thanks to our colleague Sascha Spettmann for transporting all the stands and billboards to the conference and back.

    

In total, we held four talks at the conference. Michael Banck, technical leader of our database team, presented the German-language talk “PostgreSQL Performance Tuning.” He provided a deep and comprehensive overview of the most important performance-tuning parameters in PostgreSQL and explained how they influence the database’s behavior. His talk attracted a large audience and was very well received.

  

I had an absolutely unique opportunity to present three different talks in the English track. In my regular talk “PostgreSQL Connections Memory Usage: How Much, Why and When,” I presented the results of my research and tests on PostgreSQL connections’ memory usage. After explaining the most important aspects of Linux memory management and measurements of memory usage reported by standard commands, I detailed PostgreSQL connection memory usage during query execution based on numbers reported in smaps files. I intend to publish detailed blog posts about my findings soon. My other talk, “Building a Data Lakehouse with PostgreSQL,” was originally c

[...]

PgPedia Week, 2025-05-11
Posted by Ian Barwick on 2025-05-12 at 10:59

This week saw the routine quarterly round of PostgreSQL minor version updates, so now is your chance to upgrade. Note that PostgreSQL 13 is now officially EOL, so now is the time to start planning an upgrade.

PostgreSQL 18 changes this week

As expected, the PostgreSQL 18 beta1 release is now available, together with the PostgreSQL 18 documentation and the release notes. Note that as the REL_18_STABLE branch has not yet been created, the PostgreSQL 18 and devel documentation are currently identical.

PostgreSQL 18 articles

  • PostgreSQL 18 Beta Preview – Export or Amend Statistics with Ease (2025-05-10) - Deepak Mahto
  • Waiting for Postgres 18: Accelerating Disk Reads with Asynchronous I/O (2025-05-07) - Lukas Fittl / pganalyze
  • Waiting for PostgreSQL 18 - Add function to get memory context stats for processes (2025-05-05) - Hubert 'depesz' Lubaczewski discusses pg_get_process_memory_contexts()

more...



Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.