Latest Blog Posts

Prairie PUG March 17 recording
Posted by Henrietta Dombrovskaya on 2026-03-25 at 13:35

Paul Whalen’s talk recording is now available! Enjoy!

The Real Shift in Data Platforms Is Not Just AI. It Is Fewer Seams.
Posted by Vibhor Kumar on 2026-03-24 at 23:14

Introduction: The Market Is Talking About AI, but the Deeper Change Is Architectural

The database market is full of confident declarations right now. One vendor says the cloud data warehouse era is ending. Another argues that AI is redrawing the database landscape. A third claims that real-time analytics is now the center of gravity. Each story contains some truth, and each vendor naturally presents itself as the answer.

But there is a risk in taking these narratives too literally. The deeper shift in enterprise data platforms is not simply that AI is changing databases. It is that modern platforms are being forced to reduce the seams between systems. That is the more important architectural story, and it is the one that will matter long after today’s product positioning slides have been replaced by tomorrow’s.

For years, enterprises tolerated fragmented data architectures because the fragmentation felt manageable. One system handled transactions. Another handled analytics. A streaming layer was added for movement and enrichment. Dashboards sat elsewhere. Then machine learning appeared, followed by vector stores, feature stores, observability engines, and lakehouse layers. For a while, the industry treated this as normal evolution. Eventually, however, many teams discovered that they were not building a platform so much as negotiating peace between products.

That is why this moment matters. AI may be accelerating the conversation, but the real pressure is architectural. Enterprises are trying to simplify how data flows, how systems interact, and how teams operate. In other words, they are trying to remove seams.

The Problem: The Cost of Separation Has Become Too High

The old world was built around separation. In one sense, that separation was rational. Different workloads genuinely do have different requirements. Transactions need integrity and predictability. Analytics often need scale and throughput. Observability workloads have different ingestion and retention patterns. AI expe

[...]

PostgreSQL High Availability on OCI: Why Your Failover Passes Every Test But Breaks in Production
Posted by Umair Shahid in Stormatics on 2026-03-24 at 09:59

Your PostgreSQL HA cluster promotes a new primary. Patroni says everything is healthy. But your application is still talking to the old, dead node. Welcome to the OCI VIP problem.

If you have built PostgreSQL high availability clusters on AWS or Azure, you have probably gotten comfortable with how virtual IPs work. You assign a VIP, your failover tool moves it, and your application reconnects to the new primary. Clean. Simple. Done.

Then you try the same thing on Oracle Cloud Infrastructure and something quietly goes wrong.

The cluster promotes. Patroni (or repmgr, or whatever you are using) does its job. The standby becomes the new primary. But the VIP does not follow. Your application keeps sending traffic to the old node — the one that just failed. From the outside, it looks like the database is down. From the inside, everything looks green.

This is one of the more frustrating failure modes we have worked through in production. Not because it is hard to fix, but because it is hard to catch. It passes every test you throw at it right up until the moment it matters.

Let me walk you through why this happens, how to fix it, and how to pick the right approach for your environment.

Why OCI Handles VIPs Differently

On AWS, a secondary private IP can float between instances within a subnet. You call assign-private-ip-addresses and it moves. On Azure, you update a NIC’s IP configuration. In both cases, your failover tool can handle this natively, or with a small callback script.
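On AWS, for example, the entire reassignment is a single CLI call (the ENI ID and address below are placeholders):

# Move the VIP to the new primary's ENI; --allow-reassignment lets the
# address be claimed even while it is still assigned to the failed node.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0123456789abcdef0 \
    --private-ip-addresses 10.0.0.100 \
    --allow-reassignment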

OCI does not work that way.

On OCI, a virtual IP (implemented as a secondary private IP on a VNIC) is explicitly bound to a specific instance’s Virtual Network Interface Card. It cannot float between instances the way it does on AWS or Azure. When your primary fails and the standby gets promoted, the VIP stays attached to the old instance’s VNIC. It does not move on its own, and the stand

[...]

EXPLAIN's Other Superpowers
Posted by Richard Yen on 2026-03-23 at 08:00

Introduction

Most people who work with PostgreSQL eventually learn two commands for query tuning: EXPLAIN and EXPLAIN ANALYZE.

EXPLAIN shows the planner’s chosen execution plan, and EXPLAIN ANALYZE runs the query and adds runtime statistics. For most tuning tasks, this already provides a wealth of information.

But what many people don’t realize is that EXPLAIN has a handful of other options that can make troubleshooting much easier. In some cases they answer questions that EXPLAIN ANALYZE alone cannot.

In this post we’ll take a look at a few of those lesser-known options.


BUFFERS: Where Did the Data Come From?

One common question during performance analysis is whether data came from shared buffers (cache), from disk, or from temporary buffers. This is where the BUFFERS option comes in handy. Output can look something like this:

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM mytable WHERE id = 123;
[...]
  Index Scan using mytable_pkey on mytable
  Buffers: shared hit=5 read=2
[...]

In the example above, we see:

  • shared hit – pages already in cache (i.e., cache hit)
  • shared read – pages fetched from disk (i.e., cache miss)

Note that buffers in this context are 8-kilobyte blocks of memory, PostgreSQL's default block size.

This is extremely useful when trying to determine whether performance problems are related to a cold cache, excessive disk reads, or insufficient memory (i.e., the cache is too crowded to keep all the data being worked with).

Especially for index scans, this information confirms whether a query that should be index-friendly is actually pulling large portions of the table into memory.


MEMORY: Memory Used by the Planner

This is a new feature introduced in version 18. It differs from BUFFERS in that it tracks the amount of memory consumed during the query planning phase, not execution. The output appears at the bottom of the EXPLAIN output, like this:

EXPLAIN (MEMORY, ANALYZE, TIMING OFF)
SELECT * FROM mytable WHERE id = 123;
[...]
 Pla
[...]

Contributions for week 11, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-03-23 at 07:16

The PostgreSQL User Group Vienna met for the very first time on Wednesday, March 18 2026, organised by Cornelia Biacsics.

Speakers:

  • Ranjeet Kumar
  • Jan Karremans
  • Pavlo Golub

On Thursday, March 19 2026, the AMS DB came together for talks and networking.

Organised by:

  • Floor Drees
  • Jessie Tu
  • Mattias Jonsson
  • Daniël van Eeden

Speakers:

  • Matthias van de Meent
  • Martin Alderete
  • León Castillejos

The PostgreSQL England North Meetup Group also met for the very first time on Thursday, March 19 2026. It was organised by Daniel Chapman and Stephen Wood.

Speakers:

  • Kate Gowron
  • Lauro Ojeda

The PostgreSQL User Group, Paris met on Thursday, March 19 2026.

Organised by:

  • Sébastien DELOBEL
  • Vincent Mercier
  • Vik Fearing

Speakers:

  • Lætitia Avrot
  • Vincent Mercier

Claire Giordano and Aaron Wislang hosted and published a new podcast episode on March 20 2026, “Building Postgres Services on Azure with Charles Feddersen”, from the Talking Postgres series.

The Program Committee for PGDay Armenia has completed voting:

  • Emma Saroyan (Voting Chair)
  • Boriss Mejías
  • Derk van Veen
  • Dian Fay
  • Floor Drees
  • Gülçin Yıldırım Jelínek
  • Ilya Kosmodemiansky
  • Laurenz Albe
  • Teresa Lopes
  • Vik Fearing

Community Blog Posts:

MM-Ready - An Origin Story
Posted by Antony Pegg in pgEdge on 2026-03-23 at 06:34

I'm a Product Manager. Not a developer. I want to be upfront about that because everything that follows only makes sense if you understand that I have no business writing software - and I did it anyway.

I built MM-Ready, an open-source CLI tool that scans a PostgreSQL database and tells you exactly what needs to change before you can run multi-master replication with pgEdge Spock. It checks 56 things across your schema, replication config, extensions, sequences, triggers, and SQL patterns. It gives you a severity-graded report - CRITICAL, WARNING, CONSIDER, INFO - with specific remediation steps for each finding. It runs against a live database, a schema dump file, or an existing Spock installation.

I built the first version in about four hours using Claude Code while operating in a zombie-like half-asleep state. Not so much vibe-coding as trance-coding.

The Priority that's never priority enough

Every customer evaluating the move to multi-master replication has the same question: "What, if anything, do I need to change in my database before I can turn this on?"

The answer could be "nothing", or it could potentially touch dozens of things. Tables without primary keys are limited to insert-only replication - with logical replication, UPDATE and DELETE require a unique row identifier. Foreign keys with CASCADE actions can fire on the origin and create conflicts because foreign keys aren't validated at the subscriber. Certain sequence types don't replicate well. Some extensions aren't compatible. Deferrable constraints get silently skipped because a deferrable constraint doesn't guarantee uniqueness for logical replication conflict resolution. So on and so forth. These things aren't bugs or issues - they're a natural consequence of a shift in operational mindset. It's akin to moving from a one-steering-wheel driving experience to getting into a skid-steer with its highly maneuverable double joysticks.

Our Customer Success team was having to do this analysis mostly by hand. They had scripts they'd written, tribal knowledge passed bet[...]

Waiting for PostgreSQL 19 – Add pg_plan_advice contrib module.
Posted by Hubert 'depesz' Lubaczewski on 2026-03-22 at 13:44
On 12th of March 2026, Robert Haas committed patch: Add pg_plan_advice contrib module.   Provide a facility that (1) can be used to stabilize certain plan choices so that the planner cannot reverse course without authorization and (2) can be used by knowledgeable users to insist on plan choices contrary to what the planner believes … Continue reading "Waiting for PostgreSQL 19 – Add pg_plan_advice contrib module."

pg_regresql: truly portable PostgreSQL statistics
Posted by Radim Marek on 2026-03-21 at 14:15

The previous article showed that PostgreSQL 18 makes optimizer statistics portable, but left one gap open:

It's not worth trying to inject relpages as the planner checks the actual file size and scales it proportionally.

The planner doesn't trust pg_class.relpages. It calls smgrnblocks() to read the actual number of 8KB pages from disk. Your table is 74 pages on disk but pg_class.relpages says 123,513? The planner uses the ratio to scale reltuples down to match the actual file size. The selectivity ratios stay correct, plan shapes mostly survive, but the absolute cost estimates are off.

For debugging a single query, that's usually fine. For automated regression testing where you compare EXPLAIN costs across runs, it breaks things. A cost threshold of 2× means something different when the baseline was computed from fake-scaled numbers.

As part of my work on RegreSQL, I'm happy to announce the pg_regresql extension, which fixes this by hooking directly into the planner.

Why the planner ignores relpages

When PostgreSQL's planner calls get_relation_info() in plancat.c, it delegates to estimate_rel_size() which ends up in table_block_relation_estimate_size() in tableam.c. There, the actual page count comes from the storage manager:

curpages = RelationGetNumberOfBlocks(rel);

The function then computes a tuple density from pg_class (reltuples / relpages) and multiplies it by curpages to estimate tuples. So pg_class.reltuples isn't ignored, it's scaled to match the real file size. The reasoning is sound for normal operation: the catalog might be stale, but the file system is always current.

The same applies to indexes. The planner reads their actual sizes from disk too.
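The mismatch is easy to observe directly from SQL (t here stands in for any table whose statistics were injected):

SELECT relpages,                -- what the catalog claims
       reltuples,               -- catalog tuple estimate
       pg_relation_size('t') / current_setting('block_size')::int
           AS actual_pages      -- what the storage manager reports
  FROM pg_class
 WHERE relname = 't';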

What pg_regresql does

The extension hooks into get_relation_info_hook, a planner callback that runs after PostgreSQL reads the physical file stats. The hook replaces the file-based numbers with the values stored in pg_class:

  • rel->pages is replaced with pg_class.relpages
  • rel->tuples is replaced with pg_class.reltuples
[...]

PostgreSQL Sequence Reset: START WITH vs RESTART WITH vs SETVAL Explained
Posted by Deepak Mahto on 2026-03-21 at 12:58

Recently, during an Oracle to PostgreSQL migration for an enterprise customer, while designing the cutover runbook we were evaluating the steps needed to reset sequence values to match the source, so that every new value requested via nextval is genuinely new and does not cause a transactional failure.

This is one of the critical steps of any database migration: in most cases the sequence LAST_VALUE is not migrated to the target implicitly, and sequence values need to match the source so that new transactions never fail and the application works after cutover.

We used ora2pg’s SEQUENCE_VALUES export to generate the DDL commands that set the last values of sequences.

Sample ora2pg command for exporting the sequence DDL:

ora2pg -t SEQUENCE_VALUES -c ora2pg_conf.conf -o sequence_values.sql

The Bug That Started This

Ora2pg generates this for sequence migration:

ALTER SEQUENCE seq_sample START WITH 1494601;

ora2pg’s SEQUENCE_VALUES export generates ALTER SEQUENCE ... START WITH. That command does not reset the sequence.

Expected: next nextval() = 1494601. Actual: sequence continues from wherever it was.
Silently wrong: the kind of bug that surfaces as a “balance mismatch” during post-cutover checks.
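A minimal reproduction of the difference, using a scratch sequence (seq_sample here stands in for the migrated sequence):

CREATE SEQUENCE seq_sample;
SELECT setval('seq_sample', 42);               -- counter now stands at 42

ALTER SEQUENCE seq_sample START WITH 1494601;
SELECT nextval('seq_sample');                  -- returns 43: START WITH did not move the counter

ALTER SEQUENCE seq_sample RESTART WITH 1494601;
SELECT nextval('seq_sample');                  -- returns 1494601: RESTART actually resets it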

Understanding the Different Sequence Reset Options

Any sequence in PostgreSQL has three internal fields:

  • start_value – the reference point for a bare RESTART
  • last_value – where the counter actually is (nextval reads this)
  • is_called – has last_value been consumed yet?

We have a couple of options to reset last_value for a sequence.

The following table summarizes them in comparison with START WITH.

Command      | start_value | last_value  | is_called   | Next nextval()
START WITH n | ✓ updated   | ✗ unchanged | ✗ unchanged |
[...]

pg_clickhouse 0.1.5
Posted by David Wheeler on 2026-03-20 at 19:15

I’ve been busy with an internal project at work, but have responded to a few pg_clickhouse reports for a couple crashes and vulnerabilities, thanks to pen testing and a community security report. These changes drive the release of v0.1.5 today.

Get it from the usual sources:

Appreciation to my employer, ClickHouse, for championing this extension.

Using Patroni to Build a Highly Available Postgres Cluster—Part 3: HAProxy
Posted by Shaun Thomas in pgEdge on 2026-03-20 at 06:47

Welcome to Part three of our series for building a High Availability Postgres cluster using Patroni! Part one focused entirely on establishing the DCS using etcd to provide the critical backbone for the cluster, and part two added Patroni and Postgres to the software stack. While it's entirely possible to stop at that point and use the cluster as-is, there's one more piece that will make it far more functional overall.

New connections need a way to reach the primary node easily and consistently. Patroni provides a REST interface for interrogating each node for its status, making it a perfect match for any software or load-balancer layer compatible with HTTP checks. Part three focuses on adding HAProxy to fill that role, completing the cluster with a routing layer.

Hopefully you still have the three VMs where you installed etcd, Postgres, and Patroni. We will need those VMs for the final stage, so if you haven't already gone through the steps in parts one and two, come back when you're ready. Otherwise, let's complete the cluster!

What HAProxy adds

HAProxy is one of the most common HTTP proxies available, but it also has a hidden superpower: it can transparently redirect raw TCP connections as well. This means it can also act as a proxy for any kind of service such as Postgres. Here's how it works:
  • HAProxy connects to the Patroni REST interface and gets the status for the "/" URL.
  • Patroni will only respond with a "200 OK" status on the primary node. All other nodes will produce a "500" error of some kind.
  • HAProxy marks nodes that respond with errors as unhealthy.
  • All connections get routed to the only "healthy" node: the primary for the cluster.
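A minimal HAProxy stanza implementing these checks might look like the following sketch (node names, addresses, and the listen port are illustrative; 8008 is Patroni's default REST port, and a global/defaults section is assumed elsewhere in the config):

listen postgres_primary
    bind *:5000
    mode tcp
    option httpchk GET /
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server node1 10.0.0.11:5432 check port 8008
    server node2 10.0.0.12:5432 check port 8008
    server node3 10.0.0.13:5432 check port 8008

Clients then connect to port 5000 on the proxy, and only the node whose Patroni REST check returns 200 receives traffic.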
Of course that's not the end of it; the Patroni REST API is incredibly powerful, as it provides multiple additional endpoints. For example a check against:
  • /replica will succeed if the node is a healthy streaming replica of the primary, a good match for offloading intensive read queries from the primary node.
  • /read-only works on
[...]

Prairie Postgres March Meetup
Posted by Henrietta Dombrovskaya on 2026-03-19 at 22:57

Yes, it was St. Patrick’s Day, and also Illinois Primaries, and the weather was beyond bad, but we still had a good crowd!

Pizza always comes first :), because nobody is going to go hungry! Whether you stay for Postgres or not is up to you, so I am assuming that when people are coming and staying, it’s not just for pizza 🍕

On a more serious note, huge thanks to Paul Whalen! Paul presented the talk Early Learnings Building Apps on Postgres in the Claude Code Era: The Importance of the Iteration Loop. It was incredibly interesting, and the attendees kept asking questions for another 40 minutes after the talk. I hope we will keep this conversation going at our Developers’ Summit on April 14! Sign up to join us – here is the RSVP link! And please share what topics you would like to discuss – here!

PostgreSQL HA Without SSH: Why Open Source efm_extension Matters in a Zero-Trust World
Posted by Vibhor Kumar on 2026-03-19 at 18:14

Introduction: The End of “Just SSH Into the Box”

There was a time when High Availability in PostgreSQL came with an implicit assumption: if something important happened, an administrator could log into the server, inspect the state of the cluster, and run the command that steadied the ship. That assumption is fading fast. In many modern enterprises, direct OS-level access is no longer part of the operating model. SSH is locked down, bastion access is tightly controlled, and every administrative pathway is examined through the lens of zero-trust security.

And yet—High Availability doesn’t wait.

That shift creates a very real operational question for database teams: how do you maintain control of a PostgreSQL HA environment when the traditional control surface has been deliberately removed?

This is where the refreshed vision for efm_extension becomes interesting. It is an open-source PostgreSQL extension, released under the PostgreSQL License, designed to expose EDB Failover Manager (EFM) operations directly through SQL—bringing operational control into a governed, auditable layer.


The HA Landscape: Patroni, repmgr, and EFM

PostgreSQL High Availability has never been a one-size-fits-all story.

Some teams lean toward Patroni, embracing distributed coordination and cloud-native patterns. Others prefer repmgr, valuing its simplicity and DBA-centric workflows. And then there is EDB Failover Manager (EFM)—a mature, enterprise-grade solution designed to monitor streaming replication clusters and orchestrate failover with predictability and control.

Each of these tools reflects a different philosophy. But they share one quiet assumption:

Operational control happens at the OS level.

And that assumption is exactly where modern security models push back.


The Real Problem: Control Without Access

In today’s enterprise environments, responsibilities are deliberately separated.

Platform and SRE teams own the servers.

Application DBAs own the databases.

[...]

Waiting for PostgreSQL 19 – Introduce the REPACK command
Posted by Hubert 'depesz' Lubaczewski on 2026-03-19 at 18:07
On 10th of March 2026, Álvaro Herrera committed patch: Introduce the REPACK command   REPACK absorbs the functionality of VACUUM FULL and CLUSTER in a single command. Because this functionality is completely different from regular VACUUM, having it separate from VACUUM makes it easier for users to understand; as for CLUSTER, the term is heavily … Continue reading "Waiting for PostgreSQL 19 – Introduce the REPACK command"

An Ultimate Guide to Upgrading Your PostgreSQL Installation: From 17 to 18
Posted by Ilya Kosmodemiansky in Data Egret on 2026-03-19 at 11:27

 

PostgreSQL major version upgrades are one of those tasks that every DBA has to deal with regularly. They are routine — but they are also full of small, potentially dangerous details that can turn a straightforward maintenance window into an incident. Having performed hundreds of upgrades across different environments over the years, I want to share a comprehensive, practical guide to upgrading from PostgreSQL 17 to 18, with particular focus on what has changed and what has finally improved in the upgrade process itself.

This article is based on my PGConf.EU 2024 talk, updated to cover the PostgreSQL 17→18 upgrade path and the significant improvements that landed in version 18. At this time of year we usually recommend that our customers upgrade: the current release, 18.3, is stable enough.

Why Upgrades Matter

Let me start with a reality check. PostgreSQL major versions are supported for five years. If you are running a version that is past its end-of-life, you are exposed to unpatched security vulnerabilities and bugs that the community will never fix. But even within the support window, newer versions bring performance improvements, new features, and better tooling. The question is not whether to upgrade, but how to do it safely and with minimal downtime.

The upgrade itself is not rocket science. The tricky part is the combination of small details — replication slots, extension compatibility, configuration drift between old and new clusters, statistics collection, and the behavior of your connection pooler during the switchover. Any one of these can bite you if you are not paying attention.
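As a concrete safety net for the switchover itself, pg_upgrade ships a dry-run mode that validates cluster compatibility without touching any data (the paths below are illustrative Debian-style locations):

/usr/lib/postgresql/18/bin/pg_upgrade \
    --old-bindir=/usr/lib/postgresql/17/bin \
    --new-bindir=/usr/lib/postgresql/18/bin \
    --old-datadir=/var/lib/postgresql/17/main \
    --new-datadir=/var/lib/postgresql/18/main \
    --check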

Preparation

Read the release notes. If you want to jump from, let's say, version 13 to version 18, read them all. There could be some manual steps you need to take.

Plan and test. Upgrades are simple and straightforward, even major upgrades. The problem is in the small details, which are easy to overlook. Collation, checksums, extensions, a forgotten checkpoint, or customized statistics targets could make your li

[...]

Hacking Workshop for April/May 2026
Posted by Robert Haas in EDB on 2026-03-18 at 19:47

I'm planning to hold a single hacking workshop for April and May combined, covering Masahiko Sawada's talk, Breaking away from FREEZE and Wraparound, given at PGCon 2022. If you're interested in joining us, please sign up using this form and I will send you an invite to one of the sessions. Thanks to Sawada-san for agreeing to join us.

Read more »

Waiting for PostgreSQL 19 – Allow table exclusions in publications via EXCEPT TABLE.
Posted by Hubert 'depesz' Lubaczewski on 2026-03-18 at 11:15
On 4th of March 2026, Amit Kapila committed patch: Allow table exclusions in publications via EXCEPT TABLE.   Extend CREATE PUBLICATION ... FOR ALL TABLES to support the EXCEPT TABLE syntax. This allows one or more tables to be excluded. The publisher will not send the data of excluded tables to the subscriber.   To … Continue reading "Waiting for PostgreSQL 19 – Allow table exclusions in publications via EXCEPT TABLE."

RAG With Transactional Memory and Consistency Guarantees Inside SQL Engines
Posted by Ibrar Ahmed in pgEdge on 2026-03-18 at 06:04

Most RAG systems were built for a specific workload: abundant reads, relatively few writes, and a document corpus that doesn't change much. That model made sense for early retrieval pipelines, but it doesn't reflect how production agent systems actually behave. In practice, multiple agents are constantly writing new observations, updating shared memory, and regenerating embeddings, often at the same time. The storage layer that worked fine for document search starts showing cracks under that kind of pressure.

The failures that result aren't always obvious. Systems stay online, but answers drift. One agent writes a knowledge update while another is mid-query, reading a half-committed state. The same question asked twice returns different answers. Embeddings exist in the index with no corresponding source text. These symptoms get blamed on the model, but the model isn't the problem. The storage layer is serving up an inconsistent state, and no amount of prompt engineering can fix that.

This isn't a new class of problem. Databases have been solving concurrent write correctness for decades, and PostgreSQL offers guarantees that meet those agent memory needs.
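A sketch of what those guarantees look like in practice (table and column names are hypothetical; the vector literal assumes the pgvector extension):

BEGIN;
UPDATE memories
   SET body = 'revised observation', updated_at = now()
 WHERE id = 42;
UPDATE memory_embeddings
   SET embedding = '[0.11, 0.42, 0.73]'
 WHERE memory_id = 42;
COMMIT;
-- Readers never see the new text paired with the old embedding,
-- or an embedding without its matching source row: both updates
-- become visible atomically at commit.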

What RAG Systems Are Missing Today

RAG systems depend on memory that evolves over time, but most current architectures were designed for static document search rather than stateful reasoning, creating fundamental correctness, consistency, and reproducibility problems in production environments.

Stateless Retrieval Problems and Solutions

Most RAG pipelines treat retrieval as a stateless search over embeddings and documents. The system pulls the top matching chunks with no awareness of how memory has evolved, what the agent's current session context is, or where a piece of information sits on a timeline. For static document search, that limitation rarely matters. For agent memory, where knowledge changes continuously, it is a real problem.

Without stateful awareness, retrieval starts mixing facts from different points in time. One query might retrieve yester[...]

Local LLM with OpenWeb UI and Ollama
Posted by Ryan Lambert on 2026-03-18 at 05:01

Like much of the world, I have been exploring capabilities and realities of LLMs and other generative tools for a while now. I am focused on using the technology with the framing of my technology-focused work, plus my other common scoping on data privacy and ethics. I want basic coding help (SQL, Python, Docker, PowerShell, DAX), ideation, writing boilerplate code, and leveraging existing procedures. Naturally, I want this available offline in a private and secure environment. I have been focused on running a local LLM with RAG capabilities and having control over what data goes where, and how it is used. Especially data about my conversations with the generative LLM.

This post collects my notes on what my expectations and goals are, and outlines the components I am using currently, and thoughts on my path forward.

SCaLE 23x and CloudNativePG: Robust, Self-Healing PostgreSQL on Kubernetes
Posted by Jimmy Angelakos on 2026-03-17 at 13:37

Obligatory selfie from SCaLE 23x

The 23rd edition of the Southern California Linux Expo, or SCaLE 23x, took place from March 5-8, 2026, in Pasadena, California. It was another fantastic community-run event with talks you don't get to hear anywhere else, and that incredible open-source community spirit.

One of the major bonuses of the event was attending the closing keynote by Doug Comer, one of the legendary fathers of the Internet: “Software Distribution Now and Then: Why and How the Internet Changed”.

While I didn't broadcast any live streams from the conference floor this year, I did end up catching some great talks (fortunately everything's recorded!) and having some deeply rewarding hallway track conversations. A highlight was catching up with folks from LPI, the legendary Jon "maddog" Hall, and Henrietta Dombrovskaya. We had a great discussion around our ongoing PostgreSQL Compatibility initiative: We are continuing to define what "Postgres Compatible" truly means to prevent market confusion and ensure a reliable "standard" for users. If you are interested in contributing to this effort, come join the conversation on our new Discord server:

PostgreSQL @ SCaLE 23x

Postgres once again had a stellar presence at SCaLE with a dedicated PostgreSQL track. We had an entire lineup of trainings and talks, plus a PostgreSQL Booth in the expo hall and the always-popular SCaLE 23x PostgreSQL Ask Me Anything session. A massive thank you to the organizers of PostgreSQL@SCaLE, the trainers, speakers, and all the volunteers who made it happen!

On Friday, March 6th, I had the pleasure of delivering my talk, "CloudNativePG: Robust, Self-Healing PostgreSQL on Kubernetes".

Great slide transition as Jimmy is presenting CloudNativePG at SCaLE 23x. Many thanks to Josh Lee for the photo!

The session offered insight into how we can stop treating database instances like delicate flowers and utilize modern infrastructure-

[...]

Waiting for PostgreSQL 19 – Add non-text output formats to pg_dumpall
Posted by Hubert 'depesz' Lubaczewski on 2026-03-17 at 10:29
On 26th of February 2026, Andrew Dunstan committed patch: Add non-text output formats to pg_dumpall   pg_dumpall can now produce output in custom, directory, or tar formats in addition to plain text SQL scripts. When using non-text formats, pg_dumpall creates a directory containing: - toc.glo: global data (roles and tablespaces) in custom format - map.dat: … Continue reading "Waiting for PostgreSQL 19 – Add non-text output formats to pg_dumpall"

We skipped the OLAP stack and built our data warehouse in vanilla Postgres
Posted by Noémi Ványi in Xata on 2026-03-17 at 09:00
Skip the OLAP stack. See how we built a product analytics warehouse on vanilla Postgres with materialized views, pg_cron, and copy-on-write branches.

pgNow: Instant PostgreSQL Performance Diagnostics in Minutes
Posted by Hamza Sajawal in Stormatics on 2026-03-17 at 06:53

pgNow is a lightweight PostgreSQL diagnostic tool developed by Redgate that provides quick visibility into database performance without requiring agents or complex setup. It connects directly to a PostgreSQL instance and delivers real-time insights into query workloads, active sessions, index usage, configuration health, and vacuum activity, helping DBAs quickly identify performance bottlenecks. Because it runs as a simple desktop application, pgNow is particularly useful for quick troubleshooting and point-in-time diagnostics when a full monitoring platform is not available. 

The tool is currently free to use, and its development is actively maintained by Redgate, with potential future enhancements expected as the project evolves. It analyzes workload behavior using PostgreSQL system views and extensions such as pg_stat_activity and pg_stat_statements.

Prerequisites

Enable pg_stat_statements in PostgreSQL

Most PostgreSQL distributions already include the pg_stat_statements extension. You only need to enable it in shared_preload_libraries and create the extension in the database.

Create the extension:

CREATE EXTENSION pg_stat_statements;

Verify the extension:

SELECT extname FROM pg_extension WHERE extname = 'pg_stat_statements';

Configure statement tracking:

Set the tracking level to capture all statements:

ALTER SYSTEM SET pg_stat_statements.track = 'all';

Reload the configuration:

SELECT pg_reload_conf();

Verify the setting:

SHOW pg_stat_statements.track;

Create a monitoring user:

CREATE USER monitor_user WITH PASSWORD '123#abc';

-- Grant connection permission

GRANT CONNECT ON DATABASE myoddodb TO monitor_user;

-- Grant usage on schema (adjust schema name if needed)

GRANT USAGE ON SCHEMA public TO monitor_user;

-- Grant specific table permissions for monitoring

GRANT SELECT ON pg_stat_activity TO monitor_user;

GRANT SELECT ON pg_stat_database TO monitor_user;

GRANT SELECT ON pg_stat_all_tables TO monitor_user;

GRANT SELECT ON pg_stat_user_tables TO monitor_user;

[...]

COMMENT to the MCP Rescue
Posted by Bruce Momjian in EDB on 2026-03-16 at 22:15

The COMMENT command has been in Postgres for decades. It allows text descriptions to be attached to almost any database object. During its long history, it was mostly seen as a nice-to-have addition to database schemas, allowing administrators and developers to more easily understand the schema. Tools like pgAdmin allow you to assign and view comments on database objects.

Now, in the AI era, there is something else that needs to understand database schemas — MCP clients. Without database object comments, MCP clients can only use the database schemas, object names, and constraints. With database comments, database users can supply valuable information to allow MCP clients to more effectively match schema objects to user requests and potentially generate better SQL queries. If database users don't want to add such comments, it might be possible for generative AI to create appropriate comments, perhaps by analyzing data in the tables.
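As a sketch, comments of the kind an MCP client could consume might look like this (the table and column names are made up for illustration):

```sql
COMMENT ON TABLE orders
    IS 'One row per customer order; cancelled orders have status = ''cancelled''.';
COMMENT ON COLUMN orders.placed_at
    IS 'Timestamp (UTC) when the order was placed.';

-- Comments are readable via obj_description() / col_description()
SELECT col_description('orders'::regclass, attnum)
FROM pg_attribute
WHERE attrelid = 'orders'::regclass AND attname = 'placed_at';
```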

What Is in pg_gather Version 33 ?
Posted by Jobin Augustine in Percona on 2026-03-16 at 17:03
It started as a humble personal project a few years back. The objective was to convert all my PostgreSQL notes and learnings into an automatic diagnostic tool, so that even a new DBA could easily spot problems. The idea was simple: a tool that doesn't need any installation but does all possible analysis and […]

Contributions for week 10, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-03-16 at 08:20

On Tuesday March 10, 2026 PUG Belgium met for the March edition, organized by Boriss Mejias and Stefan Fercot.

Speakers:

  • Esteban Zimanyi
  • Thijs Lemmens
  • Yoann La Cancellera

Robert Haas organized a Hacking Workshop on Tuesday March 10, 2026. Tomas Vondra discussed questions about one of his talks.

The PostgreSQL Edinburgh meetup for March 2026 met on Thursday March 12, 2026.

Speakers:

  • Radim Marek
  • Jimmy Angelakos

FOSSASIA Summit 2026 took place from Sunday March 8 - Tuesday March 10, 2026 in Bangkok.

PostgreSQL speakers:

  • Koji Annoura
  • Charly Batista
  • Gary Evans
  • Joe Conway
  • Suraj Kharage
  • Robert Treat
  • Sameer Kumar
  • Roneel Kumar
  • Sivaprasad Murali
  • Yugo Nagata
  • Denis Smirnov
  • Vaibhav Dalvi
  • Gyeongseon Park
  • Bo Peng
  • Brian McKerr
  • Chris Travers
  • Jirayut Nimsaeng
  • Gilles Darold
  • Rajni Baliyan

PostgreSQL Conference India took place in Bengaluru (India) from March 11 - March 13, 2026.

Organizers:

  • Pavan Deolasee
  • Ashish Kumar Mehra
  • Nikhil Sontakke
  • Hari Kiran
  • Rushabh Lathia

Talk Selection Committee:

  • Amul Sul
  • Dilip Kumar
  • Marc Linster
  • Thomas Munro
  • Vigneshwaran c

Speakers:

  • Abhijeet Rajurkar
  • Aditya Duvuri
  • Ajit Awekar
  • Amit Kumar Singh
  • Amogh Bharadwaj
  • Amul Sul
  • Andreas Scherbaum
  • Ashutosh Bapat
  • Avinash Vallarapu
  • Boopathi Parameswaran
  • Claire Giordano
  • Danish Khan
  • Deepak R Mahto
  • Dilip Kumar
  • Divya Bhargov
  • Dr. M. J. Shankar Raman
  • Franck Pachot
  • Hari Kiran
  • Hari Prasad
  • Harish Perumal
  • Jayant Haritsa
  • Jim Mlodgenski
  • Jobin Augustine
  • Joe Conway
  • Kanthanathan S
  • Kevin Biju
  • Koji Annoura
  • Kranthi Kiran Burada
  • Lalit Choudhary
  • Michael Zhilin
  • Mithun Chicklore Yogendra
  • Mohit Agarwal
  • NarendraSingh Tawar
  • Neel Patel
  • Neeta Goel
  • Nikhil Chawla
  • Nikhil Sontakke
  • Nishad Mankar
  • Palak
[...]

Learning AI Fast with pgEdge's RAG
Posted by Richard Yen on 2026-03-16 at 08:00

Introduction

If you’ve been paying attention to the technology landscape recently, you’ve probably noticed that AI is everywhere. New frameworks, new terminology, and a dizzying array of acronyms and jargon: LLM, RAG, embeddings, vector databases, MCP, and more.

Honestly, it’s been difficult to figure out where to start. Many tutorials either dive deep into machine learning theory (Bayesian transforms?) or hide everything behind a single API call to a hosted model. Neither approach really explains how these systems actually work.

Recently I spent some time experimenting with the pgEdge AI tooling after hearing Shaun Thomas’ talk at a PrairiePostgres meetup. He talked about how to set up the various components of an AI chatbot system, starting from ingesting documents into a Postgres database, vectorizing the text, setting up a RAG and then an MCP server.

When I got home I wanted to try it out for myself – props to the pgEdge team for making it all free and open-source! What surprised me most was not just that everything worked, but how easy it was to get a complete AI retrieval pipeline running locally. More importantly, it turned out to be one of the clearest ways I’ve found to understand how modern AI systems are constructed behind the scenes. Thanks so much, Shaun!


The pgEdge AI Components

The pgEdge AI ecosystem provides several small tools that fit together naturally. I’ll go through them quickly here:

  • Doc Converter – The doc-converter normalizes documents into a format that is easy to process downstream. Whether the input is PDF, HTML, Markdown, or plain text, the converter produces clean text output suitable for ingestion.
  • Vectorizer – The vectorizer handles the process of converting text chunks into embeddings. These embeddings are numeric representations of text that capture semantic meaning. Once generated, they can be stored inside PostgreSQL using pgvector and queried with similarity search.
  • Retrieval-Augmented Generation (RAG) Server – The R
[...]
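The vectorizer-plus-pgvector step described above can be sketched in plain SQL. This is a minimal illustration, not the pgEdge tooling itself; the table name, the tiny vector dimension, and the literal vectors are all made up:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Store one embedding per text chunk (the dimension depends on the embedding model)
CREATE TABLE doc_chunks (
    id        bigserial PRIMARY KEY,
    chunk     text NOT NULL,
    embedding vector(3)
);

INSERT INTO doc_chunks (chunk, embedding)
VALUES ('example chunk', '[0.1, 0.2, 0.3]');

-- Similarity search: nearest chunks by cosine distance
SELECT chunk
FROM doc_chunks
ORDER BY embedding <=> '[0.1, 0.2, 0.25]'
LIMIT 5;
```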

AI Features in pgAdmin: AI Insights for EXPLAIN Plans
Posted by Dave Page in pgEdge on 2026-03-16 at 06:31

This is the third and final post in a series covering the new AI functionality in pgAdmin 4. In the first post, I covered LLM configuration and the AI-powered analysis reports, and in the second, I introduced the AI Chat agent for natural language SQL generation. In this post, I'll walk through the AI Insights feature, which brings LLM-powered analysis to PostgreSQL EXPLAIN plans.

Anyone who has spent time optimising PostgreSQL queries knows that reading EXPLAIN output is something of an acquired skill. pgAdmin has long provided a graphical EXPLAIN viewer that makes the plan tree easier to navigate, along with analysis and statistics tabs that surface key metrics, but interpreting what you're seeing and deciding what to do about it still requires a solid understanding of the query planner's behaviour. The AI Insights feature aims to bridge that gap by providing an expert-level analysis of your query plans, complete with actionable recommendations.

Where to Find It

AI Insights appears as a fourth tab in the EXPLAIN results panel, alongside the existing Graphical, Analysis, and Statistics tabs. It's only visible when an LLM provider has been configured, so if you don't see it, check that you've set up a provider in Preferences (as described in the first post). The tab header simply reads 'AI Insights'.

To use it, run a query with EXPLAIN (or EXPLAIN ANALYZE for the most useful results, since actual execution timings give the AI much more to work with), and then click on the AI Insights tab. The analysis starts automatically when you switch to the tab, or you can trigger it manually with the Analyze button.
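For instance, any query run with the standard EXPLAIN options will populate the tab; the table names here are purely illustrative:

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, count(*)
FROM customers c
JOIN orders o ON o.customer_id = c.id
GROUP BY c.name;
```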

What the Analysis Provides

The AI Insights analysis produces three sections:

Summary

A concise paragraph providing an overall assessment of the query plan's performance characteristics. This gives you a quick sense of whether the plan is generally healthy or has significant issues worth investigating. For well-optimised queries, the summary will confirm that the plan looks reasonable; for problematic o[...]

PostgreSQL 19: part 4 or CommitFest 2026-01
Posted by Pavel Luzanov in Postgres Professional on 2026-03-16 at 00:00

Continuing the series of PostgreSQL 19 CommitFest reviews, today we’re covering the January 2026 CommitFest.

The highlights from previous CommitFests are available here: 2025-07, 2025-09, 2025-11.

  • Partitioning: merging and splitting partitions
  • pg_dump[all]/pg_restore: dumping and restoring extended statistics
  • file_fdw: skipping initial rows
  • Logical replication: enabling and disabling WAL logical decoding without server restart
  • Monitoring logical replication slot synchronization delays
  • pg_available_extensions shows extension installation directories
  • New function pg_get_multixact_stats: multixact usage statistics
  • Improvements to vacuum and analyze progress monitoring
  • Vacuum: memory usage information
  • vacuumdb --dry-run
  • jsonb_agg optimization
  • LISTEN/NOTIFY optimization
  • ICU: character conversion function optimization
  • The parameter standard_conforming_strings can no longer be disabled

...

Professional karma
Posted by Ashutosh Bapat on 2026-03-14 at 05:48

In the very early days of my career, an incident made me realise that performing my job irresponsibly would affect me adversely, not just by harming my position, but by affecting my life in other ways too. I was part of a team that produced software used by a financial institution where I held my account. A bug in the software caused a failure which made several accounts, including my bank account, inaccessible! Fortunately I wasn't the one who introduced that bug, and neither was any other software engineer working on the product. It had simply crept through the cracks that the age-old software had developed as it went through many improvements — something that happens to all architectures, software or otherwise, in the world. That was an enlightening and eye-opening experience. But professional karma is not always bad; many times it's good. When the humble work I do to earn my living also improves my living, it gives me immense satisfaction. It means it's also improving billions of lives that way across the globe.

When I was doing my post-graduation at IIT Bombay, I often travelled by train, local and intercity. The online ticketing system for long-distance trains was still in its early stages. Local train tickets were still issued at stations, and getting one required standing in a long queue. Fast forward to today: you can buy a local train ticket on a mobile app or at a kiosk at the station by paying online through UPI. On my recent trip to IIT Bombay I bought such a ticket using GPay in a few seconds. And you know what, UPI uses PostgreSQL as an OLTP database in its system. I didn't have to go through the same experience, thanks to the same education and the work I am doing. Students studying at my alma mater no longer have to go through that painful experience, thanks to the many PostgreSQL contributors who once were students and might have had similar painful experiences in their own lives.



In PGConf.India, Koji Annoura, who is a Graph database expert talked about o

[...]
