I'm planning to hold a single hacking workshop for April and May combined, covering Masahiko Sawada's talk, Breaking away from FREEZE and Wraparound, given at PGCon 2022. If you're interested in joining us, please sign up using this form and I will send you an invite to one of the sessions. Thanks to Sawada-san for agreeing to join us.
Read more »
Most RAG systems were built for a specific workload: abundant reads, relatively few writes, and a document corpus that doesn't change much. That model made sense for early retrieval pipelines, but it doesn't reflect how production agent systems actually behave. In practice, multiple agents are constantly writing new observations, updating shared memory, and regenerating embeddings, often at the same time. The storage layer that worked fine for document search starts showing cracks under that kind of pressure.

The failures that result aren't always obvious. Systems stay online, but answers drift. One agent writes a knowledge update while another is mid-query, reading a half-committed state. The same question asked twice returns different answers. Embeddings exist in the index with no corresponding source text. These symptoms get blamed on the model, but the model isn't the problem. The storage layer is serving up an inconsistent state, and no amount of prompt engineering can fix that.

This isn't a new class of problem. Databases have been solving concurrent write correctness for decades, and PostgreSQL offers guarantees that meet those agent memory needs.
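Those guarantees are easy to picture: a minimal sketch (table and column names here are illustrative, not from the post) of committing a memory update and its regenerated embedding atomically, so a concurrent reader sees either both or neither.

```sql
-- Illustrative sketch: the document text and its embedding commit together,
-- so a concurrent reader never sees an embedding without its source text.
BEGIN;
UPDATE documents SET body = 'revised observation' WHERE id = 42;
DELETE FROM embeddings WHERE doc_id = 42;
INSERT INTO embeddings (doc_id, vec)
VALUES (42, '[0.12, -0.07, 0.33]');  -- regenerated embedding
COMMIT;  -- readers see the old state or the new state, never a mix
```

Under PostgreSQL's default isolation, each statement reads a consistent snapshot, so a query running while this transaction is in flight sees the pre-commit state rather than a half-written mix.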
Like much of the world, I have been exploring capabilities and realities of LLMs and other generative tools for a while now. I am focused on using the technology with the framing of my technology-focused work, plus my other common scoping on data privacy and ethics. I want basic coding help (SQL, Python, Docker, PowerShell, DAX), ideation, writing boilerplate code, and leveraging existing procedures. Naturally, I want this available offline in a private and secure environment. I have been focused on running a local LLM with RAG capabilities and having control over what data goes where, and how it is used. Especially data about my conversations with the generative LLM.
This post collects my notes on what my expectations and goals are, and outlines the components I am using currently, and thoughts on my path forward.
Obligatory selfie from SCaLE 23x
The 23rd edition of the Southern California Linux Expo, or SCaLE 23x, took place from March 5-8, 2026, in Pasadena, California. It was another fantastic community-run event with talks you don't get to hear anywhere else, and that incredible open-source community spirit.
One of the major bonuses of the event was attending the closing keynote by Doug Comer, one of the legendary fathers of the Internet: "Software Distribution Now and Then: Why and How the Internet Changed".
While I didn't broadcast any live streams from the conference floor this year, I did end up catching some great talks (fortunately everything's recorded!) and having some deeply rewarding hallway track conversations. A highlight was catching up with folks from LPI, the legendary Jon "maddog" Hall, and Henrietta Dombrovskaya. We had a great discussion around our ongoing PostgreSQL Compatibility initiative: We are continuing to define what "Postgres Compatible" truly means to prevent market confusion and ensure a reliable "standard" for users. If you are interested in contributing to this effort, come join the conversation on our new Discord server:
Postgres once again had a stellar presence at SCaLE with a dedicated PostgreSQL track. We had an entire lineup of trainings and talks, plus a PostgreSQL Booth in the expo hall and the always-popular SCaLE 23x PostgreSQL Ask Me Anything session. A massive thank you to the organizers of PostgreSQL@SCaLE, the trainers, speakers, and all the volunteers who made it happen!
On Friday, March 6th, I had the pleasure of delivering my talk, "CloudNativePG: Robust, Self-Healing PostgreSQL on Kubernetes".
Great slide transition as Jimmy is presenting CloudNativePG at SCaLE 23x. Many thanks to Josh Lee for the photo!
The session offered insight into how we can stop treating database instances like delicate flowers and utilize modern infrastructure-
[...]The COMMENT command has been in Postgres for decades. It allows text descriptions to be attached to almost any database object. During its long history, it was mostly seen as a nice-to-have addition to database schemas, allowing administrators and developers to more easily understand the schema. Tools like pgAdmin allow you to assign and view comments on database objects.
Now, in the AI era, there is something else that needs to understand database schemas — MCP clients. Without database object comments, MCP clients can only use the database schemas, object names, and constraints. With database comments, database users can supply valuable information that allows MCP clients to more effectively match schema objects to user requests and potentially generate better SQL queries. If database users don't want to add such comments, it might be possible for generative AI to create appropriate comments, perhaps by analyzing data in the tables.
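As a concrete illustration (table and column names assumed, not from the post), comments can encode exactly the domain knowledge an MCP client otherwise lacks:

```sql
COMMENT ON TABLE orders IS
  'One row per customer purchase; join to customers via customer_id.';
COMMENT ON COLUMN orders.status IS
  'Lifecycle state: pending, shipped, or cancelled.';

-- Clients (and humans) can read these back from the catalog:
SELECT obj_description('orders'::regclass, 'pg_class');
```

The same descriptions pgAdmin shows in its UI are available to any tool that queries the catalog.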
On Tuesday March 10, 2026 PUG Belgium met for the March edition, organized by Boriss Mejias and Stefan Fercot.
Speakers:
Robert Haas organized a Hacking Workshop on Tuesday March 10, 2026. Tomas Vondra discussed questions about one of his talks.
PostgreSQL Edinburgh meetup Mar 2026 met on Thursday March 12, 2026
Speakers:
FOSSASIA Summit 2026 took place from Sunday March 8 - Tuesday March 10, 2026 in Bangkok.
PostgreSQL speakers:
PostgreSQL Conference India took place in Bengaluru (India) from March 11 - March 13, 2026.
Organizers:
Talk Selection Committee:
Speakers:
If you’ve been paying attention to the technology landscape recently, you’ve probably noticed that AI is everywhere. New frameworks, new terminology, and a dizzying array of acronyms and jargon: LLM, RAG, embeddings, vector databases, MCP, and more.
Honestly, it’s been difficult to figure out where to start. Many tutorials either dive deep into machine learning theory (Bayesian transforms?) or hide everything behind a single API call to a hosted model. Neither approach really explains how these systems actually work.
Recently I spent some time experimenting with the pgEdge AI tooling after hearing Shaun Thomas’ talk at a PrairiePostgres meetup. He talked about how to set up the various components of an AI chatbot system, starting from ingesting documents into a Postgres database, vectorizing the text, setting up a RAG and then an MCP server.
When I got home I wanted to try it out for myself – props to the pgEdge team for making it all free and open-source! What surprised me most was not just that everything worked, but how easy it was to get a complete AI retrieval pipeline running locally. More importantly, it turned out to be one of the clearest ways I’ve found to understand how modern AI systems are constructed behind the scenes. Thanks so much, Shaun!
The pgEdge AI ecosystem provides several small tools that fit together naturally. I’ll walk through them briefly here.
This is the third and final post in a series covering the new AI functionality in pgAdmin 4. In the first post, I covered LLM configuration and the AI-powered analysis reports, and in the second, I introduced the AI Chat agent for natural language SQL generation. In this post, I'll walk through the AI Insights feature, which brings LLM-powered analysis to PostgreSQL EXPLAIN plans.

Anyone who has spent time optimising PostgreSQL queries knows that reading EXPLAIN output is something of an acquired skill. pgAdmin has long provided a graphical EXPLAIN viewer that makes the plan tree easier to navigate, along with analysis and statistics tabs that surface key metrics, but interpreting what you're seeing and deciding what to do about it still requires a solid understanding of the query planner's behaviour. The AI Insights feature aims to bridge that gap by providing an expert-level analysis of your query plans, complete with actionable recommendations.
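For reference, the plan output such a feature analyses comes from PostgreSQL's EXPLAIN; a minimal example (table names illustrative, not from the post):

```sql
-- ANALYZE actually runs the query and reports real timings;
-- BUFFERS adds shared-buffer hit/read counts to each plan node.
EXPLAIN (ANALYZE, BUFFERS)
SELECT c.name, count(*)
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY c.name;
```

The resulting tree of nodes, costs, row estimates, and buffer statistics is the raw material the AI Insights feature interprets.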
Continuing the series of CommitFest 19 reviews, today we’re covering the January 2026 CommitFest.
The highlights from previous CommitFests are available here: 2025-07, 2025-09, 2025-11.
...
In the very early days of my career, an incident made me realise that performing my job irresponsibly would affect me adversely, not because it would affect my position, but because it could affect my life in other ways too. I was part of a team that produced software used by a financial institution where I held my account. A bug in the software caused a failure which made several accounts, including my bank account, inaccessible! Fortunately I wasn't the one who introduced that bug, and neither was any other software engineer working on the product. It had simply crept through the cracks that the age-old software had developed as it went through many improvements, something that happens to all architectures, software or otherwise. That was an enlightening and eye-opening experience. But professional karma is not always bad; many times it's good. When the humble work I do to earn my living also improves my living, it gives me immense satisfaction. It means it's also improving billions of lives that way across the globe.
When I was a postgraduate student at IIT Bombay, I often travelled by train - local and intercity. The online ticketing system for long-distance trains was still in its early stages. Local train tickets were still issued at stations, and getting one required standing in a long queue. Fast forward to today: you can buy a local train ticket on a mobile app or at a kiosk at the station by paying online through UPI. On my recent trip to IIT Bombay I bought such a ticket using GPay in a few seconds. And you know what, UPI uses PostgreSQL as an OLTP database in its system. I didn't have to repeat the old experience, thanks to the same education and the work I am doing. Students studying at my alma mater no longer have to go through that painful experience, thanks to many PostgreSQL contributors who once were students and might have had similar painful experiences in their own lives.
At PGConf.India, Koji Annoura, a graph database expert, talked about o
[...]
I previously blogged about ensuring that the “ON CONFLICT” directive is used in order to keep vacuum from having to do additional work. I also later demonstrated how use of the MERGE statement accomplishes the same thing.
You can read the original blogs here: Reduce Vacuum by Using “ON CONFLICT” Directive, and here: Follow-Up: Reduce Vacuum by Using “ON CONFLICT” Directive.
Now, in another recent customer case, I was chasing down why the application was triggering tens of thousands of foreign key and constraint violations per day, and I began to wonder whether these kinds of errors also cause additional vacuum work as described in those previous blogs. Sure enough: it DEPENDS.
Let’s set up a quick test to demonstrate:
/* Create related tables: */
CREATE TABLE public.uuid_product_value (
id int PRIMARY KEY,
pkid text,
value numeric,
product_id int,
effective_date timestamp(3)
);
CREATE TABLE public.uuid_product (
product_id int PRIMARY KEY
);
ALTER TABLE uuid_product_value
ADD CONSTRAINT uuid_product_value_product_id_fk
FOREIGN KEY (product_id)
REFERENCES uuid_product (product_id) ON DELETE CASCADE;
/* Insert some mocked up data */
INSERT INTO public.uuid_product VALUES (
generate_series(0,200));
INSERT INTO public.uuid_product_value VALUES (
generate_series(0,10000),
gen_random_uuid()::text,
random()*1000,
ROUND(random()*100),
current_timestamp(3));
/* Vacuum Analyze Both tables */
VACUUM (VERBOSE, ANALYZE) uuid_product;
VACUUM (VERBOSE, ANALYZE) uuid_product_value;
/* Verify that there are no dead tuples: */
SELECT
schemaname,
relname,
n_live_tup,
n_dead_tup
FROM
pg_stat_all_tables
WHERE
relname in ('uuid_product_value', 'uuid_product');
 schemaname |      relname       | n_live_tup | n_dead_tup
------------+--------------------+------------+------------
 public     | uuid_product_value |      10001 |          0
 public     | uuid_product       |        201 |          0

Welcome to Part two of our series about building a High Availability Postgres cluster using Patroni! Part one focused entirely on establishing the DCS using etcd, providing the critical layer that Patroni uses to store metadata and guarantee its leadership token uniqueness across the cluster.

With this solid foundation, it's now time to build the next layer in our stack: Patroni itself. Patroni does the job of managing the Postgres service and provides a command interface for node administration and monitoring. Technically the Patroni cluster is complete at the end of this article, but stick around for part three where we add the routing layer that brings everything together.

Hopefully you still have the three VMs where you installed etcd. Those will be the same place where everything else happens, so if you haven’t already gone through the steps in part one, come back when you’re ready.

Otherwise, let’s get started!
Most PostgreSQL tuning advice chases quick fixes rather than an understanding of why the planner chose one access path or join method over another.

Tuning should not start with running ANALYZE on the tables involved in the query, but with understanding what is causing the issue and why the planner could not choose the optimal path on its own.
Most fixes we reach for in SQL tuning are some variation of:
Add an index.
Rewrite the query.
Bump work_mem.
Done.
Except it’s not done. The same problem comes back, different query, different table, same confusion.
A slow query is a symptom. Statistics, DDL, query style, and PG version are the actual culprits.
Before you touch anything, you need to answer five questions — in order:
Most developers skip straight to question two. Many skip to indexes without asking any question at all.
I presented this framework at PGConf India yesterday to a room full of developers and DBAs, with sharp questions and a lot of “I’ve hit exactly this” moments.
The slides cover core foundations for approaching Query Tuning and production gotchas including partition pruning, SARGability, CTE fences, and correlated column statistics.
Slide – PostgreSQL Query Tuning: A Foundation Every Database Developer Should Build
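One of those gotchas, SARGability, is quick to illustrate (table and index assumed for the example, not from the slides):

```sql
-- Assuming an index such as: CREATE INDEX ON events (created_at);

-- Non-sargable: wrapping the column in a function hides created_at
-- from the index, forcing a scan of every row.
SELECT * FROM events WHERE date(created_at) = '2026-03-01';

-- Sargable rewrite covering the same day, able to use the index:
SELECT * FROM events
WHERE created_at >= '2026-03-01'
  AND created_at <  '2026-03-02';
```

The two queries return identical rows, but only the second gives the planner a range it can push into an index scan.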
This article reviews the November 2025 CommitFest.
For the highlights of the previous two CommitFests, check out our last posts: 2025-07, 2025-09.
...
There is a moment in many database reviews when the room becomes a little too quiet.
Someone asks:
“Which columns in this database are encrypted?”
At first, the answers sound reassuring.
“We use TLS.”
“The disks are encrypted.”
“The application handles sensitive fields.”
And then the real picture starts to emerge.
Some values are encrypted in one service but not another.
Some migrations remembered to apply encryption.
Some scripts did not.
Some backups are safe in theory, but no one wants to test that theory the hard way.
That is the uncomfortable truth of database security:
encryption is often present, but not always enforced where the data actually lives.
That is exactly the problem I wanted to explore with the PostgreSQL extension:
column_encrypt: https://github.com/vibhorkum/column_encrypt
This extension provides transparent column-level encryption using custom PostgreSQL datatypes so developers can read and write encrypted columns without changing their SQL queries.
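For contrast, the conventional non-transparent approach, shown here with the well-known pgcrypto extension (schema illustrative), forces every query to call encrypt and decrypt functions explicitly, which is exactly the friction a custom datatype avoids:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;
CREATE TABLE accounts (id int PRIMARY KEY, ssn bytea);

-- Writes and reads must both name the key and the crypto functions:
INSERT INTO accounts VALUES (1, pgp_sym_encrypt('123-45-6789', 'app-key'));
SELECT pgp_sym_decrypt(ssn, 'app-key') AS ssn FROM accounts;
```

Every application query touching the column must be rewritten this way, which is precisely where enforcement tends to slip.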
And perhaps the most human part of this project is this:
the idea for this project started back in 2016.
It stayed with me for years as one of those engineering ideas that never quite leaves your mind — the thought that PostgreSQL itself could enforce encryption at the column level.
Now I’ve finally decided to release it.
This is the first public version. It’s a starting point — useful, practical, and hopefully something the PostgreSQL community can explore and build upon.
Encryption conversations often focus first on infrastructure.
All of these are important.
But once data is inside the database, a different question matters:
What happens if someone gains access to the database itself?
That access might come from:
When using AWS RDS Proxy, the goal is to achieve connection multiplexing – many client connections share a much smaller pool of backend PostgreSQL connections, giving more resources per connection and keeping query execution running smoothly.
However, if the proxy detects that a session has changed internal state in a way it cannot safely track, it pins the client connection to a specific backend connection. Once pinned, that connection can never be multiplexed again. This was the case with a recent database I worked on.
In this case, we observed the following:
What was strange about it all was that the queries involved were relatively simple, with at most one join.
To get to the root cause, one option was to look in pg_stat_statements. However, that approach had two problems:
pg_stat_statements normalizes queries and does not expose the values passed to parameter placeholders.
Instead, to see the actual parameters, we briefly enabled log_statement = 'all'. This immediately surfaced something interesting in the logs, which could be downloaded and reviewed at my own pace.
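For reference, here is a sketch of toggling that setting on a self-managed instance (on RDS the same parameter is changed through a parameter group rather than ALTER SYSTEM):

```sql
-- Log every statement, including parameter values bound by the driver:
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- ...collect logs, then revert, since 'all' is very verbose:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();
```

Leaving this at 'all' on a busy system can generate substantial log volume, so it is best enabled only for a short diagnostic window.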
What we saw were statements like SELECT set_config($2,$1,$3) with parameters related to JIT configuration – that was the first real clue.
After tracing the behavior through the stack, the root cause turned out to be surprisingly indirect. The application created new connections through SQLAlchemy’s asyncpg dialect, and we needed to drill down into that driver’s behavior.
During connection initialization, SQLAlchemy runs an on_connect hook:
def connect(conn):
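One mitigation worth knowing about (an assumption on my part, not the fix from this investigation): asyncpg accepts a `server_settings` dict that it sends in the connection startup packet, so those settings do not require post-connect SET or set_config() statements. With SQLAlchemy it can be passed through `connect_args`; the helper below just builds that structure.

```python
def startup_connect_args(settings):
    """Build a SQLAlchemy connect_args dict whose settings asyncpg
    applies in the startup packet rather than via post-connect SET."""
    return {"server_settings": dict(settings)}

# Usage (illustrative):
#   engine = create_async_engine(
#       url, connect_args=startup_connect_args({"jit": "off"}))
print(startup_connect_args({"jit": "off"}))
```

Whether this removes every SET issued by a given driver stack needs verifying against the proxy's pinning log messages.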
For much of Postgres's history, it has lived in the shadow of other relational systems, and for a time even in the shadow of NoSQL systems. Those shadows have faded, but it is helpful to reflect on this outcome.
On the proprietary side, most database products are now in maintenance mode. The only database to be consistently compared to Postgres was Oracle. Long-term, Oracle was never going to be able to compete against an open source development team, just like Sun's Solaris wasn't able to compete against open source Linux. Few people would choose Oracle's database today, so it is effectively in legacy mode. The Oracle shadow is clearly fading. In fact, almost all enterprise infrastructure software is open source today.
The MySQL shadow is more complex. MySQL is not proprietary, since it is distributed as open source, so it had the potential to ride the open source wave into the enterprise, and it clearly did from the mid-1990s to the mid-2000s. However, something changed, and MySQL has been in steady decline for decades. Looking back, people want to ascribe a reason for the decline:
Last December, I was part of a long enterprise discussion centered on PostgreSQL.
On paper, it looked familiar: a new major release, high availability and scale, Aurora migration, monitoring, operational tooling, and the growing conversation around AI-assisted operations.
The usual ingredients were all there.
But somewhere in the middle of that day, the tone of the room changed.
It did not change when we talked about new PostgreSQL capabilities. It changed when the conversation moved to upgrades, patching, monitoring quality, and operational control.
That was the moment I realized this was not really a feature discussion.
It was a trust discussion.
Not trust in PostgreSQL as a database. That question is mostly behind us.
It was trust in something more practical: can this platform evolve without exhausting the team responsible for it? Can it scale without becoming harder to reason about? Can it be upgraded without becoming a quarterly trauma ritual? Can it be monitored without operators drowning in false signals? Can it support modernization without making every change feel dangerous?
That, to me, is where the PostgreSQL conversation has matured.
A modern PostgreSQL platform is not defined only by what it can do. It is defined by how calmly it can change.
This matters because PostgreSQL is no longer entering the enterprise through side doors. In many organizations, it is already trusted with serious workloads and is increasingly central to modernization plans.
That changes the questions.
A few years ago, teams often asked whether PostgreSQL was ready for enterprise use. Today, the better question is whether the operating model around PostgreSQL is ready for enterprise reality.
Because the database can be strong while the surrounding practice is weak.
That is where many teams struggle. They like PostgreSQL, but lag on upgrades. They have HA designs, but unclear failure playbooks. They have monitoring, but poor signal qualit
[...]In Part 1, we explored the general concepts of MVCC and the implications of storing data snapshots either out-of-place or within heap storage. We can now map these methodologies to specific database engines.
The PostgreSQL MVCC implementation aligns with the DatabaseI model, whereas Oracle and MySQL are closely related to the DatabaseO model. Specifically, Oracle utilizes block versioning and stores older versions in a separate storage area known as UNDO, while PostgreSQL employs row versioning.
These engines further optimize their respective in-place or out-of-place MVCC strategies:
Early in my PostgreSQL journey, I often sensed that a conversation between two Postgres professionals inevitably revolves around vacuuming. That lighthearted observation still remains relevant, as my LinkedIn feeds are often filled with discussions around vacuuming and comparing PostgreSQL’s Multi-Version Concurrency Control (MVCC) implementation to other engines like Oracle or MySQL. Given that people are naturally drawn to the most complex components of a system, I will continue this journey by exploring a detailed comparison of these database architectures focused on the MVCC implementations.
Stone age databases relied on strict locking mechanisms to handle concurrency, which proved inefficient under heavy load. In these traditional models, a read operation required a shared lock that prevented other transactions from updating the record. Conversely, write operations required exclusive locks that blocked incoming reads. This resulted in significant lock contention, where readers blocked writers and writers blocked readers.
To solve this, RDBMS implemented MVCC. The idea was very simple. Rather than overwriting data immediately, maintain multiple versions of data simultaneously. This allows transactions to view a consistent snapshot of the database as it existed at a specific point in time. For instance, if User 1 starts reading a table just before User 2 starts modifying a record, User 1 sees the original version of the data without hindering User 2’s progress. Without MVCC, the system would be forced to either serialize all access — making User 2 wait — or risk data consistency anomalies like dirty or non-repeatable reads where User 1 sees uncommitted changes that might eventually be rolled back.
Database engines utilize various architectures to manage this data versioning. A particularly notable point of discussion is the comparison between “in-place” and “out-of-place” data versioning techniques. Let’s examine these approaches more closely.
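PostgreSQL's row-versioning flavour of this is easy to observe directly through the hidden system columns xmin and xmax (demo table is illustrative):

```sql
CREATE TABLE demo (id int PRIMARY KEY, v text);
INSERT INTO demo VALUES (1, 'original');

-- xmin is the transaction ID that created this row version:
SELECT xmin, xmax, v FROM demo;

-- An UPDATE does not overwrite in place; it writes a new row version,
-- so xmin changes while the old version remains until vacuum reclaims it:
UPDATE demo SET v = 'modified' WHERE id = 1;
SELECT xmin, xmax, v FROM demo;
```

The superseded version lingering in the heap is exactly what ties this discussion back to vacuuming.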
Number of posts in the past two months
Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.