I am very thankful to everyone who attended yesterday’s meetup! I must be completely honest: I am always pleasantly surprised when people attend our summer meetups. There are so many better things you can do in Chicago in summer!
I am especially thankful to our speaker, Robert Ismo, who delivered a very interactive presentation about AI and Postgres and kept the audience engaged. In fact, the lively discussion was still going when I had to leave to catch my train and asked people to relocate so they could continue, so I can't even say how long it lasted :).
Once again, thank you, everyone, and I will see you on September 10!
When you optimize the CPU time of a transactional database management system, it comes down to one question: how fast can you read a page without breaking consistency? In this post, we explore how OrioleDB avoids locks, trims memory copies, and — starting with beta12 — even bypasses both copying and tuple deforming altogether for fixed-length types during intra-page search. This means that not only are memory copies skipped, but the overhead of reconstructing tuples is also eliminated. The result: an even faster read path, with no manual tuning required.
Every time a PostgreSQL backend descends an OrioleDB B-tree, it needs a consistent view of the target page. Instead of locking the whole page, OrioleDB keeps a 32-bit state word in the page header.
The low bits represent a change count that increments with every data modification; the high bits hold lightweight flags for "exclusive" and "read-blocked" states. A reader copies the bytes it needs, then re-reads the state word and retries if the counter has changed; no locks are taken, yet the reader still obtains a consistent view.
The following pseudo-code illustrates how to copy a consistent page image using a state variable for synchronization.
def read_page(page, image):
    while true:
        state = read_state(page)            # snapshot the state word before copying
        if state_is_blocked(state):
            continue                        # a writer holds the page; retry
        copy_data(image, page)
        newState = read_state(page)         # re-read the state word after copying
        if state_is_blocked(newState) or get_change_count(newState) != get_change_count(state):
            continue                        # the page changed under us; retry
        return                              # the copied image is consistent
Obviously, copying the full 8KB page can be expensive, especially when you only need it to locate a single downlink to navigate through the tree. That's why the OrioleDB page layout is chunked: the search code first grabs the small high-key strip (one high key per chunk) to determine which chunk can possibly contain the key, and then copies just that chunk.
The following pseudo-code illustrates how to copy only the part of the page containing the search key.
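A minimal sketch of how such a partial copy might look is shown below; it reuses the helpers from the previous example, while the chunk-related helpers (highkeys_offset, highkeys_size, find_chunk_by_highkeys, get_chunk_bounds, copy_range) are hypothetical names for illustration, not the actual OrioleDB functions.

def read_page_chunk(page, image, key):
    while true:
        state = read_state(page)
        if state_is_blocked(state):
            continue                                  # a writer holds the page; retry
        # copy only the small high-key strip first
        copy_range(image, page, highkeys_offset(page), highkeys_size(page))
        chunk = find_chunk_by_highkeys(image, key)    # which chunk may contain the key?
        offset, length = get_chunk_bounds(page, chunk)
        copy_range(image, page, offset, length)       # copy just that chunk
        newState = read_state(page)
        if state_is_blocked(newState) or get_change_count(newState) != get_change_count(state):
            continue                                  # the page changed under us; retry the copy
        return chunk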
[...]One of our customers recently asked if they could use their Python application built with SQLAlchemy with pgEdge, and they were pleased to learn that they could. But what is SQLAlchemy, and what considerations might there be when working with a distributed multi-master PostgreSQL cluster like pgEdge Distributed Postgres?

SQLAlchemy is “the Python SQL Toolkit and Object Relational Mapper”, according to its website. It is most famous for its ORM capabilities, which allow you to define your data model and to manage the database schema and access from Python, without having to worry about inconveniences like SQL. A good example from my world is pgAdmin, the management tool project for PostgreSQL that I started nearly 30(!) years ago; pgAdmin 4 stores most of its runtime configuration in either a SQLite database or, for larger shared installations, PostgreSQL. Most of the database code for that purpose uses SQLAlchemy to handle schema creation and upgrades (known as migrations), as it makes them trivial to manage.

One of my awesome colleagues, Gil Browdy, took on the task of showing the customer how pgEdge can work in a distributed environment, and started with a simple script. The script shows the very basics of how we might get started working with SQLAlchemy and pgEdge, so let's take a look at Gil's example.
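The script itself isn't included in this excerpt, but a minimal sketch of that kind of starting point might look like the following. The connection URL, the Customer model and the table name are illustrative assumptions, not Gil's actual code; a pgEdge node is connected to like any other PostgreSQL server.

from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

# Hypothetical connection string; point it at any node of the pgEdge cluster.
engine = create_engine("postgresql+psycopg2://app_user:secret@node1.example.com:5432/appdb")

class Base(DeclarativeBase):
    pass

class Customer(Base):
    # Illustrative model only, not the customer's or Gil's actual schema.
    __tablename__ = "customer"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(String(100))

# Create the table if it does not exist, then insert and read back a row.
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Ada"))
    session.commit()
    for customer in session.scalars(select(Customer)):
        print(customer.id, customer.name)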
So far the conference calendars in https://proopensource.it/conference-calendar/ have been stored in Google Calendar.
As that is an external resource, I decided to move them away from Google.
ProOpenSource OÜ has had a Nextcloud instance running almost since the start of the business, and Nextcloud has a calendar extension.
I switched the private calendars away from Google quite some time ago, but getting the conference calendars running in an iframe took a bit of work.
Now that the embedded calendars from the ProOpenSource Nextcloud instance are working nicely, the change is public as of 2025-07-16.
You can also subscribe directly to the calendars:
Until the end of this year, 2025, I will publish new entries in both calendars, but from 2026 on I will publish new calendar entries only in Nextcloud, as shown and listed in Conference Calendar.
I will also add a calendar entry on 2025-12-31 with a reminder, so that everyone who is still subscribed to the old Google calendars will get a notification about the change.
For those who still want a Google calendar, there is another source available maintained by some people from the community, including me, on the PostgreSQL Person of the Week website.
There is also a difference between the calendars on PostgreSQL Person of the Week and on Conference Calendar:
The one on PostgreSQL Person of the Week is a single calendar containing both conferences and Calls for Papers (CfP).
The one on Conference Calendar consists of two calendars: one for Calls for Papers (CfP), and one for conferences.
When it comes to designing and optimizing databases, one of the most critical aspects is the choice of storage options. PostgreSQL, like many other relational databases, provides various storage options that can significantly impact performance, data integrity, and overall database efficiency.
In this case study, we'll delve into each of PostgreSQL's main storage options, their characteristics, and the factors that influence their choice, enabling you to make informed decisions about your database's storage strategy. You will also learn how to archive data in a hybrid environment for long-term storage.
What we will compare:
Each storage type serves a different purpose and can be used to achieve different goals.
Compliance is a key topic in database engineering, and handling large volumes of audit data matters. Therefore, taking data from the “Oracle unified audit trail” is a good way to demonstrate PostgreSQL's capabilities.
For the purpose of this evaluation, we have imported roughly 144 million rows into PostgreSQL using a “heap” (which is the default storage method):
lakehouse=# SELECT count(*) FROM t_row_plain;
count
-----------
144417515
(1 row)
When storing a typical Oracle audit trail, those 144 million rows translate (without indexes) to roughly 500 bytes per entry, which means we can expect a table that is roughly 72 GB in size:
lakehouse=# SELECT pg_size_pretty(
pg_total_relation_size('t_row_plain')
);
pg_size_pretty
----------------
72 GB
(1 row)
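As a quick sanity check on the arithmetic: 144,417,515 rows × ~500 bytes ≈ 72 × 10⁹ bytes, which is roughly the 72 GB reported above.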
Heaps have significant advantages but also disadvantages compared to other storage methods. While a normal table needs a lot of space compared to other storage options, we can index at will and access single rows in the most efficient way possible.
When talking about audit logs a he
[...]At this year’s PGConf.dev, the premier gathering for PostgreSQL contributors, developers, and community leaders, Zhijie Hou and I had the opportunity to talk about the challenges and solutions around conflict handling in logical replication — a topic increasingly relevant as PostgreSQL adoption continues to grow.
Since our last public update, OrioleDB has continued to evolve with a series of new releases. These updates refine the core engine, extend functionality, and improve performance across a range of workloads. Together, they move us closer to a beta release and lay the groundwork for broader adoption.
OrioleDB is a PostgreSQL storage extension that implements a custom Table Access Method as a drop‑in replacement for the default Heap storage engine. It is designed to address scalability bottlenecks in PostgreSQL’s buffer manager and to reduce WAL overhead, enabling better utilization of modern multi-core CPUs and high‑performance storage systems.
By rethinking core components such as MVCC, page caching, and checkpoints, OrioleDB improves throughput and predictability in transactional workloads without altering PostgreSQL’s user-facing behavior.
Building on this foundation, recent releases have introduced several user-facing enhancements:
B-tree index types on OrioleDB tables.
fillfactor support for OrioleDB tables and indexes.
orioledb_tree_stat() SQL function for space utilization statistics.
These additions improve OrioleDB/PostgreSQL compatibility and provide more flexibility for workloads with diverse schema and indexing requirements.
Alongside these user‑facing additions, significant performance improvements have been made:
B-tree inner page navigati
PostgreSQL 18 beta2 will likely be released on July 17th (Thursday): 18beta2 next week.
libpq: the PQservice() function added in commit 4b99fed7 has been removed
btree_gist: the two changes resulting in extension version bumps have been consolidated into version 1.8
PostgreSQL 18 articles
The cooperative company DALIBO is celebrating its 20th anniversary today, giving me an opportunity to reflect on the reasons behind the success of this collective adventure.
Speaking of DALIBO’s success means first speaking of the PostgreSQL community’s success. When we created the company in 2005 with Jean-Paul Argudo, Dimitri Fontaine and Alexandre Baron, PostgreSQL was a marginal, little-known and unattractive project. Two decades later, it has become the dominant database: an obvious choice, a consensus among most developers, administrators and decision-makers…
So today I could easily tell you the fable of a visionary company, a pioneer that knew before everyone else that Postgres would devour everything in its path… But the truth is that we were lucky to board the right train at the right time :-)
In 2005, even though I had the intuition that this Postgres train would take us far, it was difficult to imagine that the journey would lead us to the very top of the database market… At that time, Oracle was probably the most powerful IT company in the world, Microsoft SQL Server had its own unwavering user base, MySQL was the rising star among web developers and the NoSQL hype was about to begin…
On paper, PostgreSQL seemed to be the ugly duckling of the group: no flashy interface for developers, no outstanding benchmarks, no bombastic press releases…
But in hindsight, the main ingredient for success was already there: an open, decentralized and self-managed community.
When I participated in creating DALIBO, I clearly remember how warm and stimulating the community’s welcome was: people like Bruce Momjian, Simon Riggs and many others supported, encouraged and inspired us.
Because what is so unique about the Postgres community is the sense of community that runs through it.
What I mean by “sense of community” is the ability for individuals to perceive, understand and value what unites them within the same collective. When people manage to grasp together a common objective, s
[...]PUG Stuttgart happened on June 26th, hosted by Aleshkova Daria
Prague PostgreSQL Meetup on June 23 organized by Gulcin Yildirim
Swiss PGDay 2025 took place on June 26th and 27th in Rapperswil (Switzerland)
At my day job, we use row-level security extensively. Several different roles interact with Postgres through the same GraphQL API; each role has its own grants and policies on tables; whether a role can see record X in table Y can depend on its access to record A in table B, so these policies aren't merely a function of the contents of the candidate row itself. There's more complexity than that, even, but no need to get into it.
Two tables, then.
set jit = off; -- just-in-time compilation mostly serves to muddy the waters here
create table tag (
id int generated always as identity primary key,
name text
);
insert into tag (name)
select * from unnest(array[
'alpha', 'beta', 'gamma', 'delta', 'epsilon', 'zeta', 'eta', 'iota', 'kappa', 'lambda', 'mu',
'nu', 'xi', 'omicron', 'pi', 'rho', 'sigma', 'tau', 'upsilon', 'phi', 'chi', 'psi', 'omega'
]);
create table item (
id int generated always as identity primary key,
value text,
tags int[]
);
insert into item (value, tags)
select
md5(random()::text),
array_sample((select array_agg(id) from tag), trunc(random() * 4)::int + 1)
from generate_series(1, 1000000);
create index on item using gin (tags);
alter table tag enable row level security;
alter table item enable row level security;
We'll set up two roles to compare performance. item_admin will have a simple policy allowing it to view all items, while item_reader's access will be governed by session settings that the user must configure before attempting to query these tables.
create role item_admin;
grant select on item to item_admin;
grant select on tag to item_admin;
create policy item_admin_tag_policy on tag
for select to item_admin
using (true);
create policy item_admin_item_policy on item
for select to item_admin
using (true);
create role item_reader;
grant select on item to item_reader;
grant select on tag to item_reader;
-- `set item_reader.allowed_tags = '{alpha,beta}'` and see items tagged
-- alpha or beta
create policy item_reader_tag_policy on tag
for select to item
[...]
Vibhor Kumar and Marc Linster; last updated July 10 2025
Great, big monolithic databases that assembled all the company’s data used to be considered a good thing. When I was Technical Director at Digital Equipment (a long time ago), our business goal was to bring ‘it’ all together into one enormous database instance, so that we could get a handle on the different businesses and have a clear picture of the current state of affairs. We were dreaming of one place where we could see which components were used where, what product was more profitable, and what parts of the business could be evaluated and optimized.
What changed? Why do we now consider monoliths to be dinosaurs that inhibit progress and that should be replaced with a new micro-services architecture?
This article reviews the pros and cons associated with large, monolithic databases, before diving into modular database (micro-)services. We review their advantages and challenges, describe a real-world problem from our consulting background, and outline design principles. The article ends with a discussion of Postgres building blocks for microservices.
Every business I know has been struggling with uniform definitions, such as a uniform price list with historical prices, or a single source of truth, such as the definite list of customers and their purchases. Trying to move all the data into one ginormous system with referential integrity is very tempting, and when it works, it can be very rewarding.
There are also other operational benefits, such as a single maintenance window, a single set of operating instructions, a single vendor, and a single change management process.
However, this centralized approach begins to show its limitations as the database grows to an extreme scale, leading to performance bottlenecks and inflexibility.
The challenges of monolithic systems are significant, and many architects believe that the
[...]When implementing an optimization for derived clause lookup, Amit Langote, David Rowley and I argued about the initial size of the hash table which would hold the clauses. See some discussions around this email on pgsql-hackers.
The hash_create() API in PostgreSQL takes the initial size as an argument and allocates memory for that many hash entries upfront; if more entries are added, it expands that memory later. The point of contention was what the initial size should be for the hash table introduced by that patch, which holds the derived clauses. During the discussion, David hypothesised that the size of the hash table affects the efficiency of hash table operations, depending upon whether the hash table fits in the CPU cache. While I thought it was reasonable to assume so, I expected the practical impact to be unnoticeable: beyond saving a few bytes, choosing the right hash table size wasn't going to have any visible effect, and if a derived clause lookup or insert became a bit slower, nobody would even notice. It was easy enough to address David's concern by using the number of derived clauses at the time of creating the hash table to decide its initial size, and the patch was committed.
Within a few months, I faced the same problem again when working on resizing shared buffers without a server restart. The buffer manager maintains a buffer lookup table in the form of a hash table that maps a page to a buffer. When the number of configured buffers changes upon a server restart, the size of the buffer lookup table changes as well. Doing that in a running server would be significant work. To avoid it, we could create a buffer lookup table large enough to accommodate future buffer pool sizes: even if the buffer pool shrinks or expands, the size of the buffer lookup table would not change, and as long as the expansion stays within the buffer lookup table's size limit, it could be done without a restart. The buffer lookup table isn't as large as the buffer pool itself, thus wasting a bit of memory can be consi
[...]In Part 1 of this series, we discussed what active-active databases are and identified some “good” reasons for considering them, primarily centered around extreme high availability and critical write availability during regional outages. Now, let’s turn our attention to the less compelling justifications and the substantial challenges that come with implementing such a setup.
Last week I posted about how we often don't pick the optimal plan. I got asked about difficulties when trying to reproduce my results, so I'll address that first (I forgot to mention a couple of details). I also got questions about how to best spot this issue and ways to mitigate it. I'll discuss that too; I don't have any great solutions, but I'll briefly cover a couple of possible planner/executor improvements that might allow handling this better.
PostgreSQL 19 development is now officially under way, so from now on any new features will be committed to that version. Any significant PostgreSQL 18 changes (e.g. reversions or substantial changes to already committed features) will be noted here separately (there were none this week).
PostgreSQL 19 changes this week
The first round of new PostgreSQL 19 features is here:
new object identifier type regdatabase, making it easier to look up a database's OID
COPY FROM now supports multi-line headers
cross-type operator support added to contrib module btree_gin
non-array variants of function width_bucket() now permit operand input to be NaN
A flame graph is a graphical representation that helps to quickly understand where a program spends most of its processing time. These graphs are based on sampled information collected by a profiler while the observed software is running. At regular intervals, the profiler captures and stores the current call stack. A flame graph is then generated from this data to provide a visual representation of the functions in which the software spends most of its processing time. This is useful for understanding the characteristics of a program and for improving its performance.
This blog post explores the fundamentals of flame graphs and offers a few practical tips on utilizing them to identify and debug performance bottlenecks in PostgreSQL.
The content presented in this blog post is based on material found in other articles or blog posts, as well as in Brendan Gregg’s excellent book on system performance. Over the years, I have collected a number of commands in my lab notebook that I typically use when diagnosing PostgreSQL-related performance problems. I have shared these commands in several emails over the years, so I decided to write a whole blog post on this topic.
Flame graphs are based on data captured by a profiler. They aggregate call stacks to make it easier to see where a program spends most of its processing time. Without aggregation, it is difficult to see the big picture in the thousands (or more) of call stacks that a profiler collects.
When a flame graph is created, these call stacks are collapsed, and the time spent in similar call stacks is summed up. Based on this data, the flame graph is created. The idea behind this is as follows: the more time a program spends in a particular code path, the more often those call stacks will appear in the samples. Since the resulting graph consists of call stacks of different heights, and the stacks are usually colored in red to yellow tones, it looks like a flame.
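To make the collapsing step concrete, here is a small illustrative sketch (a simplification, not tied to any particular profiler's output format): it folds identical call stacks together and counts how often each was sampled, which is essentially the data a flame graph is drawn from. The sample stacks below are made up, using real PostgreSQL function names purely as example frames.

from collections import Counter

def collapse_stacks(samples):
    # Fold identical call stacks and count how often each one was sampled.
    counts = Counter()
    for stack in samples:
        # Represent a stack as "outermost;...;innermost", the usual folded form.
        counts[";".join(stack)] += 1
    return counts

# Tiny made-up example: three samples, two of them sharing the same stack.
samples = [
    ["main", "exec_simple_query", "ExecutorRun", "ExecSeqScan"],
    ["main", "exec_simple_query", "ExecutorRun", "ExecSeqScan"],
    ["main", "exec_simple_query", "PortalStart"],
]

for folded, count in collapse_stacks(samples).items():
    print(folded, count)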
Brendan Gregg states in ‘The Flame Graph’, ACM Queue, Vol 14, N
[...]We are excited to announce that the schedule for PGDay UK 2025 has been published. We've got an exciting line-up of talks over a range of topics; there will be something for everyone attending.
Take a look at what we have going on: https://pgday.uk/events/pgdayuk2025/schedule/
We'd like to extend our gratitude to the whole CFP team, who did an amazing job selecting the talks to make up the schedule.
Thank you to all speakers who submitted talks; it's always a shame that we can't accept them all, and as ever it was tough to choose the talks for the schedule. Whether it was your 100th time or your 1st time submitting a talk, we hope you submit again in the future and at other PostgreSQL Europe events.
PGDay UK 2025 is taking place in London on September 9th, so don't forget to register for PGDay UK 2025, before it's too late!
The shared presentations are online, as are a couple of recordings and turtle-loading have-a-cup-of-tea locally stored photos.
Using the well-known and widely practised technique of inductive reasoning, we came to the conclusion that this fourth PGConf.be conference was a success, as was the artwork. No animals or elephants were hurt during this event.
The statistics are:
60 attendees
depending on the session, an extra 60 to 150 students attended as well
10 speakers
2 sponsors
This conference wouldn’t have been possible without the help of volunteers.
To conclude, a big thank you to all the speakers, sponsors and attendees.
Without them, a conference is just like a tea party.
Number of posts in the past two months
Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.