PostgreSQL 18 beta2 will likely be released on July 17th (Thursday): 18beta2 next week.
libpq: the PQservice() function added in commit 4b99fed7 has been removed
btree_gist: the two changes resulting in extension version bumps have been consolidated into version 1.8
PostgreSQL 18 articles

The cooperative company DALIBO is celebrating its 20th anniversary today, giving me an opportunity to reflect on the reasons behind the success of this collective adventure.
To speak of DALIBO’s success is first to speak of the PostgreSQL community’s success. When we created the company in 2005 with Jean-Paul Argudo, Dimitri Fontaine and Alexandre Baron, PostgreSQL was a marginal, little-known and unattractive project. Two decades later, it has become the dominant database: an obvious choice, a consensus among most developers, administrators and decision-makers…
So today I could easily tell you the fable of a visionary company, a pioneer that knew before everyone else that Postgres would devour everything in its path… But the truth is that we were lucky to board the right train at the right time :-)
In 2005, even though I had the intuition that this Postgres train would take us far, it was difficult to imagine that the journey would lead us to the very top of the database market… At that time, Oracle was probably the most powerful IT company in the world, Microsoft SQL Server had its own unwavering user base, MySQL was the rising star among web developers and the NoSQL hype was about to begin…
On paper, PostgreSQL seemed to be the ugly duckling of the group: no flashy interface for developers, no outstanding benchmarks, no bombastic press releases…
But in hindsight, the main ingredient for success was already there: an open, decentralized and self-managed community.
When I participated in creating DALIBO, I clearly remember how warm and stimulating the community’s welcome was: people like Bruce Momjian, Simon Riggs and many others supported, encouraged and inspired us.
Because what is so unique about the Postgres community is the sense of community that runs through it.
What I mean by “sense of community” is the ability for individuals to perceive, understand and value what unites them within the same collective. When people manage to grasp together a common objective, s
[...]PUG Stuttgart happened on June 26th - hosted by Aleshkova Daria
Prague PostgreSQL Meetup on June 23 organized by Gulcin Yildirim
Swiss PGDay 2025 took place on June 26th and 27th in Rapperswil (Switzerland)
At my day job, we use row-level security extensively. Several different roles interact with Postgres through the same GraphQL API; each role has its own grants and policies on tables; whether a role can see record X in table Y can depend on its access to record A in table B, so these policies aren't merely a function of the contents of the candidate row itself. There's more complexity than that, even, but no need to get into it.
Two tables, then.
set jit = off; -- just-in-time compilation mostly serves to muddy the waters here
create table tag (
id int generated always as identity primary key,
name text
);
insert into tag (name)
select * from unnest(array[
'alpha', 'beta', 'gamma', 'delta', 'epsilon', 'zeta', 'eta', 'iota', 'kappa', 'lambda', 'mu',
'nu', 'xi', 'omicron', 'pi', 'rho', 'sigma', 'tau', 'upsilon', 'phi', 'chi', 'psi', 'omega'
]);
create table item (
id int generated always as identity primary key,
value text,
tags int[]
);
insert into item (value, tags)
select
md5(random()::text),
array_sample((select array_agg(id) from tag), trunc(random() * 4)::int + 1)
from generate_series(1, 1000000);
create index on item using gin (tags);
alter table tag enable row level security;
alter table item enable row level security;
We'll set up two roles to compare performance. item_admin will have a simple policy allowing it to view all items, while item_reader's access will be governed by session settings that the user must configure before attempting to query these tables.
create role item_admin;
grant select on item to item_admin;
grant select on tag to item_admin;
create policy item_admin_tag_policy on tag
for select to item_admin
using (true);
create policy item_admin_item_policy on item
for select to item_admin
using (true);
create role item_reader;
grant select on item to item_reader;
grant select on tag to item_reader;
-- `set item_reader.allowed_tags = '{alpha,beta}'` and see items tagged
-- alpha or beta
create policy item_reader_tag_policy on tag
for select to item
[...]
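Purely as a hedged illustration (only the setting name item_reader.allowed_tags comes from the comment above; the policy bodies are assumptions rather than the original post's code), policies of this general shape would do the job:
-- illustrative sketch, not the post's actual policies
create policy item_reader_tag_policy on tag
  for select to item_reader
  using (name = any (current_setting('item_reader.allowed_tags', true)::text[]));
create policy item_reader_item_policy on item
  for select to item_reader
  using (tags && (select array_agg(t.id) from tag t
                  where t.name = any (current_setting('item_reader.allowed_tags', true)::text[])));
-- with these in place, `set item_reader.allowed_tags = '{alpha,beta}'` limits
-- item_reader to rows tagged alpha or beta, as described in the comment above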
Vibhor Kumar and Marc Linster; last updated July 10 2025
Great, big monolithic databases that assembled all the company’s data used to be considered a good thing. When I was Technical Director at Digital Equipment (a long time ago), our business goal was to bring ‘it’ all together into one enormous database instance, so that we could get a handle on the different businesses and have a clear picture of the current state of affairs. We were dreaming of one place where we could see which components were used where, what product was more profitable, and what parts of the business could be evaluated and optimized.
What changed? Why do we now consider monoliths to be dinosaurs that inhibit progress and that should be replaced with a new micro-services architecture?
This article reviews the pros and cons associated with large, monolithic databases, before diving into modular database (micro-)services. We review their advantages and challenges, describe a real-world problem from our consulting background, and outline design principles. The article ends with a discussion of Postgres building blocks for microservices.
Every business I know has been struggling with uniform definitions, such as a uniform price list with historical prices, or a single source of truth, such as the definitive list of customers and their purchases. Trying to move all the data into one ginormous system with referential integrity is very tempting, and when it works, it can be very rewarding.
There are also other operational benefits, such as a single maintenance window, a single set of operating instructions, a single vendor, and a single change management process.
However, this centralized approach begins to show its limitations as the database grows to an extreme scale, leading to performance bottlenecks and inflexibility.
The challenges of monolithic systems are significant, and many architects believe that the
[...]While implementing an optimization for derived clause lookup, Amit Langote, David Rowley and I debated the initial size of the hash table (which would hold the clauses). See some discussions around this email on pgsql-hackers.
The hash_create() API in PostgreSQL takes an initial size as an argument and allocates memory for that many hash entries upfront; if more entries are added, it expands that memory later. The point of contention was what the initial size of the hash table introduced by that patch, which holds the derived clauses, should be. During the discussion, David hypothesised that the size of the hash table affects the efficiency of hash table operations, depending on whether the hash table fits in a cache line. While I thought it was reasonable to assume so, I didn't expect the practical impact to be noticeable: beyond saving a few bytes, choosing the right hash table size wasn't going to have any visible effect, and if a derived clause lookup or insert became a bit slower, nobody would even notice. It was easy enough to address David's concern by using the number of derived clauses known at the time of creating the hash table to decide its initial size. The patch was committed.
Within a few months, I faced the same problem again when working on resizing shared buffers without a server restart. The buffer manager maintains a buffer lookup table, in the form of a hash table, to map a page to a buffer. When the number of configured buffers changes upon a server restart, the size of the buffer lookup table also changes. Doing that in a running server would be significant work. To avoid it, we could create a buffer lookup table large enough to accommodate future buffer pool sizes: even if the buffer pool shrinks or expands, the size of the buffer lookup table would not change, and as long as the expansion stays within the buffer lookup table's size limit, it could be done without a restart. The buffer lookup table isn't as large as the buffer pool itself, thus wasting a bit of memory can be consi
[...]In Part 1 of this series, we discussed what active-active databases are and identified some “good” reasons for considering them, primarily centered around extreme high availability and critical write availability during regional outages. Now, let’s turn our attention to the less compelling justifications and the substantial challenges that come with implementing such a setup.
Last week I posted about how we often don't pick the optimal plan. I got asked about difficulties when trying to reproduce my results, so I'll address that first (I forgot to mention a couple of details). I also got questions about how best to spot this issue, and about ways to mitigate it. I'll discuss that too; I don't have any great solutions, but I'll briefly cover a couple of possible planner/executor improvements that might allow handling this better.
PostgreSQL 19 development is now officially under way, so from now on any new features will be committed to that version. Any significant PostgreSQL 18 changes (e.g. reversions or substantial changes to already committed features) will be noted here separately (there were none this week).
PostgreSQL 19 changes this week
The first round of new PostgreSQL 19 features is here:
new object identifier type regdatabase, making it easier to look up a database's OID
COPY FROM now supports multi-line headers
cross-type operator support added to contrib module btree_gin
non-array variants of function width_bucket() now permit operand input to be NaN

We are excited to announce that the schedule for PGDay UK 2025 has been published. We've got an exciting line-up of talks covering a range of topics. There will be something for everyone attending.
Take a look at what we have going on: https://pgday.uk/events/pgdayuk2025/schedule/
We'd like to extend our gratitude to the whole CFP team, who did an amazing job selecting the talks to make up the schedule.
Thank you to all speakers who submitted talks; it's always a shame that we can't accept them all, and as ever it's a tough job to choose the talks for the schedule. Be it your 100th time or 1st time submitting a talk, we hope you submit again in the future and at other PostgreSQL Europe events.
PGDay UK 2025 is taking place in London on September 9th, so don't forget to register for PGDay UK 2025, before it's too late!
The shared presentations are online, as are a couple of recordings and some turtle-loading, have-a-cup-of-tea, locally stored photos.
Using the well-known and widely practised technique of inductive reasoning, we came to the conclusion that this fourth PGConf.be conference was a success, as was the artwork. No animals or elephants were hurt during this event.
The statistics are:
60 attendees
depending on the session, an extra 60 to 150 students attended as well
10 speakers
2 sponsors
This conference wouldn’t have been possible without the help of volunteers.
To conclude, a big thank you to all the speakers, sponsors and attendees.
Without them a conference is just like a tea party.
Having attended PGConf.DE'2025 and discussed the practice of using Postgres on large databases there, I was surprised to regularly hear the opinion that query planning time is a significant issue. As a developer, I was also surprised to learn that this factor can, for example, slow down the decision to move to a partitioned schema, which seems like a logical step once the number of records in a table exceeds 100 million. Well, let's figure it out.
The obvious way out of this situation is to use prepared statements, initially intended for reusing labour-intensive parts such as parse trees and query plans. For more specifics, let's look at a simple table scan with a large number of partitions (see initialisation script):
EXPLAIN (ANALYZE, COSTS OFF, MEMORY, TIMING OFF)
SELECT * FROM test WHERE y = 127;
/*
...
-> Seq Scan on l256 test_256
Filter: (y = 127)
Planning:
Buffers: shared hit=1536
Memory: used=3787kB allocated=4104kB
Planning Time: 61.272 ms
Execution Time: 4.929 ms
*/
In this scenario involving a selection from a table with 256 partitions, my laptop's PostgreSQL took approximately 60 milliseconds for the planning phase and only 5 milliseconds for execution. During the planning process, it allocated 4 MB of RAM and accessed 1,500 data pages. Quite substantial overhead for a production environment! In this case, PostgreSQL has generated a custom plan that is compiled anew each time the query is executed, choosing an execution strategy based on the query parameter values during optimisation. To improve efficiency, let's parameterise this query and store it in the 'Plan Cache' of the backend by executing PREPARE:
PREPARE tst (integer) AS SELECT * FROM test WHERE y = $1;
EXPLAIN (ANALYZE, COSTS OFF, MEMORY, TIMING OFF) EXECUTE tst(127);
/*
...
-> Seq Scan on l256 test_256
Filter: (y = $1)
Planning:
Buffers: shared hit=1536
Memory: used=3772kB allocated=4120kB
Planning Time: 59.525 ms
Execution Time: 5.184 ms
*/
The planning workload remains the same s
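A hedged aside (not part of the excerpt above, and using only standard PostgreSQL settings): the usual way to stop paying that planning cost on every execution is to let the backend cache and reuse a generic plan, which can be checked like this:
SET plan_cache_mode = force_generic_plan;  -- skip the custom-vs-generic plan heuristic
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) EXECUTE tst(127);
-- once the generic plan is cached, subsequent EXECUTEs reuse it and the
-- reported Planning Time drops to a fraction of the ~60 ms custom-plan figure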
[...]Over the last decade, the databases I've worked on where UUID Version 4 was picked as the primary key data type have usually had bad performance and excessive IO.
UUID is a native data type that can be stored as binary data, with various versions outlined in the RFC. Version 4 is mostly random bits, obfuscating information like when the value was created, or where it was generated.
Version 4 UUIDs are easy to work with in Postgres, as the gen_random_uuid() function generates values natively (available since version 13, 2020).
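As a minimal illustration (the table below is invented for the example, not taken from the post), a v4-keyed table typically looks like this:
-- illustrative only: a table keyed by random (v4) UUIDs
create table orders (
  id uuid primary key default gen_random_uuid(),  -- random v4 value per row
  created_at timestamptz not null default now()
);
insert into orders default values returning id;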
I’ve learned there are misconceptions about UUID Version 4, and sometimes the reasons users pick this data type are based on them.
Because of the poor performance, misconceptions, and available alternatives, I’ve come around to a simple position: Avoid UUID Version 4 for primary keys.
My more controversial take is to avoid UUIDs in general, but I understand there are some legitimate scenarios where there aren’t practical alternatives.
As a database enthusiast, I wanted to have an articulated position on this classic “Integer v. UUID” debate.
Among databases folks, debating these alternatives may be tired and clichéd. However, from my consulting work, I can say that I’m working on databases with UUID v4 as the primary key in 2024 and 2025, and seeing the issues discussed in this post.
Let’s dig in.
uuid data type in Postgres
Although unreleased as of this writing, and previously pulled from Postgres 17, UUID V7 is part of Postgres 18, scheduled for release in the Fall of 2025.
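As a quick, hedged sketch (assuming the new functions keep the names uuidv7() and uuidv4() from the current Postgres 18 beta):
select uuidv7();  -- timestamp-prefixed, so consecutive values land near each other in a b-tree
select uuidv4();  -- equivalent to gen_random_uuid()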
What kind of app database
[...]PostgreSQL user groups are a fantastic way to build new connections and engage with the local community. Last week, I had the pleasure of speaking at the Stuttgart meetup, where I gave a talk on “Operating PostgreSQL as a Data Source for Analytics Pipelines.”
Below are my slides and a brief overview of the talk. If you missed the meetup but would be interested in an online repeat, let me know in the comments below!
As modern analytics pipelines evolve beyond simple dashboards into real-time and ML-driven environments, PostgreSQL continues to prove itself as a powerful, flexible, and community-driven database.
In my talk, I explored how PostgreSQL fits into modern data workflows and how to operate it effectively as a source for analytics.
PostgreSQL is widely used for OLTP workloads – but can it serve OLAP needs as well? With physical or logical replication, PostgreSQL can act as a robust data source for analytics, enabling teams to offload read-intensive queries without compromising production.
Physical replication provides an easy-to-operate, read-only copy of your production PostgreSQL database. It lets you use the full power of SQL and relational features for reporting – without the risk of data scientists or analysts impacting production. It offers strong performance, though with some limitations: no materialized views, no temporary tables, and limited schema flexibility. Honestly, there are more ways analysts could harm production even from the replica side.
Logical replication offers a better solution:
However, it also brings complexity – especially around DDL handling, failover, and more awareness from participating teams.
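As a minimal sketch of such a setup (publication, table and connection details below are assumed, not taken from the talk):
-- on the production (publisher) side: publish the tables analysts need
create publication analytics_pub for table orders, customers;
-- on the analytics (subscriber) side, where matching tables already exist
create subscription analytics_sub
  connection 'host=prod-db dbname=app user=replicator'
  publication analytics_pub;
Because the subscriber's tables are ordinary local tables, analysts can add indexes, temporary tables or materialized views there without touching production.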
Data analytics in 2025 is more than jus
[...]Housekeeping announcements:
this website's PostgreSQL installation is now on version 17 (insert champagne emoji here)
the search function now works properly with non-ASCII characters (there was an embarrassing oversight which went unnoticed until someone kindly pointed it out)

PostgreSQL 18 changes this week
This week there have been a couple of renamings:
psql's meta-command \close was renamed \close_prepared
pg_createsubscriber's option --remove was renamed to --clean
Both of these items were added during the PostgreSQL 18 development cycle, so do not have any implications for backwards compatibility.
There have also been some fixes related to:
comments on NOT NULL constraints
virtual/generated columns
See below for other commits of note.
The basic promise of a query optimizer is that it picks the “optimal” query plan. But there’s a catch - the plan selection relies on cost estimates, calculated from selectivity estimates and cost of basic resources (I/O, CPU, …). So the question is, how often do we actually pick the “fastest” plan? And the truth is we actually make mistakes quite often.
Consider the following chart, with durations of a simple SELECT query with a range condition. The condition is varied to match different fractions of the table, shown on the x-axis (fraction of pages with matching rows). The plan is forced to use different scan methods using enable_ options, and the dark points mark runs when the scan method “won” even without using the enable_ parameters.
It shows that for selectivities ~1-5% (the x-axis is logarithmic), the planner picks an index scan, but this happens to be a poor choice. It takes up to ~10 seconds, and a simple “dumb” sequential scan would complete the query in ~2 seconds.
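As a hedged sketch of the kind of comparison being described (table, column and range below are placeholders, not the post's benchmark):
-- illustrative only: force different scan methods and compare actual run time
set enable_seqscan = off;               -- steer the planner toward the index scan
explain (analyze, buffers) select * from t where a between 1000 and 50000;
reset enable_seqscan;
set enable_indexscan = off;
set enable_bitmapscan = off;            -- leave only the sequential scan
explain (analyze, buffers) select * from t where a between 1000 and 50000;
reset all;
-- the dark points in the chart correspond to cases where the method chosen
-- without any enable_ overrides was also the fastest one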
PgPedia Week has been delayed this week due to malaise and other personal circumstances.
PostgreSQL 18 changes this week
pg_dump has gained the ability to dump statistics on foreign tables
various bugfixes for the new amcheck function gin_index_check()

PostgreSQL 18 articles
PostgreSQL 18 just dropped: 10 powerful new features devs need to know (2025-06-20) - Devlink Tips
Preserve optimizer statistics during major upgrades with PostgreSQL v18 (2025-06-17) - Laurenz Albe / CYBERTEC
Number of posts in the past two months
Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.