Our platform team has a regular meeting where we often use ops issues as a springboard to dig into Postgres internals. Great meeting today – we ended up talking about the internal architecture of Postgres replication. Sharing a few high-quality links from our discussion:
Alexander Kukushkin’s conference talk earlier this year, which includes a great explanation of how replication works
Alexander’s interview on PostgresTV with Nik Samokhvalov
Postgres documentation for the pg_stat_replication system view (the most important source of replication monitoring data)
CloudNativePG source code that translates pg_stat_replication data into Prometheus metrics
Chapter about streaming replication in Hironobu Suzuki’s book, Internals of PostgreSQL
Here is a very helpful diagram from Alexander’s slide deck, which we referenced heavily during our discussion.
Can you identify exactly where in this diagram the three lag metrics come from? (write lag, flush lag and replay lag)
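As a reference while you think about it, the three metrics surface as columns of the pg_stat_replication view linked above; a minimal query (column names as documented) looks like this:

select application_name, state,
       sent_lsn, write_lsn, flush_lsn, replay_lsn,
       write_lag, flush_lag, replay_lag
from pg_stat_replication;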
Learning Structured Query Language can be frustrating when you have to keep double-checking that what you wanted done is actually what was done. PostgreSQL 18 has 'OLD and NEW support for RETURNING clauses in INSERT, UPDATE, DELETE, and MERGE commands'. Now you can get instant feedback.
The addition of the RETURNING clause to MERGE in the previous version made that command much easier to use. Now this enhancement makes the other commands easier, too.
To demonstrate, let's create a table with one column that is designated as a unique, primary key integer and insert a value.
create table foo (a int unique primary key);
insert into foo (a) values (1);
Now is the point where many of us would run a SELECT a FROM foo, just to double check that indeed there is a 1 in column a.
There is now an option in PG 18 to have RETURNING provide the previous 'old' value of a column along with the 'new' value.
insert into foo (a) values (2) returning old.a, new.a;
insert into foo(a) values (2)
on conflict(a) do update
set a = 99
returning old.a as old, new.a as update;
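Since the value 2 already exists from the previous insert, the conflict branch runs the UPDATE, so the output should look roughly like this (a sketch of the psql result, using the aliases above):

 old | update
-----+--------
   2 |     99
(1 row)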
If you are reading this hoping that PostgreSQL finally got UPDATE ... LIMIT like MySQL, I have to disappoint you. The LIMIT clause is not yet supported for DML statements in PostgreSQL. If you want to UPDATE only a limited number of rows, you have to use workarounds. This article will describe how to do this and how to avoid the pitfalls and race conditions you may encounter. Note that most of the following also applies to DELETE ... LIMIT!
UPDATE ... LIMIT
The following table contains 1000 categories of 100 entries each. Since we don't discuss persistence, an UNLOGGED table will do:
CREATE UNLOGGED TABLE upd (
   id bigint GENERATED ALWAYS AS IDENTITY,
   category integer NOT NULL,
   flag boolean DEFAULT FALSE NOT NULL
);

INSERT INTO upd (category)
SELECT (i - 1) / 100
FROM generate_series(1, 100000) AS i;

/* set hint bits, gather optimizer statistics */
VACUUM (ANALYZE) upd;

/* created after INSERT for performance */
ALTER TABLE upd ADD PRIMARY KEY (id);
CREATE INDEX ON upd (category);
Now we would like to set the flag on a single row with category 42:
UPDATE upd SET flag = TRUE WHERE category = 42 LIMIT 1;
But alas, we cannot do that. We'll have to look for other ways to achieve our goal.
UPDATE ... LIMIT with a simple subquery
Since there is a LIMIT clause for SELECT, we can use a subquery as follows:
UPDATE upd AS u
SET flag = TRUE
WHERE u.category = 42
  AND u.id IN (SELECT id
               FROM upd AS u2
               WHERE u2.category = u.category
               LIMIT 1);
This query works as intended if nobody else is using the table. But it can fail to work as expected in the face of concurrent data modifications:
BEGIN;

DELETE FROM upd WHERE id = 4201;
Now run our proposed UPDATE statement in a concurrent session, and it will hang. If you then COMMIT the DELETE, our UPDATE statement will update nothing:
UPDATE 0
How did this happen? It is true that the entire query sees the same state
[...]This is a follow‑up to the last article: Run Jepsen against CloudNativePG to see sync replication prevent data loss. In that post, we set up a Jepsen lab to make data loss visible when synchronous replication was disabled — and to show that enabling synchronous replication prevents it under crash‑induced failovers.
Since then, I’ve been trying to make data loss happen more reliably in the “async” configuration so students can observe it on their own hardware and in the cloud. Along the way, I learned that losing data on purpose is trickier than I expected.
To simulate an abrupt primary crash, the lab uses a forced pod deletion, which is effectively a kill -9 for Postgres:
kubectl delete pod -l role=primary --grace-period=0 --force --wait=false
This mirrors the very first sanity check I used to run on Oracle RAC clusters about 15 years ago: “unplug the server.” It isn’t a perfect simulation, but it’s a simple, repeatable crash model that’s easy to reason about.
I should note that the role label is deprecated by CNPG and will be removed. I originally used it for brevity, but I will update the labs and scripts to use the cnpg.io/instanceRole label instead.
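With that label, the forced deletion from above would look something like this (a sketch; the label value primary follows CNPG's convention, so double-check it against your own cluster):

kubectl delete pod -l cnpg.io/instanceRole=primary --grace-period=0 --force --wait=false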
After publishing my original blog post, someone pointed out an important Kubernetes caveat with forced deletions:
Irrespective of whether a force deletion is successful in killing a Pod, it will immediately free up the name from the apiserver. This would let the StatefulSet controller create a replacement Pod with that same identity; this can lead to the duplication of a still-running Pod
https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/
This caveat would apply to the CNPG controller just like a StatefulSet controller. In practice, for my tests, this caveat did not undermine the goal of demonstrating that synchronous replication prevents data loss. The lab includes an automation script (Exercise 3) to run the 5‑minute Jepsen test in a loop for ma
[...]PostgreSQL 18 is such an exceptional piece of software - it is hard to imagine anything better, right? I remember a similar thought when PostgreSQL 7 was released (“wow, how cool is that - this is it”). Well, let me put it mildly: I was wrong, and things are a lot better than they used to be. So, obviously the idea that PostgreSQL 18 is as far as humanity can go is also wrong. But what might be next?
The question is: While a major step forward has been made in PostgreSQL 18, and an important milestone has been achieved … is there anything out there that is even cooler? Something we might see in a couple of years? Well, maybe: I am of course talking about “Direct I/O”.
Under normal circumstances, a read works like this: PostgreSQL asks the kernel for a block, the kernel serves it from its page cache (reading it from disk on a cache miss), and the data is then copied into PostgreSQL's shared buffers.
Sounds good? Well, it is, and in most cases this is exactly what we want. However, there is an additional method: Direct I/O. What it does can be summed up in one sentence: Direct I/O bypasses the OS page cache layer.
But let us take a look at this in more detail: The good part is that it skips the overhead and scalability limitations of that layer. The bad part is that it skips the services that layer provi
[...]PostgreSQL 18 was released on schedule this week!
PostgreSQL 18 articles
Exploring PostgreSQL 18 – Part 1: The COPY Command Gets More User-Friendly (2025-09-28) - Deepak Mahto
Postgres 18: OLD and NEW Rows in the RETURNING Clause (2025-09-26) - Brandur Leach / Crunchy Data
PostgreSQL: "UPDATE … RETURNING" made even better (2025-09-25) - Hans-Jürgen Schönig / CYBERTEC
PostgreSQL 18: part 5 or CommitFest 2025-03 (2025-09-25) - Pavel Luzanov / PostgresPro
Parallel Apply of Large Transactions (2025-09-24) - Amit Kapila
Celebrating the PostgreSQL 18 Release (2025-09-24) - EnterpriseDB
Tuning AIO in PostgreSQL 18 (2025-09-24) - Tomas Vondra
Changes to NOT NULL in Postgres 18 (2025-09-23) - Álvaro Herrera / EnterpriseDB
PostgreSQL 19 changes this week
Development continues apace, but no user-visible features were committed this week. See the Commits of Interest list below for more details on changes.
In PostgreSQL 18, the statistics & monitoring subsystem receives a
significant overhaul - extended cumulative statistics, new per-backend I/O
visibility, the ability for extensions to export / import / adjust statistics,
and improvements to GUC controls and snapshot / caching behavior. These changes
open new doors for performance analysis, cross‑environment simulation, and
tighter integration with extensions. In this article I explore what’s new, what
to watch out for, Grand Unified Configuration (GUC) knobs, and how extension
authors can leverage the new C API surface.
Statistics (in the broad sense: monitoring counters, I/O metrics, and planner /
optimizer estimates) lie at the heart of both performance tuning and internal
decision making in PostgreSQL. Transparent, reliable, and manipulable
statistics, among other things, allow DBAs to address the efficiency of
PostgreSQL directly, as well as enable “extensions” to improve the user
experience.
That said, the historic statistics system of PostgreSQL has not been without
points of friction. These include a limited ability to clear relation
statistics, metrics with units that don't always align with user goals, and no C
API for using the PostgreSQL Cumulative Stats engine. PostgreSQL 18 addresses
these concerns head on.
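As a quick taste of this kind of visibility, the cluster-wide pg_stat_io view (available since PostgreSQL 16, and the model for the per-backend I/O statistics added in 18) can be queried directly; a minimal sketch:

-- Which backend types are doing the most physical I/O?
SELECT backend_type, object, context, reads, writes, extends
FROM pg_stat_io
WHERE reads > 0 OR writes > 0
ORDER BY reads DESC;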
Below is a summary of the key enhancements.
While statistics offer incredible value, their collection can take up
significant time and resources. PostgreSQL 18 introduces an important
consideration: with the expanded range of collectible metrics, the maximum size
of the statistics hash table has been increased. Do keep in mind, especially if
you're designing large-scale systems with table-per-customer architectures, that
the 1GB ceiling has been shown to be hit with a few million tables.
Here are the major new or improved features relating to statistics and
monitoring. Each item links to the relevant
The PostgreSQL 18 Release Notes are like a great noir novel, full of surprises and intrigue. But we know who did it - the amazing PostgreSQL community.
If you have not perused the Notes, I advise you to do so now. They contain many of what I call 'Wow!' level items, such as the redesigned I/O subsystem, skip scan lookups, virtual generated columns, and more. The new uuidv7() function and temporal constraints are a significant leap forward. The OLD and NEW support for RETURNING will help lessen the learning curve for many who are new to SQL.
But go down to the seemingly more mundane items. Removal of redundant self-joins, merge joins that can now use incremental sorts, and internal reordering of SELECT DISTINCT keys to avoid sorting are the types of improvements that are not as attention-getting as the items in the preceding paragraph, but they will make life much nicer for us.
Each release of PostgreSQL is faster than the previous one, and early reports indicate that this has happened again. This is an astounding track record for those of us with backgrounds in other databases. All too often, new features came with a trade-off in performance that relegated upgrades to the back burner.
Please read the Acknowledgments. This is a list of the good folks who have worked very hard on our behalf to deliver an improved version of PostgreSQL. And they are already working on 19! If you run into any of them, please give them a big 'thank you'.
PostgreSQL 18 was released on September 25, 2025, introducing a host of exciting new features across all major areas. In this blog series, we’ll delve into these features and explore how they can benefit database developers and migration engineers transitioning to PostgreSQL.
Part 1 of the series focuses on enhancements to the COPY
command, particularly for use cases involving loading external files into the database.
The COPY command is the default native option for loading flat files, primarily in text or CSV format, into PostgreSQL. If you are a data engineer or database developer looking for ways to load flat files into PostgreSQL, you must take care of quality checks and build in a mechanism to report or discard failed records.
Until PostgreSQL 16, if quality checks were to be done on the database side, one option was to load flat files into staging tables with all columns as text or a generic data type, and later move the legitimate and discarded data to the target tables. From PostgreSQL 17 onward, the COPY command provides options for handling data type incompatibilities and logging failed rows using the on_error option.
In PostgreSQL 18, this capability is further enhanced with the REJECT_LIMIT option. If the number of discarded or erroneous rows exceeds the reject limit, the COPY command fails as a whole.
Let’s start by creating sample target tables and a CSV file with intentional data type mismatch issues. The sample table also has check and NOT NULL constraints defined on it.
CREATE TABLE copy18(
col1 integer,
col2 text DEFAULT 'x'::text,
col3 text NOT NULL,
col4 text,
col5 boolean,
col6 date,
col7 timestamp without time zone,
CONSTRAINT copy18_col4_check CHECK (length(col4) > 2)
);
The CSV file contains data type incompatibilities in the following lines:
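For illustration, here is a hedged sketch of how such a file might be loaded with the options discussed above; the file path and the limit value are hypothetical, and ON_ERROR only covers data type conversion failures (constraint violations still abort the command):

COPY copy18 FROM '/tmp/copy18.csv'
WITH (FORMAT csv, ON_ERROR ignore, REJECT_LIMIT 5);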
It has been a while since last time I blogged about any real-database-life problems and investigation. Here is one which I am really happy about, because it took a while and it was not obvious, but I found the root cause of a problem!
The problem has been going on for months: the WAL file system of one of our Postgres databases was growing at an alarming speed, many times faster than the database itself. It's not like we can't promptly remove the WALs, but this situation made any database restore a real nightmare. At first, I suspected long transactions with suboptimal backfill algorithms. There were, indeed, a lot of jobs running on a regular basis which could be improved; however, I noticed a couple of things.
First, when the users fixed one of the prime-suspect jobs, the situation with WAL growth didn't change. Second, the rate of growth couldn't be explained by these suboptimal jobs: the data volumes they were removing and reloading were still orders of magnitude smaller than the WAL size we were dealing with. Finally, I decided to do what I should have done from the start – take a look at what exactly was in these super-fast-growing WALs.
Here is what I did:
create extension pg_walinspect;

select pg_current_wal_insert_lsn();
/* save the result: '17797/A167C8E8' */

/* ... some time later ... */
select pg_current_wal_insert_lsn();
/* save the result: '17797/FEDE0CC8' */

select * from pg_get_wal_stats('17797/A167C8E8', '17797/FEDE0CC8', true)
where count > 0;
To my amazement, I realized that 97% of the WAL stored information about locks! To be honest, I didn’t even know that locks are recorded in the WAL files, so I am thankful to the gracious Postgres community for enlightening me!
Now that I knew where to look, I ran the following:
select * from pg_get_wal_records_info('17797/A167C8E8', '17797/FEDE0CC8')
where resource_manager='Heap'
and record_type = 'LOCK';
Zooming in:
Next, I found the table which had all these locks:
select * from pg_class where oid = 10229951;
When I fo
[...]
PostgreSQL 18 is officially released, packed with improvements for performance, authentication, operations, and security. In this article, I’ll show you how to run it on Kubernetes from day one with CloudNativePG. I will summarise key new features like asynchronous I/O and OAuth 2.0, as well as the extension_control_path
parameter. This is a simple but critical feature for operational control in cloud-native environments, and one our team at CloudNativePG and EDB was proud to help contribute to PostgreSQL. You’ll see how this reflects our close collaboration with the upstream community, learn about our new half-sized minimal
container image, and be able to follow my guide to deploy your first cluster today.
In our CNPG series, we have mentioned that backups are crucial in every database system in case of any disaster. PostgreSQL has powerful recovery capabilities as well as backup capabilities, including the ability to restore a database cluster to a specific moment in time. This is extremely useful in scenarios where we need to recover from user errors—such as accidental data deletion, wrong updates, dropped tables, or even dropped databases.
We will now walk through how to perform Point-in-Time Recovery (PITR) with CloudNativePG (CNPG) using a Barman Cloud plugin for backups.
First, we need to simulate a typical disaster. In our CNPG cluster instance:
postgres=# create database pitr_test;
CREATE DATABASE
postgres=# \c pitr_test
You are now connected to database "pitr_test" as user "postgres".
Create a sample table and load some data:
CREATE TABLE test_data (
    id SERIAL PRIMARY KEY,
    name TEXT,
    age INT,
    created_at TIMESTAMP DEFAULT NOW()
);

INSERT INTO test_data (name, age, created_at)
SELECT 'User_' || gs,
       (random() * 60 + 18)::INT,
       NOW() - (random() * interval '365 days')
FROM generate_series(1, 10000) gs;
Update some rows to check later:
UPDATE test_data
SET age = 1
WHERE id <= 15;
Note the current Write-Ahead Log (WAL) position and timestamp to recover exactly to this point with PITR:
SELECT pg_current_wal_lsn();
 pg_current_wal_lsn
--------------------
 0/85BDBB0

SELECT now();
              now
------------------------------
 2025-09-19 13:39:18.89106+00
Now, simulate a disaster:
DROP TABLE test_data;
At this point, the table is dropped, and the only option we have to recover it to the timestamp we want is PITR.
We already have a backup and WAL archive system in place (as covered in the CNPG Backup with Barman Cloud blog). For point-in-time recovery, WAL files are essential, since they contain all transactions after the last backup.
We can confir
[...]Daria Aleshkova, Oleksii Kliukin, Sergey Dudoladov and Andreas Scherbaum organized the PostgreSQL Berlin September Meetup. Divyendu Singh and Marko Coha presented talks.
The Prague PostgreSQL Meetup met for a September edition on the 22nd. Pavel Lukasenko talked about “From SQL to Energy Charts: Scalable Energy Metering with PostgreSQL, Kubernetes, and Longhorn”, and Josef Machytka presented “PostgreSQL Connections Memory Usage on Linux: How Much, Why, and When?” Gülçin Yıldırım Jelínek, Mayuresh B, Pavel Hák, Pavel Stěhule, and Tomas Vondra are organizers for the group.
PGDay Napoli took place on September 25; we recognized the organizers and talk selection committee in a previous post.
Speakers:
Lightning talk speakers:
Postgres 18 was released today. Well down the page from headline features like async I/O and UUIDv7 support, we get this nice little improvement:
This release adds the capability to access both the previous (OLD) and current (NEW) values in the RETURNING clause for INSERT, UPDATE, DELETE and MERGE commands.
It's not a showstopper the way async I/O is, but it is one of those small features that's invaluable in the right situation.
A simple demonstration with UPDATE
to get all old and new values:
UPDATE fruit
SET quantity = 300
WHERE item = 'Apples'
RETURNING OLD.*, NEW.*;
id | item | quantity | id | item | quantity
----+--------+----------+----+--------+----------
5 | Apples | 200 | 5 | Apples | 300
(1 row)
OLD on upsert
Say we're doing an upsert and want to know whether a row sent back by RETURNING was newly inserted or an existing row that was updated. This was possible before, but relied on an unintuitive check on xmax = 0 (see the very last line below):
INSERT INTO webhook (
id,
data
) VALUES (
@id,
@data
)
ON CONFLICT (id)
DO UPDATE SET id = webhook.id -- force upsert to return a row
RETURNING webhook.*,
(xmax = 0) AS is_new;
The statement relies on xmax
being set to zero for a fresh insert as an artifact of Postgres' locking implementation (see a full explanation for why this happens). It works, but isn't a guaranteed part of the API, and could conceivably change at any time.
In Postgres 18, we can reimplement the above so it's more legible and doesn't rely on implementation details. It's easy too -- just check whether OLD
is null in the returning clause:
INSERT INTO webhook (
id,
data
) VALUES (
@id,
@data
)
ON CONFLICT (id)
DO UPDATE SET id = webhook.id -- force upsert to return a row
RETURNING webhook.*,
(OLD IS NULL)::boolean AS is_new;
Access to OLD and NEW will undoubtedly have many other useful cases, but this is one example that lets us imp
I recently attended PGDay Lowlands 2025, an awesome PostgreSQL Community Event which took place on September 12th in the wondrous environment of the Rotterdam Zoo. It was a fantastic and focused day of PostgreSQL content, made all the more memorable by being in the company of amazing fish, sea turtles, and penguins!
I was honoured to kick off the conference, presenting a talk that I co-wrote with Hettie Dombrovskaya. We've been collecting material and ideas for this for over a year, and it was exciting to finally share it.
Our presentation, "How Do We Get Postgres to the Next Level?", was a call to action to think strategically about the future of the project. We argued that to continue its incredible growth and solidify its leadership position, we need to focus on three core pillars: improving code quality, strategically adding new features, and kicking into overdrive our advocacy and community growth. We touched on the need for things like a dedicated performance team, a more modular development process, standardization on what PostgreSQL compatibility means, and improving the developer experience to attract new contributors.
Packing over a year's worth of research into a 20-minute talk was a challenge, but our main goal was to spark a conversation. Based on the reaction, I think we succeeded! The ideas were received positively, and as we'd hoped, they also challenged some existing perspectives.
What was most rewarding were the hours of "hallway track" discussions that followed. It was fantastic to hear that so many of our points resonated with attendees. A recurring theme in these conversations was a shared sense that for Postgres to continue scaling, we have to brin
[...]In this post we’ll cover two types of Postgres internals.
The first internal item is an “SLRU.” The acronym stands for “simple least recently used.” The LRU portion refers to caches and how they work, and SLRUs in Postgres are a collection of these caches.
SLRUs are small in-memory item stores. Since they need to persist across restarts, they’re also saved into files on disk. Alvaro calls SLRUs “poorly named” for a user-facing feature. If they’re internal, why are they worth knowing about as Postgres users?
They’re worth knowing about because there can be a couple of possible failure points with them, due to their fixed size. We’ll look at those later in this post.
Before getting into that, let’s cover some basics about what they are and look at a specific type.
The main purpose of SLRUs is to track metadata about Postgres transactions.
SLRUs are a general mechanism used by multiple types. Like a lot of things in Postgres, the SLRU system is extensible which means extensions can create new types.
The “least recently used” aspect might be recognizable from cache systems. LRU refers to how the oldest items are evicted from the cache when it’s full, and newer items take their place. This is because the cache has a fixed amount of space (measured in 8KB pages) and thus can only store a fixed amount of items.
Old SLRU cache items are periodically cleaned up by the Vacuum process.
The buffer cache (sized by configuring shared_buffers) is another form of cache in Postgres. Thomas Munro proposed unifying the SLRUs and buffer cache mechanisms.
However, as of Postgres 17 and the new 18 release (September 25, 2025), SLRUs are still their own distinct type of cache.
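Their activity can also be observed from SQL; the pg_stat_slru view (available since PostgreSQL 13) reports per-cache counters, for example:

-- How often is each SLRU cache hit versus read from disk?
select name, blks_hit, blks_read, blks_written, truncates
from pg_stat_slru
order by blks_read desc;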
What types of data are stored in SLRUs?
Transactions are a core concept for relational databases like Postgres. Transactions are abbreviated “Xact,” and Xacts are one of the types of data stored in SLRU
[...]PostgreSQL 18 offers a couple of new features such as asynchronous I/O (aio), improvements around indexing, and a lot more. However, there is one feature that has really caught my attention. While it might go unnoticed by many users, it is really powerful and, to me, somehow represents everything I like about PostgreSQL.
The improvement I want to focus on today is related to the way the "RETURNING-clause" works in PostgreSQL 18.
Before we dive into the new functionality, we have to create a simple test table containing a single row of data:
test=# CREATE TABLE t_test AS SELECT 1 AS id, 'Hans' AS name;
SELECT 1
The way RETURNING works is that it allows us to see what a DML statement has changed.
Here is an example:
test=# UPDATE t_test SET id = 2 WHERE id = 1 RETURNING *;
 id | name
----+------
  2 | Hans
(1 row)

UPDATE 1
The statement finds our row and modifies the ID by changing it from 1 to 2. The RETURNING-clause at the end can be used to fetch whatever columns we want from the new rows that have been written to the table. By placing a star, we will get the entire tuple, but we can of course simply list single columns or even expressions (depending on our needs).
This feature has existed for many years now and has been adopted by a fair number of applications.
What is new in PostgreSQL 18 is support for the NEW and OLD keywords, which some users might already know from PostgreSQL trigger functions. What is the purpose of those two keywords? Well, NEW represents the new row as it is sent to the table by our UPDATE statement, and OLD represents the row as it was before the modification.
The following listing shows how the new syntax can be applied:
test=# UPDATE t_test SET id = 3 WHERE id = 2 RETURNING NEW.*;
 id | name
----+------
  3 | Hans
(1 row)

UPDATE 1
Essentially, the example does the same as the previous one - we modify one
[...]September 25th marks the release of PostgreSQL 18. This article covers the March CommitFest and concludes the series covering the new features of the upcoming update. This article turned out quite large, as the last March CommitFest is traditionally the biggest and richest in new features.
You can find previous reviews of PostgreSQL 18 CommitFests here: 2024-07, 2024-09, 2024-11, 2025-01.
PostgreSQL 18 was stamped earlier this week, and as usual there are a lot of improvements. One of the big architectural changes is asynchronous I/O (AIO), allowing asynchronous scheduling of I/O, giving the database more control and making better use of the storage.
I’m not going to explain how AIO works, or present detailed benchmark results. There have been multiple really good blog posts about that. There’s also a great talk from pgconf.dev 2025 about AIO, and a recent “Talking Postgres” podcast episode with Andres, discussing various aspects of the whole project. I highly suggest reading / watching those.
I want to share a couple of suggestions on how to tune AIO in Postgres 18, and explain some inherent (but not immediately obvious) trade-offs and limitations.
You can control the degree of parallelism using the max_parallel_apply_workers_per_subscription
parameter. By default, this is set to 2, allowing two large transactions to be applied in parallel per subscription.
Parallel apply significantly improves the efficiency of large transaction handling by not waiting for the COMMIT message from the publisher before applying changes. Previously, the subscriber would write all streamed data to temporary files and apply them only after receiving the commit record.
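As a minimal sketch of putting this together (the subscription and publication names here are hypothetical), parallel apply of large transactions requires the subscription to use streaming = parallel, with the worker limit set on the subscriber:

-- on the subscriber: allow up to 4 parallel apply workers per subscription
ALTER SYSTEM SET max_parallel_apply_workers_per_subscription = 4;
SELECT pg_reload_conf();

-- stream large in-progress transactions and apply them in parallel
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=appdb'
    PUBLICATION mypub
    WITH (streaming = parallel);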
After a very long development period, we finally completed the project to rework NOT NULL constraints in PostgreSQL. This has long been a desire of the Postgres development community, and we finally pulled it off for version 18, which has made me very happy.
The most visible user change is that NOT NULL
constraints now have names. These names are visible in psql under the \d+
command. For instance, looking at the aircraft
table in the postgres_air
database, we can see something like this:
=# \d+ aircraft
Table "postgres_air.aircraft"
Column
I had a chance last week to sit down and read the original academic paper announcing Postgres as a platform and the original design goals from 1986. I was just awestruck at the forethought - and how the original project goals laid the foundation for the database that seems to be taking over the world right now.
The PostgreSQL creators totally nailed it. They laid out a flexible framework for a variety of business use cases that would eventually become the most popular database 30 years later.
The paper outlines 6 project goals:
better support for complex objects for the growing world of business and engineering use cases
provide user extendibility for data types, operators and access methods
provide facilities for active databases like alerters and triggers
simplify process for crash recovery
take advantage of upgraded hardware
utilize Codd’s relational model
Let's look at all of them in reference to modern features of Postgres.
Postgres has a rich and flexible set of native data types that are designed to meet a vast array of business use cases, from simple record-keeping to complex data analysis.
Numeric Types like SMALLINT and INTEGER are used for whole numbers, while BIGINT might be used for a user's unique ID or primary keys. Arbitrary-precision types like NUMERIC and DECIMAL are used where exact precision is critical, especially for money in Postgres. Floating-Point Types like REAL or DOUBLE PRECISION can be used for scientific or engineering calculations where absolute precision isn't as important as the range of values. You also have UUID (with the index-friendly uuidv7() generation in Postgres 18) for distributed systems and secure URLs.
Character Types like VARCHAR(n) store variable-length text up to a specified maximum length (n) and only use as much storage as needed for the actual text, while CHAR(n) is blank-padded to its declared length.
Date/Time
[...]Summer is slowly fading, and that means one thing: the PostgreSQL conference season in Europe is officially back. After PGDay Austria on September 4 and PGDay UK on September 9, the next stop was PGDay Lowlands in Rotterdam on September 12. Three conferences in such a short time is a clear sign that the community is back in full swing after the summer break. And honestly—it felt great to be part of it again.
My conference adventure began a little earlier than the official start. I first traveled from Austria to Belgium to visit our partner Zebanza together with Jan Karremans. We met at their new office at Piros (in Everberg) for a strategic alignment session. It’s always rewarding to step out of the online bubble and sit together in person—discussing strategy, exchanging ideas, and building stronger ties.
From there, our journey continued to Amsterdam for a customer meeting, followed by another strategy session with our partner Nibble IT.
The plan sounded simple: hop on one train, work a bit along the way, and arrive fresh and prepared. But the Dutch railway system had other ideas. The direct connection was canceled, and we ended up taking four trains instead of one.
What could have been a stressful detour actually turned into a positive surprise. It gave me more time to enjoy the ride, watch the changing landscapes of Belgium and the Netherlands, and reflect on the upcoming days. Sometimes, detours are exactly what you need.
The official start of PGDay Lowlands happened the evening before the main event, with the warmup meetup. It combined short talks, pizza, and drinks—a casual setting that perfectly set the stage for the main conference.
For me, this was the first moment to reconnect with familiar faces from the PostgreSQL community, meet new ones, and exchange ideas in a relaxed atmosphere.
Jan Karremans also gave a Lightning Talk during the meetup, introducing the Open Alliance for PostgreSQL Education. The anno
Number of posts in the past two months
Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.