Latest Blog Posts

Worklog July 3-11, 2025
Posted by Mankirat Singh on 2025-07-12 at 00:00
Progress on ABI Compliance Reporting with BuildFarm

Efficiency of a sparse hash table
Posted by Ashutosh Bapat on 2025-07-11 at 15:01

While I was implementing an optimization for derived clause lookup, Amit Langote, David Rowley and I debated the initial size of the hash table which would hold the clauses. See some discussions around this email on pgsql-hackers.

The hash_create() API in PostgreSQL takes an initial size as an argument and allocates memory for that many hash entries upfront. If more entries are added, the memory is expanded later. The point of contention was what the initial size of the hash table containing the derived clauses, introduced by that patch, should be. During the discussion, David hypothesised that the size of the hash table affects the efficiency of hash table operations, depending on whether the table fits in the CPU cache. While I thought that was reasonable in principle, I expected the practical impact to be unnoticeable: beyond saving a few bytes, choosing the right hash table size wasn't going to have any visible effect, and if a derived clause lookup or insert became slightly slower, nobody would notice. It was easy enough to address David's concern by using the number of derived clauses at the time the hash table is created to decide its initial size. The patch was committed.

Within a few months, I faced the same problem again while working on resizing shared buffers without a server restart. The buffer manager maintains a buffer lookup table, in the form of a hash table, to map a page to a buffer. When the number of configured buffers changes upon a server restart, the size of the buffer lookup table changes too. Doing that in a running server would be significant work. To avoid it, we could create a buffer lookup table large enough to accommodate future buffer pool sizes: even if the buffer pool shrinks or expands, the size of the buffer lookup table would not change, and as long as the expansion stays within the lookup table's size limit it could be done without a restart. The buffer lookup table isn't as large as the buffer pool itself, thus wasting a bit of memory can be consi

[...]

Replication Types and Modes in PostgreSQL
Posted by semab tariq in Stormatics on 2025-07-10 at 13:09

Data is a key part of any mission-critical application. Losing it can lead to serious issues, such as financial loss or harm to a business’s reputation. A common way to protect against data loss is by taking regular backups, either manually or automatically. However, as data grows, backups can become large and take longer to complete. If these backups are not managed properly or tested often, a sudden system crash could result in permanent data loss.

To address this, many organizations use real-time data replication as a more reliable solution. In this setup, data from the primary server is continuously copied to one or more standby servers. If the primary server fails, there's no need to restore from a backup: a standby server can be quickly promoted to take over as the new primary. This allows the application to resume with minimal downtime, reducing the Recovery Time Objective (RTO), and can bring the Recovery Point Objective (RPO) close to zero, or even to zero.

PostgreSQL also supports replication to keep standby servers in sync with the primary server using Write-Ahead Log (WAL) files. Every change made to the database is first recorded in these WAL files on the primary server. These logs are then continuously streamed to the standby server, which applies them to stay up to date. This method ensures that all standby servers stay in sync with the primary and are ready to be promoted in case the primary server fails.
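
If you want to see this mechanism at work, the standard statistics views expose it; a minimal sketch (the views and functions below are stock PostgreSQL):

-- On the primary: one row per connected standby, with the WAL positions it
-- has received and replayed.
SELECT application_name, state, sent_lsn, replay_lsn,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- On a standby: confirm it is replaying WAL and check the last replayed position.
SELECT pg_is_in_recovery(), pg_last_wal_replay_lsn();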

In this blog, we will explore the different types and modes of replication available in PostgreSQL to help you understand which option best fits your business needs.

Types of Replication

PostgreSQL offers different types of replication, with each method designed to serve specific use cases.

  • Physical Replication
  • Logical Replication
  • Cascading Replication

Physical Replication

Physical replication is the foundation for setting up high availability in active-passive clusters. In this replication

[...]

Active-active replication - the hidden costs and complexities
Posted by Jan Wieremjewicz in Percona on 2025-07-10 at 00:00

In Part 1 of this series, we discussed what active-active databases are and identified some “good” reasons for considering them, primarily centered around extreme high availability and critical write availability during regional outages. Now, let’s turn our attention to the less compelling justifications and the substantial challenges that come with implementing such a setup.

What are “bad” reasons?

  1. Scaling write throughput
 Trying to scale your write capacity by deploying active-active across regions may sound like a clean horizontal solution, but it is rarely that simple. Write coordination, conflict resolution, and replication overhead introduce latency that defeats the purpose.
If you are thinking about low latency writes between regions like Australia and the US, keep in mind that you will still be paying the round-trip cost, typically 150-200ms+, to maintain consistency. Physics doesn't do you any favors.
Even if you have multiple primaries, unless you accept weaker consistency or potential conflicts, your writes will not scale linearly. In many real-world cases, throughput actually suffers compared to a well-tuned primary-replica setup.
If your real goal is better throughput, you are usually better served by:
    • Regional read replicas
    • Sharding
    • Task queuing and eventual delegation
  2. Performance
 Performance is often used as a vague justification. But active-active is not a panacea for general slowness. If the issue is database bottlenecks, the causes may be as basic as too little work spent on data structuring, indexing, query tuning, or scaling reads before jumping into multi-primary deployments.
Way too often we find that performance problems come not from scale limits, but from poor design and neglected maintenance. We are in 2025 and lessons like this classic one on fast inserts are still painfully relevant.
If what you are really facing is application level latency, that is a different challenge. And even then, active-active with strong guara
[...]

So why don't we pick the optimal query plan?
Posted by Tomas Vondra on 2025-07-08 at 10:00

Last week I posted about how we often don’t pick the optimal plan. I got asked about difficulties when trying to reproduce my results, so I’ll address that first (I forgot to mention a couple of details). I also got questions about how to best spot this issue, and ways to mitigate it. I’ll discuss that too; I don’t have any great solutions, but I’ll briefly cover a couple of possible planner/executor improvements that might allow handling this better.

Josef Machytka
Posted by Andreas 'ads' Scherbaum on 2025-07-07 at 14:00
PostgreSQL Person of the Week Interview with Josef Machytka: I was born in Czechia and lived most of my life there. However, for the last 12 years, my family and I have been calling Berlin, Germany, our home. It is an open-minded place that gives people a lot of freedom; we like it here very much.

PgPedia Week, 2025-07-06
Posted by Ian Barwick on 2025-07-07 at 11:38

PostgreSQL 19 development is now officially under way, so from now on any new features will be committed to that version. Any significant PostgreSQL 18 changes (e.g. reversions or substantial changes to already committed features) will be noted here separately (there were none this week).

PostgreSQL 19 changes this week

The first round of new PostgreSQL 19 features is here:

  • new object identifier type regdatabase, making it easier to look up a database's OID
  • COPY FROM now supports multi-line headers
  • cross-type operator support added to contrib module btree_gin
  • non-array variants of function width_bucket() now permit operand input to be NaN
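
If you want to try the new regdatabase type, the usage should mirror the existing reg* types such as regclass; a quick, hedged sketch:

-- Cast a database name to regdatabase to resolve its OID directly,
-- instead of querying pg_database by hand.
SELECT 'postgres'::regdatabase::oid;
-- roughly equivalent to:
SELECT oid FROM pg_database WHERE datname = 'postgres';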

more...

PGDay UK 2025 - Schedule Published
Posted by Chris Ellis in PGDay UK on 2025-07-04 at 13:27

We are excited to announce the schedule for PGDay UK 2025 has been published. We've got an exciting line-up of talks over a range of topics. There will be something for everyone attending.

Take a look at what we have going on: https://pgday.uk/events/pgdayuk2025/schedule/

We'd like to extend our gratitude to the whole CFP team, who did an amazing job selecting the talks to make up the schedule.

Thank you to all speakers who submitted talks; it's always a shame that we can't accept them all, and as ever it's a tough choice to pick the talks for the schedule. Be it your 100th or 1st time submitting a talk, we hope you submit again in the future and at other PostgreSQL Europe events.

PGDay UK 2025 is taking place in London on September 9th, so don't forget to register for PGDay UK 2025, before it's too late!

https://2025.pgday.uk/

PGConf.be 2025
Posted by Wim Bertels on 2025-07-04 at 12:07

A round up of the fifth PGConf.be

The shared presentations are online, as are a couple of recordings and turtle-loading have-a-cup-of-tea locally stored photos.

Using the well-known and widely practised technique of inductive reasoning, we came to the conclusion that this fifth PGConf.be conference was a success, as was the artwork. No animals or elephants were hurt during this event.

The statistics are

  • 60 attendees

    • depending on the session, an extra 60 to 150 students attended as well

  • 10 speakers

  • 2 sponsors

This conference wouldn’t have been possible without the help of volunteers.
To conclude, a big thank you to all the speakers, sponsors and attendees.
Without them a conference is just like a tea party.

pgroll 0.14 - New commands and more control over version schemas
Posted by Andrew Farries in Xata on 2025-07-04 at 11:45
pgroll 0.14 is released with several new subcommands and better control over how version schemas are created.

Disaster Recovery Guide with pgbackrest
Posted by warda bibi in Stormatics on 2025-07-03 at 14:18

Recently, we worked with a client who was manually backing up their 800GB PostgreSQL database using pg_dump, which was growing rapidly and had backups stored on the same server as the database itself. This setup had several critical issues:

  • Single point of failure: If the server failed, both the database and its backups would be lost.
  • No point-in-time recovery: Accidental data deletion couldn’t be undone.
  • Performance bottlenecks: Backups consumed local storage, impacting database performance.

To address these risks, we replaced their setup with pgBackRest, shifting backups to a dedicated backup server with automated retention policies and support for point-in-time recovery (PITR).

This guide will walk you through installing, configuring, and testing pgBackRest in a real-world scenario where backups will be configured on a dedicated backup server, separate from the data node itself.

pgBackRest Overview

pgBackRest is a robust, open-source backup and restore solution for PostgreSQL, developed by Crunchy Data. It is designed to meet the demands of large-scale PostgreSQL environments, with a focus on reliability, efficiency, and automation.

Key features include:

  • Support for full, differential, and incremental backups to optimize storage usage and reduce backup time.
  • Point-in-time recovery (PITR) for precise restoration in case of data loss or corruption.
  • Parallel processing for faster backup and restore operations, particularly in large databases.
  • Integrated compression and encryption to ensure secure and space-efficient backup storage.
  • Remote backup capabilities to protect against single points of failure by decoupling backups from the primary database server.

pgBackRest Backup Types

Full Backups

A full backup captures the entire PostgreSQL database cluster and includes all necessary files to re

[...]

On Postgres Plan Cache Mode Management
Posted by Andrei Lepikhov in Postgres Professional on 2025-07-03 at 08:29

Having attended PGConf.DE'2025 and discussed the practice of using Postgres on large databases there, I was surprised to regularly hear the opinion that query planning time is a significant issue. As a developer, it surprised me that this factor can, for example, slow down the decision to move to a partitioned schema, which seems like a logical step once the number of records in a table exceeds 100 million. Well, let's figure it out.

The obvious way out of this situation is to use prepared statements, initially intended for reusing labour-intensive parts such as parse trees and query plans. For more specifics, let's look at a simple table scan with a large number of partitions (see initialisation script):

EXPLAIN (ANALYZE, COSTS OFF, MEMORY, TIMING OFF)
SELECT * FROM test WHERE y = 127;

/*
...
   ->  Seq Scan on l256 test_256
         Filter: (y = 127)
 Planning:
   Buffers: shared hit=1536
   Memory: used=3787kB  allocated=4104kB
 Planning Time: 61.272 ms
 Execution Time: 4.929 ms
*/

In this scenario involving a selection from a table with 256 partitions, my laptop's PostgreSQL took approximately 60 milliseconds for the planning phase and only 5 milliseconds for execution. During the planning process, it allocated 4 MB of RAM and accessed 1,500 data pages. Quite substantial overhead for a production environment! In this case, PostgreSQL has generated a custom plan that is compiled anew each time the query is executed, choosing an execution strategy based on the query parameter values during optimisation. To improve efficiency, let's parameterise this query and store it in the 'Plan Cache' of the backend by executing PREPARE:

PREPARE tst (integer) AS SELECT * FROM test WHERE y = $1;
EXPLAIN (ANALYZE, COSTS OFF, MEMORY, TIMING OFF) EXECUTE tst(127);

/*
...
   ->  Seq Scan on l256 test_256
         Filter: (y = $1)
 Planning:
   Buffers: shared hit=1536
   Memory: used=3772kB  allocated=4120kB
 Planning Time: 59.525 ms
 Execution Time: 5.184 ms
*/

The planning workload remains the same s

[...]
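For completeness, the knob the post's title refers to is the plan_cache_mode GUC; a minimal sketch of forcing a generic plan for the prepared statement above (behaviour will depend on your data and PostgreSQL version):

SET plan_cache_mode = force_generic_plan;  -- alternatives: auto (default), force_custom_plan
EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF) EXECUTE tst(127);
RESET plan_cache_mode;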

Avoid UUID Version 4 Primary Keys
Posted by Andrew Atkinson on 2025-07-02 at 00:00

Introduction

Over the last decade, the databases I've worked on where UUID Version 4 was picked as the primary key data type have usually had bad performance and excessive IO.

UUID is a native data type that can be stored as binary data, with various versions outlined in the RFC. Version 4 is mostly random bits, obfuscating information like when the value was created, or where it was generated.

Version 4 UUIDs are easy to work with in Postgres, as the gen_random_uuid() function has generated values natively since version 13 (2020).
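
For reference, a minimal table using the native type and function mentioned above (the table and column names are just for illustration):

CREATE TABLE orders (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),  -- random (Version 4) values
    note text
);

INSERT INTO orders (note) VALUES ('example row') RETURNING id;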

I’ve learned there are misconceptions about UUID Version 4, and sometimes the reason users pick this data type is based on them.

Because of the poor performance, misconceptions, and available alternatives, I’ve come around to a simple position: Avoid UUID Version 4 for primary keys.

My more controversial take is to avoid UUIDs in general, but I understand there are some legitimate scenarios where there aren’t practical alternatives.

As a database enthusiast, I wanted to have an articulated position on this classic “Integer v. UUID” debate.

Among databases folks, debating these alternatives may be tired and clichéd. However, from my consulting work, I can say that I’m working on databases with UUID v4 as the primary key in 2024 and 2025, and seeing the issues discussed in this post.

Let’s dig in.

The scope of UUIDs in this post

  • UUIDs (or GUIDs in Microsoft speak) are 36-character strings made up of 32 digits and 4 hyphens, stored as 128-bit (16-byte) values using the binary uuid data type in Postgres
  • The RFC documents how the 128 bits are set
  • The bits for UUID Version 4 are mostly random values
  • UUID Version 7 includes a timestamp in the first 48 bits, which works much better with database indexes compared with random values

Although unreleased as of this writing, and previously pulled from Postgres 17, UUID V7 is part of Postgres 18, scheduled for release in the Fall of 2025.
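
Assuming the new function ships in Postgres 18 as described (the name uuidv7() is taken from the in-development feature and may still change), switching a table to time-ordered keys is a one-line difference:

CREATE TABLE orders_v7 (
    id   uuid PRIMARY KEY DEFAULT uuidv7(),  -- first 48 bits carry a timestamp, so values are roughly ordered
    note text
);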

What kind of app database

[...]

Operating PostgreSQL as a Data Source for Analytics Pipelines – Recap from the Stuttgart Meetup
Posted by Ilya Kosmodemiansky in Data Egret on 2025-07-01 at 11:29

PostgreSQL user groups are a fantastic way to build new connections and engage with the local community. Last week, I had the pleasure of speaking at the Stuttgart meetup, where I gave a talk on “Operating PostgreSQL as a Data Source for Analytics Pipelines.”

Below are my slides and a brief overview of the talk. If you missed the meetup but would be interested in an online repeat, let me know in the comments below!

 

PostgreSQL in Evolving Analytics Pipelines

As modern analytics pipelines evolve beyond simple dashboards into real-time and ML-driven environments, PostgreSQL continues to prove itself as a powerful, flexible, and community-driven database.
In my talk, I explored how PostgreSQL fits into modern data workflows and how to operate it effectively as a source for analytics.

From OLTP to Analytics

PostgreSQL is widely used for OLTP workloads – but can it serve OLAP needs as well? With physical or logical replication, PostgreSQL can act as a robust data source for analytics, enabling teams to offload read-intensive queries without compromising production.

Physical replication provides an easy-to-operate, read-only copy of your production PostgreSQL database. It lets you use the full power of SQL and relational features for reporting – without the risk of data scientists or analysts impacting production. It offers strong performance, though with some limitations: no materialized views, no temporary tables, and limited schema flexibility. Honestly, there are still ways analysts could harm production even from the replica side.

Logical replication offers a better solution:

  • It allows schema adaptation (e.g., different partitioning strategies)
  • Data retention beyond what’s on the primary
  • Support for multiple data sources feeding into one destination

However, it also brings complexity – especially around DDL handling, failover, and the need for more awareness from participating teams.
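
A minimal sketch of the publisher/subscriber pair implied above (all object names and the connection string are illustrative):

-- On the OLTP primary: publish only the tables the analysts need.
CREATE PUBLICATION analytics_pub FOR TABLE orders, customers;

-- On the analytics instance: the subscriber can use its own partitioning,
-- add extra indexes, and retain data longer than the primary does.
CREATE SUBSCRIPTION analytics_sub
    CONNECTION 'host=primary.example.com dbname=app user=replicator'
    PUBLICATION analytics_pub;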

The Shift Toward Modern Data Analytics

Data analytics in 2025 is more than jus

[...]

Behind the scenes: Speeding up pgstream snapshots for PostgreSQL
Posted by Esther Minano in Xata on 2025-07-01 at 10:30
How targeted improvements helped us speed up bulk data loads and complex schemas.

From 99.9% to 99.99%: Building PostgreSQL Resilience into Your Product Architecture
Posted by Umair Shahid in Stormatics on 2025-07-01 at 09:55

Most teams building production applications understand that “uptime” matters. I am writing this blog to demonstrate how much difference an extra 0.09% makes.

At 99.9% availability, your system can be down for over 43 minutes every month. At 99.99%, that window drops to just over 4 minutes. If your product is critical to business operations, customer workflows, or revenue generation, those 39 extra minutes of downtime each month can be the difference between trust and churn.

What Does 99.99% Look Like in Practice?

Before we dive into architecture, let us put the numbers into perspective:

Uptime SLA    Downtime per month    Downtime per year
99.9%         ~43 minutes           ~8.76 hours
99.95%        ~22 minutes           ~4.38 hours
99.99%        ~4.4 minutes          ~52.6 minutes
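
These figures follow directly from the error budget: a 30-day month has 30 × 24 × 60 = 43,200 minutes, so 99.9% availability allows 0.1% of that (about 43 minutes of downtime), 99.95% allows about 22 minutes, and 99.99% allows roughly 4.3 minutes.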

Moving from 99.9% to 99.99% availability encompasses the entire posture of resilience in your architecture, particularly in how your database layer handles failures, maintenance, upgrades, and unexpected events.

PostgreSQL by Default: Solid but Not Enough

PostgreSQL is a battle-tested, reliable RDBMS. It does what it promises, and does it well. However, the vanilla setup—single-node PostgreSQL, no failover, backups stored on local disk, and no monitoring—is only suitable for non-critical applications.

Out of the box, PostgreSQL off

[...]

An example for metrics assisted modeling of database performance
Posted by Dmitry Dolgov on 2025-07-01 at 00:00
PostgreSQL community conferences, like many other industry-driven events, usually do not have proceedings or anything similar. All you get is a slide deck and, if you're lucky, a video recording. But it often feels like not enough, as was the case with one of my talks last year at PGConfDE, where the topic included examples of mathematical modeling to predict how the database will behave. Plenty of things were not said or were not communicated in enough detail, because they simply did not fit the format. Long story short, I've managed to summarize the talk as a whitepaper [1] in much more depth and uploaded it to Zenodo, a European open research repository operated by CERN. Let me know what you think about it. [1]: https://zenodo.org/records/15786156

June, BuildFarm and ABIs
Posted by Mankirat Singh on 2025-07-01 at 00:00
This blog discusses the current progress and implementation of an automated ABI compliance reporting system for PostgreSQL, which is also my GSoC 2025 project.

The PG_TDE Extension Is Now Ready for Production
Posted by Jan Wieremjewicz in Percona on 2025-06-30 at 16:00
Lately, it feels like every time I go to a technical conference, someone is talking about how great PostgreSQL is. I’d think it’s just me noticing, but the rankings and surveys say otherwise. PostgreSQL is simply very popular. From old-school bare metal setups to VMs, containers, and fully managed cloud databases, PostgreSQL keeps gaining ground. And […]

Samed YILDIRIM
Posted by Andreas 'ads' Scherbaum on 2025-06-30 at 14:00
PostgreSQL Person of the Week Interview with Samed YILDIRIM: I’m originally from Turkey. The last city I lived in Turkey was Istanbul. I have now been living in Riga, Latvia for more than 5 years.

PgPedia Week, 2025-06-29
Posted by Ian Barwick on 2025-06-30 at 10:50

Housekeeping announcements:

  • this website's PostgreSQL installation is now on version 17 (insert champagne emoji here)
  • the search function now works properly with non-ASCII characters (there was an embarrassing oversight which went unnoticed until someone kindly pointed it out)

PostgreSQL 18 changes this week

This week there have been a couple of renamings:

  • psql's meta-command \close was renamed \close_prepared
  • pg_createsubscriber's option --remove was renamed to --clean

Both of these items were added during the PostgreSQL 18 development cycle so do not have any implications for backwards compatibility.

There have also been some fixes related to:

  • comments on NOT NULL constraints
  • virtual/generated columns

See below for other commits of note.

more...

How often is the query plan optimal?
Posted by Tomas Vondra on 2025-06-30 at 10:00

The basic promise of a query optimizer is that it picks the “optimal” query plan. But there’s a catch - the plan selection relies on cost estimates, calculated from selectivity estimates and cost of basic resources (I/O, CPU, …). So the question is, how often do we actually pick the “fastest” plan? And the truth is we actually make mistakes quite often.

Consider the following chart, with durations of a simple SELECT query with a range condition. The condition is varied to match different fractions of the table, shown on the x-axis (fraction of pages with matching rows). The plan is forced to use different scan methods using enable_ options, and the dark points mark runs when the scan method “won” even without using the enable_ parameters.

scan method durations for uniform data set

It shows that for selectivities ~1-5% (the x-axis is logarithmic), the planner picks an index scan, but this happens to be a poor choice. It takes up to ~10 seconds, and a simple “dumb” sequential scan would complete the query in ~2 seconds.
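
For anyone trying to reproduce the comparison, the scan method can be steered with the planner's enable_* settings; a rough sketch (table and column names are placeholders):

SET enable_indexscan = off;   -- discourage the index scan
SET enable_bitmapscan = off;  -- and the bitmap variant, so a sequential scan wins
EXPLAIN (ANALYZE) SELECT * FROM t WHERE a BETWEEN 1000 AND 5000;
RESET enable_indexscan;
RESET enable_bitmapscan;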

Performance Test for Percona Transparent Data Encryption (TDE)
Posted by Andreas Scherbaum on 2025-06-29 at 22:00
This blog post compares the performance of Percona's pg_tde extension for PostgreSQL with that of unmodified PostgreSQL. The pg_tde extension provides encryption for both data pages on disk and WAL files on disk - Encryption at Rest. The extension is currently in the rc stage (release candidate). This test does not compare pg_tde to other products; it measures the extension's performance against unmodified PostgreSQL and shows how much overhead encrypting data at rest and WAL adds.

PgPedia Week, 2025-06-22
Posted by Ian Barwick on 2025-06-28 at 04:22

PgPedia Week has been delayed this week due to malaise and other personal circumstances.

PostgreSQL 18 changes this week

  • pg_dump has gained the ability to dump statistics on foreign tables
  • various bugfixes for the new amcheck function gin_index_check()

PostgreSQL 18 articles

  • PostgreSQL 18 just dropped: 10 powerful new features devs need to know (2025-06-20) - Devlink Tips
  • Preserve optimizer statistics during major upgrades with PostgreSQL v18 (2025-06-17) - Laurenz Albe / CYBERTEC

more...

Choosing the Right Barman Backup Type and Mode for Your PostgreSQL Highly Available Cluster
Posted by semab tariq in Stormatics on 2025-06-27 at 10:55

When running a PostgreSQL database in a High Availability (HA) cluster, it’s easy to assume that having multiple nodes means your data is safe. But HA is not a replacement for backups. If someone accidentally deletes important data or runs a wrong update query, that change will quickly spread to all nodes in the cluster. Without proper safeguards, that data is gone everywhere. In these cases, only a backup can help you restore what was lost.

The case mentioned above isn’t the only reason backups are important. In fact, many industries have strict compliance requirements that make regular backups mandatory. This makes backups essential not just for recovering lost data, but also for meeting regulatory standards.

Barman is a popular tool in the PostgreSQL ecosystem for managing backups, especially in High Availability (HA) environments. It’s known for being easy to set up and for offering multiple types and modes of backups. However, this flexibility can also be a bit overwhelming at first. That’s why I’m writing this blog to break down each backup option in a simple and clear way, so you can choose the one that best fits your business needs.

1. Backup Type in Barman

Barman primarily supports three backup types through various combinations of backup methods. The table below highlights the key differences between them:

Feature         Full Backup                                       Incremental Backup                                      Differential Backup
Data Captured   An entire database at a specific point in time.   Changes since the last backup (full or incremental).    Changes since the last full backup.
Relative To     N/A (it's the foundation)                         Previous backup (full or incremental).                  The last full backup.
[...]

Entity Relationship Maps
Posted by Dave Stokes on 2025-06-25 at 13:26

    Even the most experienced database professionals are known to feel a little anxious when peering into an unfamiliar database. Hopefully, they inspect it to see how the data is normalized and how the various tables are combined to answer complex queries. Entity Relationship Maps (ERM) provide a visual overview of how tables are related and can document the structure of the data.

    The Community Edition of DBeaver can provide an ERM easily.  First, connect to the database. Right-click on the 'Tables' and then select 'View Diagram'.


The ERM is then displayed.



And it gets better! Not sure how two tables are joined? Click on the link between the tables, and the names of the columns you will want to use in your JOIN statement are highlighted.



Conclusion

    Exploring an unfamiliar database can be daunting, but Entity Relationship Maps provide a way to navigate the territory. And DBeaver is a fantastic tool for working with databases.


CNPG Recipe 20 – Finer Control of Postgres Clusters with Readiness Probes
Posted by Gabriele Bartolini in EDB on 2025-06-25 at 06:04

Explore the new readiness probe introduced in CloudNativePG 1.26, which advances Kubernetes-native lifecycle management for PostgreSQL. Building on the improved probing infrastructure discussed in my previous article, this piece focuses on how readiness probes ensure that only fully synchronised and healthy instances—particularly replicas—are eligible to serve traffic or be promoted to primary. Special emphasis is placed on the streaming probe type and its integration with synchronous replication, giving administrators fine-grained control over failover behaviour and data consistency.

PGConf.EU 2025 - Registration opened
Posted by Magnus Hagander in PostgreSQL Europe on 2025-06-24 at 14:37

The registration for PGConf.EU 2025, which will take place on 21-24 October in Riga, is now open.

We have a limited number of tickets available for purchase with the discount code EARLYBIRD.

This year, the first day of training sessions has been replaced with a Community Events Day. This day has a more limited space, and can be booked as part of the conference registration process or added later, as long as seats last.

We hope you will be able to join us in Riga in October!

Beyond the Basics of Logical Replication
Posted by Radim Marek on 2025-06-24 at 07:10

With First Steps with Logical Replication we set up a basic working replication between a publisher and a subscriber and were introduced to the fundamental concepts. In this article, we're going to expand on the practical aspects of logical replication: operational management, monitoring, and a deep dive into the foundations of logical decoding.

Initial Data Copy

As we demonstrated in the first part, when setting up the subscriber you can choose whether or not to rely on the initial data copy using the option WITH (copy_data = false). While the default copy is incredibly useful behavior, it has characteristics you should understand before using it in a production environment.
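
As a reminder of the syntax (the subscription, publication and connection details below are placeholders):

CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=publisher.example.com dbname=app user=replicator'
    PUBLICATION my_pub
    WITH (copy_data = false);  -- skip the initial snapshot and stream changes only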

The mechanism effectively asks the publisher to copy the table data by taking a snapshot (courtesy of MVCC), sending it to the subscriber, and thanks to the replication slot "bookmark," seamlessly continues streaming the changes from the point the snapshot was taken.

Simplicity is the key feature here, as a single command handles the snapshot, transfer, and transition to ongoing streaming.

The trade-off you're making is performance, solely due to the fact that the copy uses a single process per table. While it works almost instantly for test tables, you will encounter notable delay and overhead when dealing with tables holding gigabytes of data.

Although parallelism can be controlled by the max_sync_workers_per_subscription configuration parameter, it still might leave you waiting for hours (or days) for any real-life database to get replicated. You can monitor whether the tables have already been synchronized or are still waiting/in progress using the pg_subscription_rel catalog.

SELECT srrelid::regclass AS table_name, srsubstate
FROM pg_subscription_rel;

Where each table will have one of the following states:

  • i not yet started
  • d copy is in progress
  • s syncing (or waiting for confirmation)
  • r done & replicating

Luckily, the state r indicates that the s

[...]

Contributions for the week of 2025-06-09 (Week 24)
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-06-24 at 06:36

On June 9th, Andrzej Nowicki gave a talk about “From Queries to Pints: Building a Beer Recommendation System with pgvector” at the Malmö PostgreSQL User Group.

On June 3rd, the 5th PostgreSQL User Group NRW MeetUp took place in Germany.

Speakers:
  • Josef Machytka about “Boldly Migrate to PostgreSQL - Introducing credativ-pg-migrator”
  • Mathis Rudolf about “PostgreSQL Locking – Das I in ACID”
  • Christoph Berg about “Modern VACUUM”

POSETTE: An Event for Postgres 2025 took place June 10-12 online. Organized by:

Talk Selection Team: Claire Giordano, Daniel Gustafsson, Krishnakumar “KK” Ravi, Melanie Plageman

Hosts: Adam Wolk, Boriss Mejías, Claire Giordano, Derk van Veen, Floor Drees, Melanie Plageman, Thomas Munro

Speakers: Abe Omorogbe, Adam Wolk, Alexander Kukushkin, Amit Langote, Ashutosh Bapat, Bohan Zhang, Boriss Mejías, Bruce Momjian, Charles Feddersen, Chris Ellis, Cédric Villemain, David Rowley, Derk van Veen, Ellyne Phneah, Gayathri Paderla, Heikki Linnakangas, Jan Karremans, Jelte Fennema-Nio, Jimmy Zelinskie, Johannes Schuetzner, Karen Jex, Krishnakumar “KK” Ravi, Lukas Fittl, Marco Slot, Matt McFarland, Michael John Pena, Nacho Alonso Portillo, Neeta Goel, Nitin Jadhav, Palak Chaturvedi, Pamela Fox, Peter Farkas, Philippe Noël, Polina Bungina, Rahila Syed, Robert Haas, Sandeep Rajeev, Sarah Conway, Sarat Balijapelli, Shinya Kato, Silvano Coriani, Taiob Ali, Tomas Vondra, Varun Dhawan, Vinod Sridharan


Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.