Latest Blog Posts

Not a Backup Replacement: What PostgreSQL Instant Recovery Actually Solves
Posted by Zhang Chen on 2026-01-09 at 00:00
When people first hear "instant recovery," they often assume it is a replacement for backup—or worse, a risky shortcut only experts should attempt. But the truth is exactly the opposite. Instant recovery does not challenge any design boundary of PostgreSQL. It simply rejects a premise the kernel never promised: "the database must be in a bootable state." This article explains, from the perspective of PostgreSQL internals, why instant recovery is a capability that was always meant to exist, how it differs fundamentally from backup recovery, and why time itself is often the scarcest resource at an incident scene.

PgPedia Week, 2025-12-14
Posted by Ian Barwick on 2026-01-07 at 22:10
PostgreSQL 19 changes this week:
  • ALTER TABLE ... SPLIT PARTITION ... syntax added
  • ALTER TABLE ... MERGE PARTITIONS ... syntax added
  • pg_stat_progress_analyze: column started_by added
  • pg_stat_progress_vacuum: columns mode and started_by added
  • vacuumdb: option --dry-run added
PostgreSQL 18 articles:
  • Postgres 18 New Default for Data Checksums and How to Deal with Upgrades (2025-12-11) - Greg Sabino Mullane / Crunchy Data

more...

Quick and dirty loading of CSV files
Posted by Hubert 'depesz' Lubaczewski on 2026-01-07 at 00:10
Back in September 2025, David Fetter asked on IRC about a tool to quickly load CSV files into a database. One that would require minimal configuration and try to figure out as much as possible on its own. I thought it would be a great idea. Plus, I'm trying to learn more JavaScript / Node, so figured … Continue reading "Quick and dirty loading of CSV files"

PostgreSQL Meetup in Frankfurt December 2025
Posted by Andreas Scherbaum on 2026-01-06 at 22:00
On December 10th, 2025, the PostgreSQL December Meetup in Frankfurt (Main) took place. We had two speakers, and a nice dinner. This Meetup was organized around the IT-Tage, and the Meetup organizers had booked a meeting room in the Scandic Frankfurt Museumsufer, about a 15-minute walk from the conference; unfortunately it was raining that evening. Dirk Aumueller: Running PostgreSQL with Podman, Quadlet & Systemd. Dirk spoke about how to run PostgreSQL inside a Podman Quadlet which is managed by systemd.

Small improvement for pretty-printing in paste.depesz.com
Posted by Hubert 'depesz' Lubaczewski on 2026-01-06 at 15:13
As you may know, some time ago I made a paste service, mostly to use for queries or related text to share on IRC. Part of it is that it also has a pretty printer for provided queries. Recently I realized that in the case of complex join conditions, the output is, well, sub-optimal. For example: SELECT … Continue reading "Small improvement for pretty-printing in paste.depesz.com"

What is index overhead on writes?
Posted by Hubert 'depesz' Lubaczewski on 2026-01-06 at 11:57
One of the things people learn is that adding indexes isn't free. All write operations (insert, update, delete) will be slower – well, they have to update the index. But realistically – how much slower? Full tests should involve lots of operations on realistic data, but I just wanted to see some basic info. So I figured … Continue reading "What is index overhead on writes?"
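The post digs into the actual numbers; as a minimal sketch of how such a comparison can be set up in plain SQL (the table names here are invented for illustration):

-- identical tables, one with an index on id
CREATE TABLE t_no_idx (id int, payload text);
CREATE TABLE t_with_idx (id int, payload text);
CREATE INDEX ON t_with_idx (id);

-- compare the "Execution Time" reported for the two inserts
EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO t_no_idx SELECT i, md5(i::text) FROM generate_series(1, 100000) i;

EXPLAIN (ANALYZE, BUFFERS)
INSERT INTO t_with_idx SELECT i, md5(i::text) FROM generate_series(1, 100000) i;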

pg_acm is here!
Posted by Henrietta Dombrovskaya on 2026-01-06 at 11:15

I am writing this post over the weekend but scheduling it to be published on Tuesday, after the PG DATA CfP closes, because I do not want to distract anyone, including myself, from the submission process.

A couple of months ago, I created a placeholder in my GitHub, promising to publish pg_acm before the end of the year. The actual day I pushed the initial commit was January 3, but it still counts, right? At least, it happened before the first Monday of 2026!

It has been about two years since I first spoke publicly about additional options I would love to see in PostgreSQL privileges management. Now I know it was not the most brilliant idea to frame it as “what’s wrong with Postgres permissions,” and this time I am much better with naming.

pg_acm stands for “Postgres Access Control Management.” The key feature of this framework is that each schema is created with a set of predefined roles and default privileges, which makes it easy to achieve complete isolation between different projects sharing the same database, and allows authorized users to manage access to their data without having superuser privileges.
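The framework's own objects live in the repository, but as a rough sketch of the underlying PostgreSQL mechanics it builds on (per-schema roles plus default privileges), the general idea looks like the following; the schema and role names here are illustrative, not pg_acm's actual API:

-- one schema per project, with read and write roles scoped to it
CREATE SCHEMA project_a;
CREATE ROLE project_a_read NOLOGIN;
CREATE ROLE project_a_write NOLOGIN;

GRANT USAGE ON SCHEMA project_a TO project_a_read, project_a_write;

-- default privileges for tables later created in this schema by the current role
ALTER DEFAULT PRIVILEGES IN SCHEMA project_a
    GRANT SELECT ON TABLES TO project_a_read;
ALTER DEFAULT PRIVILEGES IN SCHEMA project_a
    GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO project_a_write;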

Please take a look, give it a try, and let me know what’s wrong with my framework 🙂

Stabilizing Benchmarks
Posted by Tomas Vondra on 2026-01-06 at 10:00

I do a fair amount of benchmarking as part of development, both on my own patches and while reviewing patches by others. That often requires dealing with noise, particularly for small optimizations. Here's an overview of the methods I use to filter out random variations / noise.

Most of the time it’s easy - the benefits are large and obvious. Great! But sometimes we need to care about cases when the changes are small (think less than 5%).

Dissecting PostgreSQL Data Corruption
Posted by Josef Machytka in credativ on 2026-01-06 at 09:03

PostgreSQL 18 made one very important change – data block checksums are now enabled by default for new clusters at cluster initialization time. I already wrote about it in my previous article. I also mentioned that there are still many existing PostgreSQL installations without data checksums enabled, because this was the default in previous versions. In those installations, data corruption can sometimes cause mysterious errors and prevent normal operation. In this post, I want to dissect common PostgreSQL data corruption modes, show how to diagnose them, and sketch how to recover from them.

Corruption in PostgreSQL relations without data checksums surfaces as low-level errors like "invalid page in block xxx", transaction ID errors, TOAST chunk inconsistencies, or even backend crashes. Unfortunately, some backup strategies can mask the corruption. If the cluster does not use checksums, then tools like pg_basebackup, which copy data files as they are, cannot perform any validation of the data, so corrupted pages can quietly end up in a base backup. If checksums are enabled, pg_basebackup verifies them by default unless --no-verify-checksums is used. In practice, these low-level errors often become visible only when we directly access the corrupted data. Some data is rarely touched, which means corruption often surfaces only during an attempt to run pg_dump, because pg_dump must read all data.
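Before digging into individual errors, it is worth confirming whether checksums are enabled at all and whether any failures have already been recorded. A minimal check using standard views (nothing here is specific to the article's setup):

-- 'on' only if the cluster was initialized with checksums or they were enabled later
SHOW data_checksums;

-- cumulative checksum failures per database (only populated when checksums are enabled)
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE checksum_failures IS NOT NULL;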

Typical errors include:

-- invalid page in a table:
pg_dump: error: query failed: ERROR: invalid page in block 0 of relation base/16384/66427
pg_dump: error: query was: SELECT last_value, is_called FROM public.test_table_bytea_id_seq

-- damaged system columns in a tuple:
pg_dump: error: Dumping the contents of table "test_table_bytea" failed: PQgetResult() failed.
pg_dump: error: Error message from server: ERROR: could not access status of transaction 3353862211
DETAIL: Could not open file "pg_xact/0C7E": No such file or directory.
pg_dump: error: The command was: COPY publ
[...]

Exploration: CNPG Logical Replication in PostgreSQL
Posted by Umut TEKIN in Cybertec on 2026-01-06 at 06:05

Introduction


PostgreSQL has built-in support for logical replication. Unlike streaming replication, which works at the block level, logical replication replicates data changes based on replica-identities, usually primary keys, rather than exact block addresses or byte-by-byte copies.

PostgreSQL logical replication follows a publish–subscribe model. One or more subscribers can subscribe to one or more publications defined on a publisher node. Subscribers pull data changes from the publications they are subscribed to, and they can also act as publishers themselves, enabling cascading logical replication.
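As a reminder of the moving parts before the CNPG-specific setup below, a minimal publisher/subscriber pair in plain SQL looks roughly like this (the object names and connection string are placeholders):

-- on the publisher
CREATE PUBLICATION app_pub FOR TABLE pgbench_accounts;

-- on the subscriber
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=source-host dbname=appdb user=replicator'
    PUBLICATION app_pub;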

Logical replication has many use cases, such as:

  • Nearly zero downtime PostgreSQL upgrades
  • Data migrations
  • Consolidating multiple databases for reporting purposes
  • Real time analytics
  • Replication between PostgreSQL instances on different platforms

Beyond these use cases, if your PostgreSQL instance is running on RHEL and you want to migrate to CNPG, streaming replication could be used. However, since CNPG images are based on Debian, the locale configuration must be compatible, and when using libc-based collations, differences in glibc versions can affect collation behavior. That is why we will set up logical replication instead, avoiding these limitations.
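One quick way to see which locale settings and collation provider the source cluster relies on before deciding between streaming and logical replication (standard pg_database columns in recent releases):

SELECT datname, datcollate, datctype, datlocprovider, datcollversion
FROM pg_database
WHERE datname = current_database();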

Preparation of The Source Cluster

The specifications of the source PostgreSQL instance:

cat /etc/os-release | head -5
NAME="Rocky Linux"
VERSION="10.1 (Red Quartz)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="10.1"

psql -c "select version();"
                                                 version                                                  
----------------------------------------------------------------------------------------------------------
 PostgreSQL 18.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.3.1 20250617 (Red Hat 14.3.1-2), 64-bit
(1 row)

Populating some test data using pgbench and checking pgbench_accounts table:

/usr/pgsql-18/bin/pgbench -i -s 10
dropping old tables...
[...]

PostgreSQL 18 RETURNING Enhancements: A Game Changer for Modern Applications
Posted by Ahsan Hadi in pgEdge on 2026-01-06 at 05:45

PostgreSQL 18 has arrived with some fantastic improvements, and among them, the RETURNING clause enhancements stand out as a feature that every PostgreSQL developer and DBA should be excited about. In this blog, I'll explore these enhancements, with particular focus on the MERGE RETURNING clause enhancement, and demonstrate how they can simplify your application architecture and improve data tracking capabilities.

Background: The RETURNING Clause Evolution

The RETURNING clause has been a staple of Postgres for years, allowing INSERT, UPDATE, and DELETE operations to return data about the affected rows. This capability eliminates the need for follow-up queries, reducing round trips to the database and improving performance. However, before Postgres 18, the RETURNING clause had significant limitations that forced developers into workarounds and compromises. In Postgres 17, the community introduced RETURNING support for MERGE statements (commit c649fa24a), which was already a major step forward. MERGE itself had been introduced back in Postgres 15, providing a powerful way to perform conditional INSERT, UPDATE, or DELETE operations in a single statement, but without RETURNING it didn't provide an easy way to see what you'd accomplished.

What's New in PostgreSQL 18?

Postgres 18 takes the RETURNING clause to the next level by introducing OLD and NEW aliases (commit 80feb727c8), authored by Dean Rasheed and reviewed by Jian He and Jeff Davis. This enhancement fundamentally changes how you can capture data during DML operations.
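A small, hedged example of what this enables (table and column names are invented for illustration):

-- capture the previous and the updated value in a single statement (PostgreSQL 18+)
UPDATE accounts
   SET balance = balance - 100
 WHERE id = 42
RETURNING old.balance AS balance_before,
          new.balance AS balance_after;

-- with MERGE, merge_action() (Postgres 17+) plus the new aliases show what happened
MERGE INTO accounts a
USING (VALUES (42, 500)) AS s(id, amount) ON a.id = s.id
WHEN MATCHED THEN UPDATE SET balance = s.amount
WHEN NOT MATCHED THEN INSERT (id, balance) VALUES (s.id, s.amount)
RETURNING merge_action(), old.balance, new.balance;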

The Problem Before PostgreSQL 18

Previously, despite being syntactically similar across query types, the RETURNING clause had these limitations:
  • INSERT and UPDATE could only return new/current values.
  • DELETE could only return old values.
  • MERGE would return values based on the internal action executed (INSERT, UPDATE, or DELETE).
If you needed to compare before-and-after values or track what actually changed during an update, you had l[...]

Not All Unrecoverable PostgreSQL Data Is Actually Lost
Posted by Zhang Chen on 2026-01-06 at 00:00
Most teams assume data loss means restoring from backup. This article introduces the Instant Recovery mindset, explains why PostgreSQL makes it possible, and shows how PDU turns recoverability into a practical, predictable process.

Inventing A Cost Model for PostgreSQL Local Buffers Flush
Posted by Andrei Lepikhov in pgEdge on 2026-01-05 at 12:39

In this post, I describe experiments on the write-versus-read costs of PostgreSQL's temporary buffers. For the sake of accuracy, the PostgreSQL function set is extended with tools to measure buffer flush operations. The measurements show that writes are approximately 30% slower than reads. Based on these results, the following cost estimation formula for the optimiser is proposed:
flush_cost = 1.30 × dirtied_bufs + 0.01 × allocated_bufs.
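For illustration, a query that dirties 1,000 temporary buffers out of 5,000 allocated would be charged 1.30 × 1,000 + 0.01 × 5,000 = 1,350 extra cost units under this formula.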

Introduction

Temporary tables in PostgreSQL have always been parallel restricted. From my perspective, the reasoning is straightforward: temporary tables exist primarily to compensate for the absence of relational variables, and for performance reasons, they should remain as simple as possible. Since PostgreSQL parallel workers behave like separate backends, they don't have access to the leader process's local state, where temporary tables reside. Supporting parallel operations on temporary tables would significantly increase the complexity of this machinery.

However, we now have at least two working implementations of parallel temporary table support: Postgres Pro and Tantor. One more reason: identification of temporary tables within a UTILITY command is an essential step toward auto DDL in logical replication. So, maybe it is time to propose such a feature for PostgreSQL core.

After numerous code improvements over the years, AFAICS, only one fundamental problem remains: temporary buffer pages are local to the leader process. If these pages don't match the on-disk table state, parallel workers cannot access the data.

A comment in the code (80558c1) made by Robert Haas in 2015 clarifies the state of the art:

/*
 * Currently, parallel workers can't access the leader's temporary
 * tables.  We could possibly relax this if we wrote all of its
 * local buffers at the start of the query and made no changes
 * thereafter (maybe we could allow hint bit changes), and if we
 * taught the workers to read them.  Writing a large number of
 * temporary buffers could be 
[...]

PostgreSQL Table Rename and Views – An OID Story
Posted by Deepak Mahto on 2026-01-05 at 08:53

Recently during a post-migration activity, we had to populate a very large table with a new UUID column (NOT NULL with a default) and backfill it for all existing rows.

Instead of doing a straight:

ALTER TABLE ... ADD COLUMN ... DEFAULT ... NOT NULL;

we chose the commonly recommended performance approach:

  • Create a new table (optionally UNLOGGED),
  • Copy the data,
  • Rename/swap the tables.

This approach is widely used to avoid long-running locks and table rewrites but it comes with hidden gotchas. This post is about one such gotcha: object dependencies, especially views, and how PostgreSQL tracks them internally using OIDs.
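The dependency is easy to see directly in the catalogs; a query along these lines (using the vw_test1 view from the sample later in this post) shows which table OID the view's rewrite rule actually points at, and that OID does not change when the table is renamed:

SELECT DISTINCT v.relname AS view_name,
       t.oid     AS referenced_oid,
       t.relname AS referenced_table
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid AND d.classid = 'pg_rewrite'::regclass
JOIN pg_class v ON v.oid = r.ev_class
JOIN pg_class t ON t.oid = d.refobjid AND d.refclassid = 'pg_class'::regclass
WHERE v.relname = 'vw_test1'
  AND t.relkind = 'r';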

A quick but important note

On PostgreSQL 11+, adding a column with a constant default is a metadata-only operation and does not rewrite the table. However:

  • This is still relevant when the default is volatile (like uuidv7()),
  • Or when you must immediately enforce NOT NULL,
  • Or when working on older PostgreSQL versions,
  • Or when rewriting is unavoidable for other reasons.

So the rename approach is still valid but only when truly needed.
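To make the distinction concrete, a rough illustration (uuidv7() is built in as of PostgreSQL 18; on older versions an extension-provided generator plays the same role):

-- constant default: metadata-only on PostgreSQL 11+, no table rewrite
ALTER TABLE test1 ADD COLUMN status text NOT NULL DEFAULT 'new';

-- volatile default: every existing row needs its own value, so the table is rewritten
ALTER TABLE test1 ADD COLUMN uid uuid NOT NULL DEFAULT uuidv7();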

The Scenario: A Common Performance Optimization

Picture this: you've got a massive table with millions of rows, and you need to add a column with a unique UUID default value and a NOT NULL constraint. The naive approach? Just run ALTER TABLE ADD COLUMN. But wait: for large tables, this can lock your table while PostgreSQL rewrites every single row, and it can take considerable time.

So what do we do? We get clever. We use an intermediate table (optionally an unlogged table) with the rename trick; below is a sample created to show the scenario.

drop table test1;
create table test1
(col1 integer, col2 text, col3 timestamp(0));

insert into test1 
select col1, col1::text , (now() - (col1||' hour')::interval) 
from generate_series(1,1000000) as col1;

create view vw_test1 as 
select * from test1;


CREATE TABLE test1_new 
(like test1 including all);
alte
[...]

Extreme Recovery Series: 4 Hours to Rescue Core Data from a Domestic PG Database
Posted by Zhang Chen on 2026-01-05 at 00:00
A client accidentally ran rm -rf /*, wiping out the entire OS and database. After disk recovery experts salvaged the data files, PDU adapted to this domestic PostgreSQL variant and completed full data recovery in just 4 hours.

How to Recover PostgreSQL When Data Dictionary Gets Corrupted - A Real Case Study
Posted by Zhang Chen on 2026-01-05 at 00:00
When pg_type and pg_attribute are partially destroyed, how do you piece together a corrupted database? This real-world case reveals an ingenious workaround that saved 46% of the data.

World First! Secrets Behind PostgreSQL Fragment Scanning Recovery
Posted by Zhang Chen on 2026-01-05 at 00:00
DROP TABLE with no backup? Most consider it game over. Discover how PDU achieves the "impossible" - scanning raw disk blocks and matching table structures to resurrect your lost data.

Mission Impossible: How We Recovered 1TB of Data in 48 Hours
Posted by Zhang Chen on 2026-01-05 at 00:00
A corrupted disk. A dead database. Unusable backups. 1.5TB of critical business data hanging by a thread. This is the story of how PDU turned an impossible situation into a triumph.

Chaos testing the CloudNativePG project
Posted by Floor Drees in CloudNativePG on 2026-01-05 at 00:00
Meet the mentee: Yash Agarwal worked with the project maintainers on adding chaos testing to CloudNativePG, as part of the LFX mentorship program.

PgPedia Week, 2025-12-07
Posted by Ian Barwick on 2026-01-04 at 23:44
PostgreSQL 19 changes this week:
  • pg_stat_replication_slots: newly added column slotsync_skip_at renamed to slotsync_last_skip
  • pg_dsm_registry_allocations: improvements to the display of the size of DSAs and dshashes
PostgreSQL 18 articles:
  • A deeper look at old UUIDv4 vs new UUIDv7 in PostgreSQL 18 (2025-12-05) - Josef Machytka / Credativ

more...

Waiting for PostgreSQL 19 – Implement ALTER TABLE … MERGE/SPLIT PARTITIONS … command
Posted by Hubert 'depesz' Lubaczewski on 2026-01-04 at 17:30
On 14th of December 2025, Alexander Korotkov committed patch: Implement ALTER TABLE ... MERGE PARTITIONS ... command   This new DDL command merges several partitions into a single partition of the target table. The target partition is created using the new createPartitionTable() function with the parent partition as the template.   This commit comprises a … Continue reading "Waiting for PostgreSQL 19 – Implement ALTER TABLE … MERGE/SPLIT PARTITIONS … command"
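Going by the commit message, usage looks roughly like this (the table and partition names are invented for illustration):

-- merge several existing partitions into one new partition of the parent table
ALTER TABLE measurements
    MERGE PARTITIONS (measurements_2025_01, measurements_2025_02, measurements_2025_03)
    INTO measurements_2025_q1;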

Sticking with Open Source: pgEdge and CloudNativePG
Posted by Floor Drees in CloudNativePG on 2026-01-02 at 00:00
We talked to Matthew Mols, Sr. Director of Engineering at pgEdge, about how CloudNativePG enables them to meet the requirements of their customers using just open source.

CloudNativePG in 2025: CNCF Sandbox, PostgreSQL 18, and a new era for extensions
Posted by Gabriele Bartolini in EDB on 2025-12-31 at 11:50

2025 marked a historic turning point for CloudNativePG, headlined by its acceptance into the CNCF sandbox and a subsequent application for incubation. Throughout the year, the project transitioned from a high-performance operator to a strategic architectural partner within the cloud-native ecosystem, collaborating with projects like Cilium and Keycloak. Key milestones included the co-development of the extension_control_path feature for PostgreSQL 18, revolutionising extension management via OCI images, and the General Availability of the Barman Cloud Plugin. With nearly 880 commits (marking five consecutive years of high-velocity development) and over 132 million downloads, CloudNativePG has solidified its position as the standard for declarative, resilient, and sovereign PostgreSQL on Kubernetes.

PostgreSQL Recovery Internals
Posted by Imran Zaheer in Cybertec on 2025-12-30 at 05:30

Modern databases must know how to handle failures gracefully, whether they are system failures, power failures, or software bugs, while also ensuring that committed data is not lost. PostgreSQL achieves this with its recovery mechanism; it allows the recreation of a valid functioning system state from a failed one. The core component that makes this possible is Write-Ahead Logging (WAL); this means PostgreSQL records all the changes before they are applied to the data files. This way, WAL makes the recovery smooth and robust.

In this article, we are going to look at the under-the-hood mechanism for how PostgreSQL undergoes recovery and stays consistent and how the same mechanism powers different parts of the database. We will see the recovery lifecycle, recovery type selection, initialization and execution, how consistent states are determined, and reading WAL segment files for the replay.

We will show how PostgreSQL achieves durability (the "D" in ACID), as database recovery and the WAL mechanism together ensure that all the committed transactions are preserved. This plays a fundamental role in making PostgreSQL fully ACID compliant so that users can trust that their data is safe at all times.

Note: The recovery internals described in this article are based on the PostgreSQL version 18.1.

Overview

PostgreSQL recovery involves replaying the WAL records on the server to restore the database to a consistent state. This process ensures data integrity and protects against data loss in the event of system failures. In such scenarios, PostgreSQL efficiently manages its recovery processes, returning the system to a healthy operational state. Furthermore, in addition to addressing system failures and crashes, PostgreSQL's core recovery mechanism performs several other critical functions.
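The starting point for that replay is recorded in the control file; on a running server you can inspect it with a standard function (redo_lsn is where WAL replay would begin after a crash):

SELECT checkpoint_lsn, redo_lsn, redo_wal_file, timeline_id
FROM pg_control_checkpoint();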

The recovery mechanism, powered by WAL and involving the replay of records until a consistent state is achieved (WAL → Redo → Consistency), facilitates several advanced database capabilities:

  • R
[...]

PostgreSQL Contributor Story: Manni Wood
Posted by Floor Drees in EDB on 2025-12-29 at 08:59
Earlier this year we started a program ("Developer U") to help colleagues who show promise for PostgreSQL development become contributors. Manni's manager is responsible for his participation in the program. He had always assumed he didn't have the skills, but he taught himself some x86 assembler and C in his spare time, and when it came time to apply, she encouraged him to give it a shot.

FOSS4GNA 2025: Summary
Posted by REGINA OBE in PostGIS on 2025-12-28 at 23:37

Free and Open Source for Geospatial North America (FOSS4GNA) 2025 ran November 3-5, 2025, and I think it was one of the better FOSS4GNAs we've had. I was on the programming and workshop committees, and with the government shutdown we were worried things could go badly, since people started withdrawing their talks and workshops very close to curtain time. Despite our attendance being lower than in prior years, it felt crowded enough, and on the bright side, people weren't fighting for chairs even in the most crowded talks. FOSS4G International 2025 happened two weeks later in Auckland, New Zealand, and from what I heard it had a fairly decent turnout too.

Continue reading "FOSS4GNA 2025: Summary"

Contributions for week 53, 2025
Posted by Cornelia Biacsics in postgres-contrib.org on 2025-12-28 at 21:22

Emma Sayoran organized a PUG Armenia speed networking meetup on December 25 2025.

The FOSDEM PGDay 2026 schedule was announced on Dec 23, 2025. Call for Papers committee:

  • Teresa Lopes
  • Stefan Fercot
  • Flavio Gurgel

Community Blog Posts:

Improved Quality in OpenStreetMap Road Network for pgRouting
Posted by Ryan Lambert on 2025-12-28 at 05:01

Recent changes in the software bundled in PgOSM Flex resulted in unexpected improvements when using OpenStreetMap roads data for routing. The short story: routing with PgOSM Flex 1.2.0 is faster, easier, and produces higher quality data for routing! I came to this conclusion after completing a variety of testing with the old and new versions of PgOSM Flex. This post outlines my testing and findings.

The concern I had before this testing was that the variety of changes involved in preparing data for routing in PgOSM Flex 1.2.0 might have degraded routing quality. I am beyond thrilled with what I found instead. The quality of the generated network didn't suffer at all; it was a major win!

What Changed?

The changes started with PgOSM Flex 1.1.1, which bumped the internal versions used in PgOSM Flex to Postgres 18, PostGIS 3.6, osm2pgsql 2.2.0, and Debian 13. No significant changes were expected to be bundled in that release. After v1.1.1 was released, it came to my attention that pgRouting 4.0 had been released and that update broke the routing instructions in PgOSM Flex's documentation. This was thankfully reported by Travis Hathaway, who also helped verify the updates to the process.

pgRouting 4 removed the pgr_nodeNetwork, pgr_createTopology, and pgr_analyzeGraph functions. Removing these functions was the catalyst for the changes made in PgOSM Flex 1.2.0. I had used those pgr_* functions as part of my core process in data preparation for routing for as long as I have used pgRouting.

After adjusting the documentation it became clear there were performance issues using the replacement functions in pgRouting 4.0, namely in pgr_separateTouching(). The performance issue in the pgRouting function is reported as pgrouting#3010. Working through the performance challenges resulted in PgOSM Flex 1.1.2 and ultimately PgOSM Flex 1.2.0 that now uses a custom procedure to prepare the edge network far better suited to OpenStreetMap data.

PostgreSQL as a Graph Database: Who Grabbed a Beer Together?
Posted by Taras Kloba on 2025-12-27 at 00:00

Graph databases have become increasingly popular for modeling complex relationships in data. But what if you could leverage graph capabilities within the familiar PostgreSQL environment you already know and love? In this article, I’ll explore how PostgreSQL can serve as a graph database using the Apache AGE extension, demonstrated through a fun use case: analyzing social connections in the craft beer community using Untappd data.

This article is based on my presentation at PgConf.EU 2025 in Riga, Latvia. Special thanks to Pavlo Golub, my co-founder of the PostgreSQL Ukraine community, whose Untappd account served as the perfect example for this demonstration.

Pavlo Golub's Untappd profile - the starting point for our graph analysis

Why Graph Databases?

Traditional relational databases excel at storing structured data in tables, but they can struggle when dealing with highly interconnected data. Consider a social network where you want to find the shortest path between two users through their mutual connections—this requires recursive queries with CTEs, joining multiple tables, and becomes increasingly complex as the depth of relationships grows.

You might say: “But I can do this with relational tables!” And yes, you would be right in some cases. But graphs offer a different approach that makes certain operations much more intuitive and efficient.

Graph databases model data as nodes (vertices) and edges (relationships), making them ideal for:

  • Social networks
  • Recommendation engines
  • Fraud detection
  • Knowledge graphs
  • Network topology analysis
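As a taste of what this looks like in practice with Apache AGE (the graph, label, and property names here are made up for illustration, not the article's actual dataset):

CREATE EXTENSION IF NOT EXISTS age;
LOAD 'age';
SET search_path = ag_catalog, "$user", public;

SELECT create_graph('untappd');

-- two users connected by a "checked in together" relationship
SELECT * FROM cypher('untappd', $$
    CREATE (:User {name: 'Pavlo'})-[:DRANK_WITH {venue: 'Riga'}]->(:User {name: 'Taras'})
$$) AS (result agtype);

-- who grabbed a beer together?
SELECT * FROM cypher('untappd', $$
    MATCH (a:User)-[:DRANK_WITH]-(b:User)
    RETURN a.name, b.name
$$) AS (name_a agtype, name_b agtype);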

Basic Terms in Graph Theory

Before diving into implementation, let’s establish some fundamental concepts:

Vertices (Nodes) are the fundamental units or points in a graph. You can think of them like tables in relational databases. They represent entities, objects, or data items—for example, individuals in a social network.

Edges (Links/Relationships) are the connections between nodes that indicate relationships

[...]

New PostgreSQL Features I Developed in 2025
Posted by Shinya Kato on 2025-12-25 at 23:00

Introduction

I started contributing to PostgreSQL around 2020. This year I wanted to work harder, so in this post I will go over the PostgreSQL features I developed that were committed in 2025.

I also committed some other patches, but they were bug fixes or small documentation changes. Here I cover the ones that seem most useful.

These are mainly features in PostgreSQL 19, now in development. They may be reverted before the final release.

Added documentation recommending default psql settings when restoring pg_dump backups

When you restore a dump file made by pg_dump with psql, you may get errors if psql is using non-default settings (I saw it with AUTOCOMMIT=off). The change is only in the docs. It recommends using the psql option -X (--no-psqlrc) to avoid reading the psql config file.

For psql config file psqlrc, see my past blog:
https://zenn.dev/shinyakato/articles/543dae5d2825ee

Here is a test with \set AUTOCOMMIT off:

-- create a test database
$ createdb test1

-- dump all databases to an SQL script file
-- -c issues DROP for databases, roles, and tablespaces before recreating them
$ pg_dumpall -c -f test1.sql

-- restore with psql
$ psql -f test1.sql
~snip~
psql:test1.sql:14: ERROR:  DROP DATABASE cannot run inside a transaction block
psql:test1.sql:23: ERROR:  current transaction is aborted, commands ignored until end of transaction block
psql:test1.sql:30: ERROR:  current transaction is aborted, commands ignored until end of transaction block
psql:test1.sql:31: ERROR:  current transaction is aborted, commands ignored until end of transaction block
~snip~

DROP DATABASE from -c cannot run inside a transaction block, so you get these errors. Also, by default, when a statement fails inside a transaction block, the whole transaction aborts, so later statem

[...]
