Latest Blog Posts

All Your GUCs in a Row: backtrace_functions
Posted by Christophe Pettus in pgExperts on 2026-05-15 at 01:00
Debug PostgreSQL errors by capturing C-level stack traces for specific internal functions.

Welcome to ORDER BY jungle
Posted by Radim Marek on 2026-05-15 at 00:00

SQL is fun and not at all boring. The latest article by Markus Winand, "Order by Has Come a Long Way", sent me on quite a journey.

First, set up a table called nums with one integer column and four rows:

CREATE TABLE nums (a int);
INSERT INTO nums VALUES (0), (1), (2), (3);

Try to guess what these two queries return.

SELECT -a AS a FROM nums ORDER BY a;
SELECT -a AS a FROM nums ORDER BY -a;

Most of us would guess the same rows in a different order. The actual answer is that they produce exactly the same rows in exactly the same order. By the same logic you might expect

SELECT a AS c FROM nums ORDER BY -c;

to do exactly the same. Except it does not. It errors with column "c" does not exist despite the alias being right there in the statement. Welcome to ORDER BY jungle.
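For reference, here is the whole puzzle in one place, with the outcomes the post describes spelled out as comments:

SELECT -a AS a FROM nums ORDER BY a;   -- rows: -3, -2, -1, 0
SELECT -a AS a FROM nums ORDER BY -a;  -- rows: -3, -2, -1, 0 (identical)
SELECT a AS c FROM nums ORDER BY -c;   -- ERROR: column "c" does not exist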

Names and expressions are not the same

If you ask most developers how ORDER BY works, they will say "you put a column name there and it sorts the rows". In 99% of queries that is exactly what happens. People sort by created_at or id and move on.

But ORDER BY accepts two different kinds of things. Strictly speaking, three, if you count ORDER BY 1, but positional references are their own can of worms and out of scope for this post.

SELECT created_at, user_id FROM events ORDER BY created_at;
SELECT created_at, user_id FROM events ORDER BY date(created_at);

Both feel natural. And the thing nobody tells you is that they go down completely different code paths in the parser. Different scope rules, different lookups, different error messages. The first looks at your SELECT list. The second looks at your FROM clause. They never look at the same place.
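The split is easy to demonstrate: wrap the alias in a function call and the name path is no longer available. Using the same events table, this pair behaves very differently:

SELECT created_at AS c, user_id FROM events ORDER BY c;        -- fine: bare name, resolved against the SELECT list
SELECT created_at AS c, user_id FROM events ORDER BY date(c);  -- ERROR: column "c" does not exist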

Same answer, two different sorts

Look at the first query again.

SELECT -a AS a FROM nums ORDER BY a;

You wrote ORDER BY a. A bare identifier, no decoration. Postgres goes down the name path. It scans the SELECT list for something called a, finds the aliased column -a AS a, and sorts by its output values. The negated values are -3, -2, -1, 0, ascending is -3, -2, -1,

[...]

Eleven CVEs Walk Into a Release
Posted by Christophe Pettus in pgExperts on 2026-05-14 at 18:00
PostgreSQL 18.4, 17.10, 16.14, 15.18, and 14.23 are out as of May 14, 2026. The release fixes eleven security issues and more than sixty bugs. That is not a typo. Eleven CVEs is the largest single-release security batch I can remember, and three of them are CVSS 8.8 with practical exploitation pa…

PARTITION MERGE/SPLIT, Once More With Locking
Posted by Christophe Pettus in pgExperts on 2026-05-14 at 15:00
PostgreSQL 19 brings back MERGE PARTITIONS and SPLIT PARTITION—but simpler and safer than the first attempt.

Prairie Postgres May meetup: the Mythical data Warehouse
Posted by Henrietta Dombrovskaya on 2026-05-14 at 12:49

Yesterday, we had our first meetup at our new venue, which we hope will become our permanent home: the Chicago Innovations Center at 1 W. Monroe. We had the pleasure of hosting Elizabeth Christensen from Snowflake, who delivered the talk "pg_lake: Unifying transactional and analytical data with Postgres".

I find the topic exceptionally valuable, and I was delighted when Elizabeth suggested it. Below are some photos and a presentation recording.

Many thanks to:

  • Chicago Innovations Center and personally, David Dewane, for hosting
  • Elizabeth, for coming and presenting
  • Snowflake for sponsoring pizza
  • Carlos Aranibar for co-hosting
  • Ryan Weisman, Rober Ismo, and Akshay Mestry for essential help before, during, and after the meetup
  • and everyone who came and made it a great event!

I hope to see everyone at PG DATA on June 4-5 and at our next meetup on July 15.

All Your GUCs in a Row: backslash_quote
Posted by Christophe Pettus in pgExperts on 2026-05-14 at 01:00
A 2006 SQL injection vulnerability and multibyte character encodings created `backslash_quote`, a GUC parameter that remains in PostgreSQL for backward…

Postgres May 2026 Security Update: 11 CVEs, All Versions Affected
Posted by Robins Tharakan on 2026-05-13 at 15:28

It's that time again. The upcoming Postgres v18.4 release (along with minor releases for all major versions) has dropped some serious hints in the git logs, and it's bringing a significant payload of CVE-tagged patches. As a seasoned Postgres end-user and an erstwhile DBA, whenever I see a flurry of high-severity security commits, I immediately start recommending that customers begin planning their patching cycles.

(Note: As these patches are hot off the press, official CVSS scores and detailed advisories from NVD and postgresql.org are still pending. The severity scores listed below are estimates at best. I'll update this post with links to the NVD and the postgresql.org site as soon as they're available.)

Tom Lane: Last-minute updates for release notes.
Security: CVE-2026-6472, CVE-2026-6473, CVE-2026-6474, CVE-2026-6475, CVE-2026-6476, CVE-2026-6477, CVE-2026-6478, CVE-2026-6479, CVE-2026-6575, CVE-2026-6637, CVE-2026-6638

Let's take a quick look at the CVE list:

The Memory and Overflow Fixes

  • CVE-2026-6472 (Estimated Severity: Medium) Missing CREATE Privilege Check on Multirange Types. (Backpatched to: 14, 15, 16, 17, 18)
  • CVE-2026-6473 (Estimated Severity: High) The Memory Overflow Ghost. (Backpatched to: 14, 15, 16, 17, 18)
  • CVE-2026-6474 (Estimated Severity: Medium) Unsafe pg_strftime() handling. (Backpatched to: 14, 15, 16, 17, 18)
  • CVE-2026-6477 (Estimated Severity: High) Frontend Large Object Buffer Overruns. (Backpatched to: 14, 15, 16, 17, 18)

The Replication and Backup Vulnerabilities

  • CVE-2026-6475 (Estimated Severity: Critical) Path Traversal in pg_basebackup & pg_rewind. (Backpatched to: 14, 15, 16, 17, 18)
  • CVE-2026-6476 (Estimated Severity: High) SQL Injection in pg_createsubscriber. (Backpatched to: 17, 18; pg_createsubscriber was introduced in v17)
  • CVE-2026-6638 (Estimated Severity: High) SQL Injection in Logical Replication. (Backpatched to: 16, 17, 18)

The Cryptograp

[...]

Twenty Years in pgcrypto
Posted by Christophe Pettus in pgExperts on 2026-05-13 at 15:00
A heap buffer overflow in pgcrypto's OpenPGP code lurked for two decades—until a December 2025 exploit made it real.

pg_statviz 1.0 released with AI-powered analysis
Posted by Jimmy Angelakos on 2026-05-13 at 12:37

pg_statviz logo

I'm excited to announce release 1.0 of pg_statviz, the minimalist extension and utility pair for time series analysis and visualization of PostgreSQL internal statistics.

This is a major release that introduces a new optional capability: AI-powered analysis. With the new --ai flag, each chart's data and PNG are sent to a vision-capable LLM along with Senior PostgreSQL DBA-level context, and the model produces a [HEALTHY] / [WARNING] / [CRITICAL] verdict, a short interpretation, and a concrete remediation step for any [WARNING] or [CRITICAL] finding. Reports are written as HTML pages, created alongside the chart PNGs, with a top-level index.html synthesising the per-module findings into a single summary.

AI report sample

The new features:

  • Three AI providers, one flag: --ai claude for Anthropic Claude (the default), --ai gemini for Google AI Studio's free-tier Gemini 2.5 Flash, and --ai local for an Ollama instance running a vision-capable model such as gemma4:e4b (the recommended local default). All three are entirely optional: pg_statviz still installs and runs with zero AI dependencies, and the new [ai] extra (pip install pg_statviz[ai]) pulls in only what you ask for.
  • Per-module HTML reports embed each chart PNG and render the LLM's markdown analysis with status badges and styled paragraphs. A new top-level index.html report aggregates per-chart verdicts and asks the model to synthesise them, identifying correlated patterns across charts (for example, a WAL spike alongside long-running sessions) and surfacing the single most important next action.
  • Deterministic rules engine runs checks on the actual numeric data before the LLM call. Findings are injected into the prompt as additional context, and a severity floor enforces that the final verdict can never be downgraded below the worst rule finding, so an overly optimistic LLM can't quietly hide a real problem.
  • Configuration-aware prompts: the relevant pg_settings for each chart (shared_buffers and bgwriter_* for buffers
[...]

CNPG Recipe 24 - Migrating from Crunchy PGO to PostgreSQL 18 with CloudNativePG
Posted by Gabriele Bartolini in EDB on 2026-05-13 at 10:21

A step-by-step guide to migrating a PostgreSQL 17 cluster managed by Crunchy PGO v6 to PostgreSQL 18 under CloudNativePG. Two paths are covered: a fully declarative offline migration using CloudNativePG’s built-in pg_dump import, and an online migration using native PostgreSQL logical replication for a near-zero-downtime cutover.

Contributions for week 18, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-05-13 at 06:10

PGConf Belgium took place on 5 May 2026, organized by Wim Bertels, An Vercammen, and Grégory Gioffredi, who also served on the talk selection team.

Speaker:

  • Jan Karremans
  • Boriss Mejias
  • Dwarka Rao
  • Emrah Becer
  • Robert Treat
  • Franck Pachot
  • Mohsin Ejaz
  • Afroditi Loukidou
  • Gianni Ciolli
  • Bruce Momjian
  • Matt Cornillon
  • Xavier Fisher
  • Thijs Lemmens
  • Josef Machytka

Claire Giordano and Aaron Wislang hosted and published a new episode of the Talking Postgres podcast on May 6, 2026: "From MemSQL to HorizonDB, an engineer's journey" with Adam Prout.

On May 7, 2026, Andreas Scherbaum, Daria Aleshkova, Sergey Dudoladov, and Oleksii Kliukin organized the Berlin PostgreSQL May Meetup. Robert Treat and Celeste Horgan spoke at this event.

On May 7, 2026, Jimmy Angelakos organized the PostgreSQL Edinburgh Meetup May 2026. River MacLeod, Jim Gardner, and Jimmy Angelakos each delivered a talk.

All Your GUCs in a Row: backend_flush_after
Posted by Christophe Pettus in pgExperts on 2026-05-13 at 01:00
PostgreSQL's complicated relationship with the Linux page cache spawns four GUCs to manage writeback—and backend_flush_after is the conservative one.

Snowflake Postgres, Lakebase, HorizonDB: Picking the Lock-In You Want
Posted by Christophe Pettus in pgExperts on 2026-05-12 at 15:00
Three major cloud platforms just shipped Postgres with custom storage engines and scale-out architectures.

Managed Postgres, Examined: Google Cloud SQL for PostgreSQL
Posted by Christophe Pettus in pgExperts on 2026-05-12 at 13:00
Google's managed PostgreSQL returns to first principles: a conventional instance on a VM with a regional disk, plus a distinctive data cache on Enterprise Plus…

Two projects, one mission - hackorum and pginbox join forces
Posted by Kai Wagner in Percona on 2026-05-12 at 11:15

Last week, Zsolt and I jumped on a call with someone who had been building something remarkably similar to what we had been working on, completely independently. That someone is Jack Bonatakis, the creator of pginbox.dev, and that call turned into one of the most energizing conversations we’ve had since launching hackorum.dev.

Two builders, one problem

When we launched Hackorum back in January, the goal was simple but important: make the pg-hackers mailing list actually readable. The list is the heartbeat of PostgreSQL core development, patches are proposed, debated, iterated on, and committed entirely through it. But the interface? Decades-old email threads. Dense, fast-moving, and not exactly welcoming to newcomers or even experienced contributors trying to manage the volume.

All Your GUCs in a Row: autovacuum_worker_slots
Posted by Christophe Pettus in pgExperts on 2026-05-12 at 01:00
PostgreSQL 18 splits autovacuum configuration to finally let you tune worker concurrency without restarting.

ParadeDB is Officially on Render
Posted by Ming Ying in ParadeDB on 2026-05-12 at 00:00
Deploy ParadeDB on Render with one click. Full-text search, vector search, and hybrid search over Postgres — now available on your favorite cloud platform.

What’s New in pg_clickhouse
Posted by David Wheeler on 2026-05-11 at 20:24

Bit of a news catchup on the pg_clickhouse project.

What’s New

First up, a couple weeks ago the ClickHouse Blog published What’s New in pg_clickhouse, in which I covered various improvements to the extension:

We’ve been gratified by the community reception of pg_clickhouse, the extension to query ClickHouse databases from Postgres. Recent uptake generated a ton of feedback, which we’ve been diligently addressing in the last few releases. These changes follow our constant mantra for pg_clickhouse: pushdown, pushdown, pushdown! Let’s take a quick tour.

It includes working pushdown examples for JSONB accessors, SQL value functions like CURRENT_TIMESTAMP, and array functions like array_cat() and array_to_string(). It wraps up with a demonstration of HTTP result set streaming, with a nice bar chart for the before and after (spoiler: pg_clickhouse's HTTP driver became far more memory-efficient).

v0.3.0

But that’s not all. Today we released pg_clickhouse 0.3.0. Nothing drives improvements like customer issues, and v0.3.0 features a slew of them, including:

  • Mapping for the ClickHouse JSON type to the PostgreSQL JSONB type in the binary driver; it was already supported for the HTTP driver.

  • Support for mapping the Postgres JSON type to the ClickHouse JSON type. In general JSONB better matches ClickHouse JSON semantics, but we wanted to support the obvious alternative.

  • Pushdown for the Postgres to_char(timestamp[tz], fmt) function to the ClickHouse formatDateTime() function for formats that map to binary-compatible equivalents: YYYY, MM, DD, DDD, HH24, HH12, HH, MI, SS, Q, Mon, Dy, AM/PM, plus lowercase variants.

  • Support for pushing down functions from the new re2 extension, which provides ClickHouse-compatible RE2-backed regular expression functio

[...]

SSL in PostgreSQL
Posted by SHRIDHAR KHANAL in Stormatics on 2026-05-11 at 15:09

A beginner’s guide to encrypting your database connections

“’SSL is enabled’ and ‘SSL is actually working’ are two very different things.”

1. What is SSL, and why does a database need it?

SSL stands for Secure Sockets Layer. Its successor is TLS (Transport Layer Security), but in the PostgreSQL world, and in most documentation, people still call it SSL out of old habit. Don’t let that confuse you. When someone says “SSL” in a Postgres context, they mean modern TLS-based encryption.

Here’s the problem it solves. By default, when your application connects to PostgreSQL, everything travels across the network in plain text. Usernames. Passwords. Every query you run. Every row of data that comes back. If anyone can intercept that traffic (someone on the same network, a compromised internal service), they can read all of it. A basic packet sniffer is enough. No special skills needed.

SSL wraps that connection in encryption before any data is exchanged. What travels on the wire becomes unreadable noise to anyone who doesn’t hold the session keys.

ℹ Note: Even inside a private network or VPC, this matters. The “it’s an internal network” line doesn’t protect you from lateral movement attacks, where an attacker is already inside the perimeter.

2. How does SSL actually work?

When a client connects to PostgreSQL with SSL, before any database traffic is exchanged, this sequence happens: The client opens a plain TCP connection and signals it wants SSL. The server sends its certificate — a signed document that proves the server’s identity and contains its public key. The client checks whether that certificate was signed by a Certificate Authority it trusts, and whether the hostname in the certificate matches what it is connected to. If both checks pass, both sides negotiate a cipher and derive a shared session key. After that, all PostgreSQL traffic — authenticat

[...]
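The certificate and hostname checks described above are exactly what libpq's sslmode connection parameter controls, and they are what separate "SSL is enabled" from "SSL is actually working". A quick illustration (hostname and certificate path are placeholders):

# Encrypted, but the server certificate is never verified:
psql "host=db.example.com dbname=app sslmode=require"

# Encrypted, CA chain verified, and hostname must match the certificate:
psql "host=db.example.com dbname=app sslmode=verify-full sslrootcert=root.crt"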

The wal_level You Set Is Not the wal_level You Get
Posted by Christophe Pettus in pgExperts on 2026-05-11 at 15:00
PostgreSQL 19 finally lets wal_level adapt dynamically to your actual replication slots, eliminating the always-on WAL cost of logical standby insurance.

Making JSONB More Queryable with Generated Columns
Posted by Richard Yen on 2026-05-11 at 06:00

Introduction

Over the past year, I’ve worked in a handful of contexts managing large volumes of data stored as JSONB in PostgreSQL. The scenario is common: users appreciate the flexibility of a document-oriented storage model, avoiding the need to predefine schemas or constantly migrate table structures as their data requirements evolve. JSONB documents can be deeply nested with numerous optional fields, and they scale to hundreds of kilobytes per record without issue. However, when the time comes to query these documents – filtering by user ID, event type, timestamps, or nested action properties – the queries can become slow and/or cumbersome to work with.

The problem I want to address is: “How do we make searching JSONB data more efficient without breaking apart our documents or forcing it into columns in a relational database?” There are several approaches available in Postgres, each with different tradeoffs. I hope to shed some light on those approaches in this article.

The Setup

I created a basic, no-frills table for the sake of this test:

CREATE TABLE events (
    id BIGSERIAL PRIMARY KEY,
    data JSONB NOT NULL
);

Here's the document shape I used for testing and writing this post -- it's representative of the event logs and audit trails I've encountered: a mix of primitive fields, nested objects, and metadata that accumulates over time.

-- Representative JSONB document
{
  "user_id": 5234,
  "event_type": "event_42",
  "timestamp": 1712341200,
  "session_id": "sess_abc123...",
  "ip_address": "192.168.1.42",
  "action": {
    "type": "click",
    "target_id": 87654,
    "coordinates": {"x": 512, "y": 768},
    "duration_ms": 1234
  },
  "device": {
    "type": "mobile",
    "os": "iOS",
    "screen_width": 1920,
    "screen_height": 1080
  },
  "performance": {
    "page_load_time": 1234,
    "dns_lookup": 123,
    "tcp_connection": 234,
    "server_response": 876
  },
  "custom_fields": { ... }
}

The queries that matter are straightforward equality and range filters on known

[...]
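As the title suggests, one of the approaches the post is heading toward is generated columns. A sketch of that technique (column choices assumed from the document shape above): hot JSONB keys can be promoted to typed, indexable columns without changing the application's write path.

ALTER TABLE events
    ADD COLUMN user_id    bigint GENERATED ALWAYS AS ((data->>'user_id')::bigint) STORED,
    ADD COLUMN event_type text   GENERATED ALWAYS AS (data->>'event_type') STORED;

CREATE INDEX events_user_type_idx ON events (user_id, event_type);

-- Equality filters can now use a plain btree index instead of scanning JSONB:
SELECT id FROM events WHERE user_id = 5234 AND event_type = 'event_42';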

All Your GUCs in a Row: autovacuum_work_mem
Posted by Christophe Pettus in pgExperts on 2026-05-11 at 01:00
autovacuum_work_mem sets the maximum memory each autovacuum worker may use for tracking dead tuple identifiers (TIDs) during a vacuum. Default is -1, which means “inherit from maintenance_work_mem.” Context is sighup. The parameter exists so that autovacuum’s memory consumption can be tuned indep…

All Your GUCs in a Row: autovacuum_vacuum_scale_factor and autovacuum_vacuum_threshold
Posted by Christophe Pettus in pgExperts on 2026-05-10 at 01:00
Autovacuum's most powerful tuning lever: the scale factor that determines when dead tuples trigger a vacuum. On large tables, the 20% default waits too long.

Strong views on PostgreSQL VIEWs
Posted by Radim Marek on 2026-05-10 at 00:00

VIEWs should be the cleanest abstraction SQL, and therefore Postgres, has on offer. I love the concept. The promise of decoupling logical intent from physical storage is perfect on paper. In practice, few things in the database world trigger such a heated debate or carry as much historical baggage. VIEWs mix big promises with false hopes, and the promises rarely survive contact with production.

The appeal is straightforward. Abstract "active customer" once and reuse it everywhere. Every query, report and dashboard uses the same definition. The "active customer" then becomes the foundation of a "customer orders" view, which in turn powers an operational "customer summary" view.

-- layer 1: who counts as an active customer?
CREATE VIEW active_customers AS
SELECT c.*
FROM customers c
WHERE c.deleted_at IS NULL
  AND c.status = 'active'
  AND c.last_login_at > now() - interval '90 days';

-- layer 2: active customers with their recent orders
CREATE VIEW customer_orders AS
SELECT
    ac.*,
    o.id         AS order_id,
    o.total_cents,
    o.created_at AS ordered_at,
    o.status     AS order_status
FROM active_customers ac
LEFT JOIN orders o ON o.customer_id = ac.id
WHERE o.created_at > now() - interval '12 months'
   OR o.created_at IS NULL;

-- layer 3: one row per customer, ready for the dashboard
CREATE VIEW customer_summary AS
SELECT
    co.id,
    co.email,
    co.name,
    COUNT(co.order_id)                                   AS orders_12mo,
    COALESCE(SUM(co.total_cents), 0)                     AS revenue_12mo_cents,
    MAX(co.ordered_at)                                   AS last_order_at,
    COUNT(*) FILTER (WHERE co.order_status = 'refunded') AS refunds_12mo
FROM customer_orders co
GROUP BY co.id, co.email, co.name;

Each layer has one job. "Active customer" is defined exactly once - if marketing changes the ninety-day rule tomorrow, it is one line in one place, and the dashboard query collapses to SELECT * FROM customer_summary WHERE id = $1.

VIEWs also have the potential to be a real se

[...]

All Your GUCs in a Row: autovacuum_vacuum_max_threshold
Posted by Christophe Pettus in pgExperts on 2026-05-09 at 01:00
PostgreSQL 18 finally fixes the autovacuum formula that left billion-row tables waiting for 200M dead tuples.

A Field Guide to Alternative Storage Engines for PostgreSQL
Posted by Christophe Pettus in pgExperts on 2026-05-08 at 17:30
Six years after PostgreSQL shipped the table access method API, the alternative storage engine ecosystem is thriving—but messier than early predictions…

pg_lake vs Lakebase: Two Very Different Things Called “Postgres + Lakehouse”
Posted by Christophe Pettus in pgExperts on 2026-05-08 at 15:00
Snowflake's pg_lake and Databricks' Lakebase both wrap PostgreSQL for lakehouse workloads, but they're nearly opposite architectures.

No Compiler Required: Writing SQL-Only Postgres Extensions
Posted by Shaun Thomas in pgEdge on 2026-05-08 at 12:11

Recently at Postgres Conference 2026 in San Jose, I presented a talk called Let's Build a Postgres Extension! Since that entire presentation was primarily focused on writing a C extension while exploring the Postgres source code, I only mentioned pure SQL extensions as an aside. But what's more likely in the Postgres community in general: C devs, or people who know SQL?

It turns out that you can do a lot with functions, triggers, views, tables, and various other Postgres-native capabilities. The extension system doesn't care whether the contents are compiled C or plain SQL. It just wants a control file, a SQL script, and an optional Makefile to help with installation. So let's build a relatively trivial extension entirely in SQL.

What Do We Want?

First things first: we need a plan. What should this extension actually do? I wrote about blocking DDL a while back with a C extension, so why not revisit that example with SQL?

This being pure SQL, there are other handy elements we can add with very little effort, so how about:
  • A setting to enable or disable the extension.
  • A setting to allow or block superusers from executing DDL.
  • A role that allows members to bypass the DDL restriction.
  • A function to add users to the bypass role.
  • A function to remove users from the bypass role.
  • A view to see which users are in the bypass role.
  • An event trigger to actually block DDL attempts.
Rather than a simple event trigger to prevent DDL execution, we are building a kind of DDL execution management suite. That should hopefully demonstrate just how capable a purely SQL implementation can be.
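As a taste of the core mechanism, here is a minimal sketch of the event trigger piece in PL/pgSQL (all names here are invented for illustration; this is not the extension the post builds):

CREATE ROLE ddl_bypass NOLOGIN;

CREATE FUNCTION block_ddl() RETURNS event_trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- current_setting(..., true) returns NULL instead of raising when the GUC is unset
    IF coalesce(current_setting('blockddl.enabled', true), 'on') = 'on'
       AND NOT pg_has_role(current_user, 'ddl_bypass', 'MEMBER')
    THEN
        RAISE EXCEPTION 'DDL is currently blocked; request ddl_bypass membership';
    END IF;
END;
$$;

CREATE EVENT TRIGGER block_ddl_guard
    ON ddl_command_start
    EXECUTE FUNCTION block_ddl();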

Three Files and a Dream

Every Postgres extension, regardless of complexity, boils down to the same basic structure:
  • A control file to describe the extension.
  • A SQL script to create the tables, views, functions, etc.
  • An optional Makefile to copy the SQL script and control file to the right place. Unlike a C project, there's no build step for a SQL-only extension bec
[...]

All Your GUCs in a Row: autovacuum_vacuum_insert_scale_factor and autovacuum_vacuum_insert_threshold
Posted by Christophe Pettus in pgExperts on 2026-05-08 at 01:00
PostgreSQL 13 added insert-triggered autovacuum to solve a critical problem: append-only tables never vacuumed, breaking index-only scans and delaying tuple…

Tracing PostgreSQL Using eBPF and Hardware Breakpoints
Posted by Jan Kristof Nidzwetzki on 2026-05-08 at 00:00

Hardware breakpoints can trigger eBPF programs when specific memory addresses are accessed, leveraging CPU hardware support for low overhead. By utilizing these hardware breakpoints, we can efficiently monitor PostgreSQL’s internal variable updates, such as transaction ID generation and OID assignment. In this post, we will discuss what hardware breakpoints are, whether they have less overhead than uprobes, and how to answer questions like “How many transactions are being executed per second?” or “Which backend is consuming the most OIDs?” with bpftrace.

In a previous blog post, I discussed how to use eBPF, uprobes/uretprobes, and bpftrace to monitor PostgreSQL’s internal functions, such as vacuum. uprobes and uretprobes trigger eBPF code in the Linux kernel when a function in user space is entered or exited. Even though uprobes and uretprobes have very low overhead, they still require instrumenting the function entry or exit with a software interrupt. That overhead is especially relevant for functions that are called very frequently. In contrast, hardware breakpoints use CPU hardware features to monitor specific memory addresses and trigger a real hardware interrupt when the monitored address is accessed. Therefore, they also let us catch all updates to a specific variable, even if it is updated in multiple functions, without instrumenting every function that touches it.

How Do Uprobes Work Under the Hood?

Uprobes and uretprobes instrument the function entry or exit by replacing the first few instructions with a software (int3) interrupt. When the function is called, the CPU executes the software interrupt, triggering a CPU mode switch that enables the eBPF program to run.

When the eBPF program finishes, the kernel needs to execute the instruction that was replaced with int3. This is called out-of-line execution and requires the kernel to run the original instruction separately, which adds additional overhead.

The instruction replacement can be observed in gdb by inspecting the first few byte

[...]
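As a taste of where this leads: a hardware watchpoint in bpftrace attaches to a specific address in a specific process. The address below is a placeholder; in practice you resolve the variable's address from the running backend's symbol table and memory map.

# Count writes to an 8-byte variable at a given address in backend $PID
sudo bpftrace -p $PID -e 'watchpoint:0x7f0000001000:8:w { @writes = count(); }'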



Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.