Latest Blog Posts

All Your GUCs in a Row: archive_command
Posted by Christophe Pettus in pgExperts on 2026-04-26 at 01:00
Last post’s deferred party has arrived. archive_command is how WAL segments leave the primary and travel somewhere durable — the substrate on top of which point-in-time recovery, warm standbys, and every serious backup tool are built. It is also, in the classic formulation, a shell command …

All your GUCs in a Row: archive_cleanup_command
Posted by Christophe Pettus in pgExperts on 2026-04-25 at 01:00
Alphabetical order delivers our first casualty. archive_cleanup_command is a standby-server knob that exists entirely to tidy up after archive_command, which the alphabet insists on deferring until the next post. So we will describe how to sweep up a party we have not yet thrown. The briefest-pos…

Postgres Goes to the Lake, Two Ways
Posted by Christophe Pettus in pgExperts on 2026-04-24 at 13:00
Last year’s acquisitions have now shipped products, and for the first time it is possible to compare the Snowflake and Databricks “Postgres-in-the-lakehouse” strategies as real things rather than as marketing decks. The acquisitions: Snowflake bought Crunchy Data in June 2025 (a…

Huge Pages, End to End
Posted by Christophe Pettus in pgExperts on 2026-04-24 at 13:00
The previous post on the Linux 7.0 pgbench regression ended with the same instruction every other Postgres performance post ends with: set huge pages. This post is the long version. If you have read the Postgres docs on huge_pages and you’re still not completely sure what /proc/meminfo is t…

The Scaling Ceiling: When One Postgres Instance Tries to Be Everything
Posted by Shaun Thomas in pgEdge on 2026-04-24 at 11:36

There's a persistent belief in the database world that vertical scaling solves all problems. Need more throughput? Add CPUs. Running out of cache? More RAM. Queries hitting disk? Higher IOPS. It's a comforting philosophy because it's simple, and for a surprisingly long time, it works. A single beefy Postgres instance can handle an enormous amount of punishment before collapsing under the strain.

But there's a ceiling up there, and it's not made of hardware. Postgres was designed as a single-instance database engine, and many of its internal structures are shared across every database the instance contains. These shared resources are rarely concerning in a single modest instance. But with twenty databases running a mixture of heavy OLTP workloads and analytical queries, or sitting mostly idle, the shared nature of these internals becomes very relevant.

Let's talk about the barriers these over-provisioned instances eventually hit, with references to the Postgres source code itself for good measure. Some of these are well known, while others are the kind of thing that strikes suddenly at 2 AM when all the monitoring dashboards turn red simultaneously.

One Pool to Rule Them All

The shared_buffers parameter is probably the first tunable every Postgres administrator encounters. It controls the size of Postgres's own buffer cache, the region of shared memory where frequently accessed disk pages live so they don't need to be fetched from storage on every read. The documentation suggests starting at 25% of system RAM, and that's reasonable advice for a single-database instance. Most experts on the subject agree.

It's easy to forget that this allocation is instance-wide. The source code bears this out, as the buffer pool gets allocated once at startup as a flat array of pages in shared memory. There is no per-database partitioning, no priority system, no reservation mechanism. Every database on the instance competes for the same pages in the same pool. An analytics query scanning a 500GB table in one database will happily evict cached page[...]
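One way to watch that competition directly is the stock pg_buffercache contrib extension, which exposes one row per shared buffer. A minimal sketch (requires superuser or the pg_monitor role; the 8192-byte block size is the default build setting):

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Count how many shared buffers each database currently occupies.
-- Buffers with no database (free or catalog-shared) show as <unused>.
SELECT coalesce(d.datname, '<unused>') AS database,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS size
FROM pg_buffercache b
LEFT JOIN pg_database d ON b.reldatabase = d.oid
GROUP BY 1
ORDER BY 2 DESC;
```

Run it before and after a large sequential scan in one database and you can see that database's share of the pool grow at its neighbors' expense.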

Finding the centre of Jamaica.
Posted by Rhys Stewart on 2026-04-24 at 05:10
Do family meetups always devolve into SQL?

All Your GUCs in a Row: application_name
Posted by Christophe Pettus in pgExperts on 2026-04-24 at 01:00
Most GUCs in this series will be operationally irrelevant to most readers. This one is not. application_name is the single cheapest piece of observability infrastructure PostgreSQL ships, and an astonishing number of production databases are running with it unset or stuck at a client library&rsqu…

PREEMPT_NONE Is Dead; Your Postgres Probably Doesn’t Care
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 19:00
A benchmark came out of AWS earlier this month showing PostgreSQL throughput on Linux 7.0 dropping to 0.51x what the same workload produced on Linux 6.x. The Phoronix headline wrote itself. Hacker News did what Hacker News does. By the end of the week, I had been asked by three separate clients w…

Postgres Conference 2026
Posted by Dave Stokes on 2026-04-23 at 14:46

Postgres Conference 2026 was held in San Jose, California, and once again, I was lucky to be invited to speak. This is a great show for the 'hallway track' where you talk to members of the community and discover many interesting things.

I had a brief conversation with two early contributors to the original PostgreSQL project. One said he was surprised by how much of his code was still in the code base after FORTY YEARS. 

AI and MCP are fully interlaced in many projects. The one that struck me the most was pgEdge's AI DBA Workbench. It features three tiers of detection and an agentic AI assistant to watch it all. I was talking with CEO David Mitchell about it and told him about my past frustrations with Percona's PMM. He then told me the author of the AI DBA Workbench also wrote PMM. And pgAdmin 4.

Observability seems to be the hot underlying theme this year. Take a look at pg_collector. It is an SQL script that gathers valuable database information and consolidates it into a single HTML file, providing a convenient way to view and navigate the report's sections. One issue with PostgreSQL management is that it is a very complex product with about 400 different settings, and nobody can know in depth how they all work. pg_collector covers a lot of the territory, reporting on:

  • duplicate indexes
  • invalid indexes
  • unused indexes
  • autovacuum parameters
  • sequences with less than 10% of their values remaining
  • orphaned prepared transactions
  • connections without SSL
  • excessive logging parameters
  • the track_counts parameter
  • the synchronous_commit parameter
  • invalid databases
  • tables with more than 20% dead rows
  • transaction ID (TXID) wraparound
  • tables with autovacuum_enabled=off at the table level
  • inactive replication slots
  • logical replication spill files
  • outdated extensions
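Several of those checks are one-liners against the standard statistics views. As a sketch of the "more than 20% dead rows" check (pg_stat_user_tables is a stock view; the 20% threshold mirrors the report, not pg_collector's exact query):

```sql
-- Tables where dead tuples exceed 20% of all tuples: a common bloat
-- signal and a hint that autovacuum is not keeping up.
SELECT schemaname,
       relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup / (n_live_tup + n_dead_tup), 1) AS dead_pct
FROM pg_stat_user_tables
WHERE n_live_tup + n_dead_tup > 0
  AND n_dead_tup > 0.2 * (n_live_tup + n_dead_tup)
ORDER BY dead_pct DESC;
```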

For more on observability, check out Ryan Booz's session from the second day, the one not about beekeeping.

This is being written on the morning of the last of three days. I needed to capture some of the highlights of the show before th

[...]

Async I/O in PostgreSQL 19: The Year After
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 13:00
PostgreSQL 18 shipped asynchronous I/O. The dominant flavor on Linux was io_uring; everything else fell back to a worker pool controlled by io_method=worker. Early benchmarks from pganalyze, Aiven, and Better Stack showed real wins on read-heavy workloads with large sequential scans. They also sh…

All your GUCs in a row: allow_system_table_mods
Posted by Christophe Pettus in pgExperts on 2026-04-23 at 01:00
Here is a GUC that ships with a warning label. The docs, which are normally restrained to the point of parody, state plainly that setting this parameter wrong can cause “irretrievable data loss or seriously corrupt the database system.” When the PostgreSQL docs raise their voice, list…

Give Us Access, Already
Posted by Christophe Pettus in pgExperts on 2026-04-22 at 13:00
If you’re going to hire a PostgreSQL consultant, hire one. That means access to the database. I’m writing this because the “we hired you but you can’t touch the thing” conversation happens at the start of roughly one in four PGX engagements, and I would like to have …

Introducing the AI DBA Workbench: PostgreSQL Monitoring That Diagnoses, Not Just Reports
Posted by Antony Pegg in pgEdge on 2026-04-22 at 05:54

PostgreSQL is dominating the database market, and the monitoring tools haven't noticed.

More teams run Postgres in production every year. More of those deployments are distributed, multi-region, and mission-critical. And the tooling most of those teams rely on was built for a simpler world: a single instance, a handful of threshold alerts, and a senior DBA who can interpret what the graphs mean at 3 AM. That works when you have one cluster and one person who knows where the bodies are buried. It falls apart the moment you scale past either of those constraints.

We built the pgEdge AI DBA Workbench to close that gap, and today it's entering public beta. We think it's the best PostgreSQL monitoring and management platform you've seen, and the rest of this post explains why. It works with any Postgres 14+: local installs, self-hosted enterprise estates, Supabase, Amazon RDS. If you can connect to it, you can monitor it.

Three Services, One Platform

The Workbench is a self-hosted platform that combines three services into a single deployment. A collector gathers metrics from every monitored PostgreSQL instance. An alerter evaluates those metrics against threshold rules and a layered anomaly detection system. A server ties everything together through a web UI, a REST API, and a Model Context Protocol (MCP) endpoint that lets AI tools talk directly to your databases.

It runs on your infrastructure, ships under the PostgreSQL license, and stores nothing outside your network.

34 Probes, Zero Agents

The collector is the foundation. Point it at your PostgreSQL instances (any PostgreSQL 14 or later, not just pgEdge) and it starts pulling metrics across 34 built-in probes covering query performance, replication health, active connections, WAL throughput, vacuum activity, checkpoints, database conflicts, IO statistics, system-level CPU and memory, disk usage, and more.

Two things matter about how collection works. First, the collector connects remotely over standard PostgreSQL connections. There are no agents to install on your datab[...]

All your GUCs in a row: allow_in_place_tablespaces
Posted by Christophe Pettus in pgExperts on 2026-04-22 at 01:00
allow_in_place_tablespaces exists so the PostgreSQL test suite can test replication. That’s it. If you’re reading this as an operator, you will never touch it. But it’s in the alphabet, so here we are. When off (the default), CREATE TABLESPACE requires a LOCATION that points to …

Polish configuration for TSearch
Posted by Hubert 'depesz' Lubaczewski on 2026-04-21 at 21:06
Some time ago someone posted on Reddit that they have problems adding Polish configuration to their PostgreSQL. While checking it, I found some interesting news. And of course figured out how to have Polish configuration… First, the news: apparently in PostgreSQL 19, we will automatically get Polish dictionary, thanks to this commit from 5th of … Continue reading "Polish configuration for TSearch"

Hints, Part 3: Advice, Not Orders
Posted by Christophe Pettus in pgExperts on 2026-04-21 at 13:00
Robert Haas’s pg_plan_advice patch set, proposed for PostgreSQL 19, is where the twenty-year argument from Part 2 has landed — or is trying to. It is not pg_hint_plan brought into core. It is a different thing, with different mechanics, a different scope, and a different answer to the &ldqu…

Waiting for PostgreSQL 19 – Add CONCURRENTLY option to REPACK
Posted by Hubert 'depesz' Lubaczewski on 2026-04-21 at 10:40
On 6th of April 2026, Álvaro Herrera committed patch: Add CONCURRENTLY option to REPACK   When this flag is specified, REPACK no longer acquires access-exclusive lock while the new copy of the table is being created; instead, it creates the initial copy under share-update-exclusive lock only (same as vacuum, etc), and it follows an MVCC … Continue reading "Waiting for PostgreSQL 19 – Add CONCURRENTLY option to REPACK"

How to Use the pgEdge Control Plane: From Zero to Multi-Master and Beyond
Posted by Antony Pegg in pgEdge on 2026-04-21 at 09:37

A couple of months back, the CEO challenged product and marketing to revamp the developer experience on our website in three weeks. I vibe-coded a proof of concept full of "try it now" buttons and interactive guides, the CEO loved it, and then I had to deal with almost every one of those interactive guides being a placeholder card. Engineering was fully booked, and the Control Plane product I needed to write guides for was one I knew inside out at the architecture level but had never personally operated end-to-end through the API.

So I sat down and learned the pgEdge Control Plane the hard way: by using it. What follows is what I found, organized as the guide I wish I'd had when I started. If you're evaluating Control Plane, deploying it for the first time, or trying to understand what Day 2 operations actually look like, this is for you.

What Is the Control Plane?

pgEdge Control Plane is a lightweight orchestrator for PostgreSQL. It manages the full database lifecycle (creation, replication, failover, backup, restore, scaling) through a declarative REST API. You describe the database you want in a JSON spec, POST it, and Control Plane handles the rest: configuration, networking, Spock multi-master replication, Patroni for high availability, pgBackRest for backups. All of it.

The important thing to understand is that setup is only half the story. There are enough tools out there that can get you a running cluster if you know what you're doing. The hard part, the part where most tools leave you on your own, is Day 2. Modifying a running HA cluster. Adding a node to a live distributed database. Performing a rolling upgrade without downtime. Restoring from backup while keeping replication intact across the remaining nodes. That's where the complexity lives, and that's where Control Plane earns its keep.
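To make "describe the database you want in a JSON spec" concrete, here is an illustrative sketch of a declarative spec. Every field name here is hypothetical, invented for illustration; it is not the actual Control Plane schema:

```json
{
  "name": "orders-db",
  "postgres_version": "17",
  "nodes": [
    { "region": "us-east", "role": "primary" },
    { "region": "eu-west", "role": "primary" }
  ],
  "replication": "multi-master",
  "backups": { "tool": "pgbackrest", "schedule": "daily" }
}
```

The point of the declarative shape is that the same document describes both Day 1 (POST it to create the cluster) and Day 2 (edit a field and re-submit, letting the orchestrator reconcile the running system toward the new spec).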

Getting Started: Zero to Multi-Master in Five Minutes

Caveat

I’m not promising that every line of code in here will run as-is. It's real, but as I learned the hard way while building the interactive guid[...]

Finding invisible use-after-free bugs in the PostgreSQL planner
Posted by Andrei Lepikhov in pgEdge on 2026-04-20 at 19:57

On a PostgreSQL build with assertions enabled, run the standard make check-world suite with a small debugging extension called pg_pathcheck loaded. It will report pointers to freed memory in the planner's path lists. Such dangling pointers exist even in core Postgres right now. They are harmless today. But the word today is what makes this worth writing about.
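Assuming pg_pathcheck is preloaded like other planner-instrumentation extensions, the run described above looks roughly like this. The extension name comes from the post; the exact invocation is a sketch using PostgreSQL's standard build and regression machinery:

```shell
# Build with assertions enabled, so Assert() checks fire.
./configure --enable-cassert --enable-debug
make

# pathcheck.conf contains one line:
#   shared_preload_libraries = 'pg_pathcheck'
# --temp-config injects it into the temporary test clusters.
make check-world EXTRA_REGRESS_OPTS="--temp-config=$(pwd)/pathcheck.conf"
```

Any dangling-pointer reports then show up in the regression logs rather than as silent, latent plan corruption.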

A production story

This story started in July 2021. At the time, I was finishing a sharding solution built on top of postgres_fdw. During testing, our engineers sent me an example query that would crash periodically with a SEGFAULT. One look at the plan told me something was very off.

 Append
   ->  Nested Loop
         Output: data_1.b
         Join Filter: (g1.a = g2.a)
         ...
   ->  Materialize
         Output: g2.a, data_2.b
         ->  Hash Join
               Output: g2.a, data_2.b
               Hash Cond: (data_2.b = g2.a)
               ...

The first obvious question: how did a Materialize node end up as a direct child of an Append? The second, more interesting one: how can one Append combine two sources with different tuple widths? No wonder the query was crashing — and to make it worse, the failure was intermittent; sometimes the very same query produced a perfectly reasonable plan.

On paper, the bug shouldn't have been possible: the optimiser doesn't work that way. A few days of debugging pointed the finger at dangling pointers. While building one of the alternative Append paths, the optimiser adds a cheaper path to a child RelOptInfo’s pathlist and evicts the one that was there before. But the previously constructed Append still holds a pointer to that now-freed slot. A step or two later, the allocator hands the exact same chunk back out for a new Path higher up the tree, for, say, an enclosing JOIN. The result is a plan that makes no semantic sense at all.

Where dangling pointers come from

PostgreSQL builds each relation's pathlist incrementally through add_path(). When a newly arrived path dominates an e

[...]

All your GUCs in a row: allow_alter_system
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 18:00
We begin at allow_alter_system, which is both new and politically fraught — so let’s start with a fight. ALTER SYSTEM was added in 9.4 as a quality-of-life improvement: set GUCs from an SQL prompt, have the values written into postgresql.auto.conf, no shell access required. It was immediate…

Hints, Part 2: Features We Do Not Want
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 13:00
For most of PostgreSQL’s history, the official community position on query hints has been a polite version of “no, and stop asking.” The position isn’t subtle. The PostgreSQL wiki maintains a page titled Not Worth Doing, and “Oracle-style optimizer hints” is li…

Understanding PostgreSQL REPACK Through repack.c
Posted by Chao Li in Highgo Software on 2026-04-20 at 08:55

REPACK is a new PostgreSQL 19 feature for physically compacting a table by rewriting it into new storage. Like VACUUM, it deals with the space left behind by dead tuples, but it does so by building a fresh table file instead of mostly cleaning pages in place. Ordinary VACUUM can mark space reusable inside the table and may truncate some empty pages at the end, but it usually cannot fully return bloat to the operating system. REPACK, like VACUUM FULL, rewrites the table into a compact file and swaps that storage into place. The important difference from VACUUM FULL is that REPACK CONCURRENTLY keeps the table usable for most of the operation by copying a snapshot and replaying concurrent changes before a short final lock-and-swap phase.

REPACK code is interesting because it sits between several difficult subsystems: table rewrites, index rebuilds, relfilenode swaps, logical decoding, background workers, snapshots, and lock management. Reading repack.c is a good way to understand how PostgreSQL can physically rebuild a table while preserving the table’s logical identity.

At a high level, REPACK creates a new physical copy of a table, fills it with live tuples from the old table, rebuilds or swaps indexes, and then swaps the physical storage underneath the original relation OID. The user still sees the same table OID, privileges, dependencies, inheritance relationships, and catalog identity, but the heap file is new and compact.

The file comment at the top of repack.c summarizes the two modes:

  • non-concurrent mode: take AccessExclusiveLock, rewrite the table, swap storage, drop the old storage
  • concurrent mode: take ShareUpdateExclusiveLock, copy the table while writes continue, decode concurrent changes from WAL, replay them into the new heap, briefly upgrade to AccessExclusiveLock, apply remaining changes, then swap.

That split drives almost every design choice in the file.
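From the user's side, the two modes are a one-word difference. A sketch of the syntax (the final PostgreSQL 19 grammar may differ in detail; this follows the command and option names described above):

```sql
-- Non-concurrent mode: AccessExclusiveLock for the whole rewrite,
-- behaving much like VACUUM FULL.
REPACK my_table;

-- Concurrent mode: ShareUpdateExclusiveLock while the copy is built
-- and WAL changes are replayed; AccessExclusiveLock only for the
-- brief final catch-up and storage swap.
REPACK (CONCURRENTLY) my_table;
```

Everything below the surface, from logical decoding to the relfilenode swap, exists to make that second form safe.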

Entry Point

The main SQL entry point is ExecRepack() in repack.c. It parses options like VERBOSE, ANALYZ

[...]

Contributions for week 15, 2026
Posted by Cornelia Biacsics in postgres-contrib.org on 2026-04-20 at 08:07

The London PostgreSQL Meetup Group met on April 14, 2026 organized by:

  • Valeria K. (Data Egret)
  • Chris Ellis
  • Alastair Turner
  • Michael Christofides

Monica Sarbu spoke at the San Francisco Bay Area PostgreSQL Meetup Group, which met virtually on April 14, 2026, organized by:

  • Katharine Saar
  • Stacey Haysler
  • Christophe Pettus

On April 15, 2026, the Postgres Meetup for All met virtually, organized by Elizabeth Christensen.

Speakers:

  • Arun Kumar Samayam
  • Phani Kadambari
  • Y V Ravi Kumar

The Barcelona (+Valencia) PostgreSQL User Group met on April 17, 2026

Organizers:

  • Marcelo Diaz
  • Valeria Haro
  • Laura Minen
  • Martín Marqués

Speakers:

  • Marcelo Díaz
  • Javier Vela

On April 15, 2026, Ellyne Phneah delivered a PostgreSQL talk at Digital Marketing Europe 2026.

Community Blog Posts:

  • Andreas Scherbaum wrote PGConf India 2026 - Review

Book Publications:

  • Jesús Espino wrote about his book publication My Book Is Out: Deep Dive into a SQL Query (https://www.linkedin.com/pulse/my-book-out-deep-dive-sql-query-jes%C3%BAs-espino-arcqe/)

The Postgres Performance Triangle
Posted by Richard Yen on 2026-04-20 at 08:00

Everyone who’s gone at least knee-deep into photography knows there’s this idea of the exposure triangle: aperture, shutter speed, and ISO. Depending on what you’re going for artistically, you adjust the three parameters, knowing that there are trade-offs in doing so. After working on a few cases and presenting solutions to customers, I’ve started to think about Postgres performance tuning in a similar way – there are basic parameters that can be tuned, and there are trade-offs for the choices DBAs make:

  • Memory Allocation
  • Disk I/O
  • Concurrency

Each of these (in broad strokes) affects throughput – how much work your system gets done.

Caveat: I know that in the academic sense, “throughput” doesn’t quite capture the balance of these concepts, but please bear with me!

Let’s talk about how each of these three work together with the whole system, and what the trade-offs look like.


Memory Allocation

When you increase memory allocation in Postgres, whether it’s shared_buffers or work_mem, things tend to feel smoother. Most notably, queries spill to disk less often, sorts and joins stay in memory, cache hit rates improve. But there’s a trade-off that’s easy to miss at first, especially with these two parameters. A single complex query can consume multiple chunks of work_mem (see Laetitia’s excellent post about it). Multiply that across concurrent queries, and you begin to see the OS consuming swap space, churning at checkpoints, and even OOM Killer getting invoked. So while more memory can make things faster, it also quietly reduces how much concurrency your system can safely handle.

I’d relate this to aperture – you can throw money at some fast glass, but you also get shallower depth of field (in an annoying way).
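The work_mem half of that trade-off is easy to see with EXPLAIN ANALYZE: the same sort either stays in memory or spills to temporary files depending on the setting. A sketch (the table, row count, and work_mem values here are illustrative):

```sql
-- A throwaway table big enough to make the sort interesting.
CREATE TABLE sort_demo AS
SELECT g AS id, md5(g::text) AS payload
FROM generate_series(1, 1000000) g;

-- With a small work_mem, the sort spills to disk; the plan will
-- report something like "Sort Method: external merge  Disk: ...".
SET work_mem = '1MB';
EXPLAIN (ANALYZE, COSTS OFF)
SELECT * FROM sort_demo ORDER BY payload;

-- With enough work_mem it stays in RAM: "Sort Method: quicksort".
SET work_mem = '256MB';
EXPLAIN (ANALYZE, COSTS OFF)
SELECT * FROM sort_demo ORDER BY payload;
```

The catch, as above, is that the 256MB is per sort operation per backend, which is exactly how generous settings turn into swap churn under concurrency.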


Disk I/O

Disk is where things go when memory isn’t enough, or when an access pattern requires it. We see examples of this in sequential scans, random index lookups, and temporary files from sorts or hashes. Lowering work_mem might increase disk I/O due to so

[...]

Hints, Part 1: The State of the Art Everywhere But Here
Posted by Christophe Pettus in pgExperts on 2026-04-20 at 06:00
pg_plan_advice is expected to land in PostgreSQL 19. That makes this a good moment to look at query hints — what they are, what every other major database does with them, and how PostgreSQL ended up being the obvious outlier. Three parts. This is the first. What a hint is A query hint is an instr…

PostgreSQL MVCC, Byte by byte
Posted by Radim Marek on 2026-04-17 at 13:15

You run SELECT * FROM orders in one psql session and see 50 million rows. A colleague in another session runs the same query at the same moment and sees 49,999,999. Neither of you is wrong, and neither is seeing stale data. You are both reading the same 8KB heap pages, the same bytes on disk.

This is the promise of PostgreSQL's MVCC (Multi-Version Concurrency Control), and it's the reason readers never block writers and writers never block readers. It is also one of the most misunderstood pieces of the storage engine. People know "there are multiple versions of a row" and leave it at that.

The answer to how both sessions can be right lives in eight bytes on every single tuple.

xmin and xmax: the only two XIDs that matter

If you've read Inside the 8KB Page, you know that every tuple starts with a 23-byte header. The first eight bytes of that header are two 32-bit transaction IDs: t_xmin (the transaction that inserted this version) and t_xmax (the transaction that deleted or updated it, or 0 if it's still live).

That's the core of MVCC at the storage level. PostgreSQL does not keep a separate "current version" table. It does not mark rows as latest. Every tuple carries its own two-field timestamp, and when your query reads a page, PostgreSQL has to decide, tuple by tuple, whether your transaction is allowed to see it.

A minimal demo:

CREATE TABLE mvcc_demo (id int, val text);
INSERT INTO mvcc_demo VALUES (1, 'alpha'), (2, 'beta');

Peek at the raw page with pageinspect:

SELECT lp, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('mvcc_demo', 0));
 lp | t_xmin | t_xmax | t_ctid
----+--------+--------+--------
  1 |    100 |      0 | (0,1)
  2 |    100 |      0 | (0,2)
(2 rows)

Two tuples. Both stamped with t_xmin = 100 (the transaction that ran the INSERT) and t_xmax = 0 (nobody has deleted them). At this moment, every session on the database will see these rows, because everyone's snapshot agrees that transaction 100 has committed.

Now open two concurrent sessions. Session A runs an UPDATE without committ

[...]

Enforcing Constraints Across Postgres Partitions
Posted by Shaun Thomas in pgEdge on 2026-04-17 at 05:48

Postgres table partitioning is one of those features that feels like a superpower right up until it isn't. Just define a partition key, carve up data into manageable chunks, and everything hums along beautifully. And what's not to love? Partition pruning in query plans, smaller tables, faster maintenance, easy archiving of old data; it's a smorgasbord of convenience.

Then you try to enforce a unique constraint without including the partition key, and Postgres behaves as if you just asked it to divide by zero. Well... about that.

The Rule Nobody Reads Until It's Too Late

The Postgres documentation on partitioning spells it out pretty clearly in the limitations section: "To create a unique or primary key constraint on a partitioned table, the partition keys must not include any expressions or function calls and the constraint's columns must include all of the partition key columns."

Read that again. The constraint's columns must include all of the partition key columns. Not "should." Not "it would be nice if." Must. And the reasoning is maddeningly justified: each partition maintains its own index, and a local index can only enforce uniqueness within its own partition. Postgres has no concept of a global index that spans all partitions simultaneously, so it has no mechanism to check whether some value in partition A already exists in partition B.

Other database engines (Oracle, for instance) have global indexes that solve this at the storage layer. Postgres does not, and there's been no serious movement on the mailing lists to add them. So we're left to our own devices.
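The rule is easy to reproduce. A sketch with made-up table and column names (the error text is from recent Postgres releases and may vary slightly by version):

```sql
-- Partitioned by range on id...
CREATE TABLE events (
    id        bigint GENERATED ALWAYS AS IDENTITY,
    dedup_key text NOT NULL
) PARTITION BY RANGE (id);

-- ...so a unique constraint that omits id is rejected:
ALTER TABLE events ADD CONSTRAINT events_dedup_uq UNIQUE (dedup_key);
-- ERROR:  unique constraint on partitioned table must include all
--         partitioning columns

-- Including the partition key is accepted, but note what it means:
-- (dedup_key, id) pairs are unique, while dedup_key alone can still
-- repeat across partitions.
ALTER TABLE events ADD CONSTRAINT events_dedup_uq UNIQUE (dedup_key, id);
```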

When Theory Meets the Event Pipeline

Consider a fairly common (if somewhat contrived) scenario: a table of events partitioned by range on an identity column. The table includes a deduplication key that the application uses to prevent duplicate event processing. Naturally, that should be unique across all partitions.

Now try adding that unique constraint to the table. The partition key is the identity column, the deduplication key doesn't include it, and so Postgres rejects the constraint. You could make a composit[...]

Postgres 19 Release Notes
Posted by Bruce Momjian in EDB on 2026-04-15 at 21:15

I have just completed the first draft of the Postgres 19 release notes. It includes little developer community feedback and still needs more XML markup and links. This year I have created a wiki page explaining the process I use.

The release note feature count is 212, which includes a strong list of administrative and monitoring features. Postgres 19 Beta 1 should be released in a few months. The final release is planned for September/October of this year.

Waiting for PostgreSQL 19 – Online enabling and disabling of data checksums
Posted by Hubert 'depesz' Lubaczewski on 2026-04-15 at 18:05
On 3rd of April 2026, Daniel Gustafsson committed patch: Online enabling and disabling of data checksums   This allows data checksums to be enabled, or disabled, in a running cluster without restricting access to the cluster during processing.   Data checksums could prior to this only be enabled during initdb or when the cluster is … Continue reading "Waiting for PostgreSQL 19 – Online enabling and disabling of data checksums"
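Earlier iterations of this long-running patch set exposed the feature as a pair of SQL functions. Assuming the committed form kept that shape, usage would look roughly like the following; the function names are from prior patch versions and may differ in the final PostgreSQL 19 release:

```sql
-- Kick off checksum enabling; a background worker rewrites existing
-- blocks while the cluster stays fully available for reads and writes.
SELECT pg_enable_data_checksums();

-- Check the cluster-wide state once the worker finishes.
SHOW data_checksums;

-- Disabling no longer requires a restart either.
SELECT pg_disable_data_checksums();
```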

Introducing Xata OSS: Postgres platform with branching, now Apache 2.0
Posted by Tudor Golubenco in Xata on 2026-04-15 at 12:30
Xata core is now available as open source under the Apache 2 license. It adds copy-on-write branching and scale-to-zero compute to Postgres.


Feeds

Planet

  • Policy for being listed on Planet PostgreSQL.
  • Add your blog to Planet PostgreSQL.
  • List of all subscribed blogs.
  • Manage your registration.

Contact

Get in touch with the Planet PostgreSQL administrators at planet at postgresql.org.