Latest Blog Posts

PGConf.be 2024
Posted by Wim Bertels on 2024-11-06 at 15:19

A round-up of the fourth PGConf.be

The shared presentations are online, as are a couple of recordings and turtle-loading have-a-cup-of-tea locally stored photos.

Using the well-known and widely practiced technique of inductive reasoning, we came to the conclusion that this fourth PGConf.be conference was a success, as was the artwork. No animals or elephants were hurt during this event.

The statistics are:

  • 67 participants

    • depending on the session, an extra 80 to 100 students attended as well

  • 11 speakers

  • 4 sponsors

This conference wouldn’t have been possible without the help of volunteers.
To conclude, a big thank you to all the speakers, sponsors and attendees.
Without them a conference is just like a tea party.

An Elephant in the Cluster: Making PostgreSQL Feel at Home on Kubernetes
Posted by Jan Wieremjewicz in Percona on 2024-11-06 at 15:00
TL;DR: Kubernetes was built for stateless apps, but as more stateful applications (like databases) run on it, operators ship quite heavy implementations to work around deficits in the Kubernetes workload management API (such as StatefulSets). While creating custom methods allows flexibility and faster time to market, it also leads to inconsistency and complexity. In this blog, I want to […]

Transitioning from Oracle to PostgreSQL: Partitioning
Posted by Umair Shahid in Stormatics on 2024-11-06 at 11:46

As databases grow, managing large tables becomes more challenging. Table partitioning is a tried-and-tested approach that helps break down large tables into smaller, more manageable segments, enhancing performance, maintainability, and scalability.

What is Table Partitioning?

Table partitioning is a database design technique that divides a large table into smaller, more manageable sub-tables called partitions. Each partition holds a subset of the data based on specific criteria, such as date ranges, categories, or hash values. While partitioning makes it seem like you’re working with a single large table, behind the scenes, queries and operations are distributed across multiple partitions.

This approach serves several key purposes:
- Performance Improvement: Partitioning allows databases to focus operations (like SELECT, UPDATE, or DELETE) on relevant partitions instead of scanning the entire table. For instance, when querying a sales table for a specific month, only the partition corresponding to that month is accessed, significantly reducing the I/O load and boosting performance.
- Better Manageability: By splitting large tables into smaller segments, maintenance tasks such as indexing, backups, and archiving can be performed on individual partitions. This keeps operations manageable, even for tables with billions of rows.
- Efficient Data Retention and Archiving: Data retention policies are easier to enforce when using partitioning. For example, old partitions can be quickly archived or dropped when data is no longer needed, without affecting the rest of the table.

In both Oracle and PostgreSQL, partitioning is a crucial feature for DBAs managing high-volume databases. Although both systems offer range, list, and hash partitioning methods, the implementation and management vary, which is why understanding the nuances is critical for a seamless transition.
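
As a concrete taste of the PostgreSQL side, here is a minimal sketch of the monthly range partitioning described above (table and column names are illustrative, not from the original post):

CREATE TABLE sales (
    id           bigint,
    sale_date    date NOT NULL,
    total_amount numeric
) PARTITION BY RANGE (sale_date);

-- One partition per month; queries for October only touch this partition.
CREATE TABLE sales_2024_10 PARTITION OF sales
    FOR VALUES FROM ('2024-10-01') TO ('2024-11-01');

-- Retention becomes a cheap metadata operation:
-- DROP TABLE sales_2024_10;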

The post Transitioning from Oracle to PostgreSQL: Partitioning appeared first on Stormatics.

System views added to pgdoc.link
Posted by Hubert 'depesz' Lubaczewski on 2024-11-06 at 11:44
Checking the logs for pgdoc.link I noticed that some people were searching for system views, like pg_stat_activity, or pg_stat_all_tables in Pg 9.3. Now, these will work. This increased the total number of known keywords from 1840 to 1883. Not much, but it's progress 🙂

PostgreSQL is super solid in enforcing (well established) constraints!
Posted by Luca Ferrari on 2024-11-06 at 00:00

A note about migrating from other databases…

PostgreSQL is super solid in enforcing (well established) constraints!

Well, let’s turn that around: SQLite3 is somehow too flexible in allowing you to store data!

We all know that.

And we have all been fighting situations where we have a well-defined structure in SQLite3 and, once we try to migrate to PostgreSQL, a bad surprise arrives! As an example, today I was trying to migrate a Django project with the built-in loaddata from a dumpdata, and sadly:



django.db.utils.DataError:
    Problem installing fixture '/home/luca/git/respi/respiato/sql/data.respi.json':
           Could not load respi.PersonPhoto(pk=30647):
               value too long for type character varying(20)



So in my SQLite3 tables some fields (at least one) exceeded the size of the varchar(20), and while PostgreSQL correctly refuses to store such value(s), SQLite3 happily gets them into the database without warning you!

The fix, in this particular case, is quite simple: issuing an ALTER TABLE personphoto ALTER COLUMN file_path TYPE varchar(50) suffices. I could also have used text, but I would like to keep crazy values coming in from my application under control.
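
For reference, a sketch of that fix plus a pre-migration sanity check (the id column is assumed from the Django model; length() works in both SQLite3 and PostgreSQL):

-- Widen the column on the PostgreSQL side:
ALTER TABLE personphoto ALTER COLUMN file_path TYPE varchar(50);

-- Spot offending rows before loading the fixture:
SELECT id, length(file_path) AS len
FROM personphoto
WHERE length(file_path) > 20;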

The point is: sooner or later, you will run into a constraint your stack is not honoring, so be prepared for some trouble.

Using PostgreSQL in the first place would have made long-term maintenance easier, in my opinion.

Plugging the Postgres Upgrade Hole
Posted by Shaun Thomas in Tembo on 2024-11-05 at 09:00
We at Tembo truly love Postgres. We're convinced everyone would love Postgres just as much as we do if only upgrading Postgres between major versions wouldn't be so darn difficult.

Waiting for PostgreSQL 18 – Add SQL function array_reverse()
Posted by Hubert 'depesz' Lubaczewski on 2024-11-05 at 08:17
On 1st of November 2024, Michael Paquier committed patch: Add SQL function array_reverse()   This function takes an array as input, and reverses the position of all its elements. This operation only affects the first dimension of the array, like array_shuffle().   The implementation structure is inspired by array_shuffle(), with a subroutine called array_reverse_n() that … Continue reading "Waiting for PostgreSQL 18 – Add SQL function array_reverse()"
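
A quick illustration of the new function, following the semantics described in the commit message (expected output shown as comments):

SELECT array_reverse(ARRAY[1, 2, 3, 4]);
-- {4,3,2,1}

SELECT array_reverse(ARRAY[[1, 2], [3, 4]]);
-- {{3,4},{1,2}}  (only the first dimension is reversed, like array_shuffle())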

Contributions for the week of 2024-10-28 (Week 44 overview)
Posted by Jimmy Angelakos in postgres-contrib.org on 2024-11-04 at 23:28
[...]

Cloning Postgres user privileges vs ChatGPT
Posted by Kaarel Moppel on 2024-11-04 at 22:00
At pgConf.eu in Athens - by the way, reported to be the largest PostgreSQL conference to date, attesting to the software’s undeniable success! - I got into a sort of interesting hallway discussion. I guess my chat companion was relatively new to the ecosystem - and said something along the...

Rushabh Lathia
Posted by Andreas 'ads' Scherbaum on 2024-11-04 at 14:00
PostgreSQL Person of the Week Interview with Rushabh Lathia: My name is Rushabh Lathia, I am recognized as a PostgreSQL Contributor. Originally, I hail from the vibrant state of Gujarat, India, known for its rich culture, delectable cuisine, and warm hospitality. However, for the past few years, I have been residing in the beautiful city of Pune, Maharashtra. While I cherish my roots in Gujarat, Pune has become my second home, a place where I’ve found my stride, forged lasting connections, and embarked on new adventures.

Exposing PostgreSQL with NGINX Ingress Controller
Posted by Sergey Pronin in Percona on 2024-11-04 at 13:57
I wrote a blog post in the past about a generic approach to exposing databases in Kubernetes with Ingress controllers. Today, we will dive deeper into PostgreSQL with ingress and also review two different ways to expose the TCP service. The goal is to expose multiple PostgreSQL clusters through […]

PL/pgSQL Secrets: How Conditional Expressions Are Parsed and Evaluated Under the Hood.
Posted by Deepak Mahto on 2024-11-02 at 09:35

Recently, in the Official PostgreSQL Slack, a user posted a PL/pgSQL block (shown below) and was surprised that it did not generate a syntax error:

DO $$
DECLARE i INT;
BEGIN
  i = 0;
  IF i = 0 AND THEN 
    RAISE NOTICE 'i = 0';
  END IF;
END; $$;

At first glance, this code block seems incomplete. Notice the IF condition: it appears to be missing an additional condition after the AND operator. Logically, this should cause an exception due to the incomplete condition following AND.

  IF i = 0 AND THEN 

However, during PL/pgSQL execution, the condition is evaluated without any syntax errors or warnings. This raises a critical question:

How does PostgreSQL internally process this condition?
What allows this seemingly incomplete expression to work?

Reviewing such PL/pgSQL, the block looks incomplete, and anyone assuming it must fail is in for a surprise.

In this blog, we’ll dive into the internals of PL/pgSQL to understand how this conditional statement is processed and why PostgreSQL does not flag it as erroneous.

This blog covers the topic both in text and through an accompanying video for a more in-depth look. Scroll down to watch the video or read on for the full written guide.
DatabaseGyaan – PL/pgSQL Conditional Expression Processing Internal.

Uncovering PL/pgSQL Internal Code

One of the greatest advantages of open-source software is the ability to examine the code base directly. This gives us a foundation for understanding how things work or, in some cases, why they don’t break as expected.

Our investigation begins with the PLPGSQL_STMT_IF structure, tracing through the call stack in the pl_exec.c file.

PL/pgSQL Internal Code

By exploring the code, we find that IF statements and their conditions are evaluated using exec_run_select, which effectively executes a SELECT statement that returns a boolean result.
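
As a rough conceptual illustration of that behavior (a simplification, not the internal code path), evaluating an IF condition amounts to running the condition text as a query that yields a single boolean:

DO $$
DECLARE
  i int := 0;
  cond boolean;
BEGIN
  -- Roughly what evaluating "IF i = 0 THEN" amounts to: execute the
  -- condition as a SELECT (with variables passed as parameters) and
  -- branch on the resulting boolean.
  EXECUTE 'SELECT $1 = 0' INTO cond USING i;
  RAISE NOTICE 'condition evaluates to %', cond;
END; $$;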

Ready to enhance your PostgreSQL development skills? My course on PL/pgSQL will hel
[...]

Announcing Release 18 of the PostgreSQL Buildfarm client
Posted by Andrew Dunstan in EDB on 2024-11-02 at 06:07

I have pushed Release 18 of the PostgreSQL Buildfarm client.

In addition to numerous minor tweaks and bug fixes, the following changes are made:

  • many improvements in the collection of log files
  • accommodate change in data checksum default in cross version upgrade testing
  • increase default PGCTLTIMEOUT to 180s
  • improve "safe" environment set, including all in the build_env configuration settings
  • on script timeout, don't just fail but send the result to the server (Note: the timeout mechanism doesn't work on Windows, and has been disabled there)
  • set the default timeout to 4 hours
  • remove redundant TestDecoding module
  • add a module for running "make dist"
  • improve detection of failures when running with meson
  • add README files to the set ignored by the sample trigger_exclude setting

The release can be downloaded from  

https://github.com/PGBuildFarm/client-code/releases/tag/REL_18 or

https://buildfarm.postgresql.org/downloads

Upgrading is highly recommended. 

Why pg_dump Is Amazing
Posted by Robert Haas in EDB on 2024-11-01 at 15:56

I wrote a blog post a couple of weeks ago entitled Is pg_dump a Backup Tool?. In that post, I argued in the affirmative, but also said that it probably shouldn't be your primary backup mechanism. For that, you probably shouldn't directly use anything that is included in PostgreSQL itself, but rather a well-maintained third-party backup tool such as barman or pgbackrest. But today, I want to talk a little more about why I believe that pg_dump is both amazingly useful for solving all kinds of PostgreSQL-related problems and also just a great piece of technology.

Read more »

Offline PostgreSQL Installation on RHEL 9: Solving the No Internet Challenge
Posted by semab tariq in Stormatics on 2024-10-31 at 09:01

PostgreSQL is one of the most loved databases, especially by developers, for its simplicity, easy configurations, and massive community support. It's an open-source powerhouse known for handling everything from small projects to large-scale applications. 

While major cloud providers like AWS, Google, and Microsoft offer robust solutions for hosting databases in the cloud, not all businesses can or want to go this route.

Many companies choose to store their databases in secure, closed environments—machines without internet access or outside the cloud. This is often done to maintain tight control over sensitive data and to meet strict security requirements. However, installing PostgreSQL in a restricted, offline environment can be a real challenge, as it limits access to typical installation tools.

Recently, I worked on a client project with a similar setup—a secure, offline environment without internet access—where we needed to install and configure PostgreSQL from scratch. If you’re facing the challenge of setting up PostgreSQL in a closed environment, this blog will guide you through the process step-by-step.

The post Offline PostgreSQL Installation on RHEL 9: Solving the No Internet Challenge appeared first on Stormatics.

Contributions for the week of 2024-10-21 (Week 43 overview)
Posted by Jimmy Angelakos in postgres-contrib.org on 2024-10-30 at 22:04
  • On October 22nd the Extensions Ecosystem Summit took place, as an unofficial side event of PGConf EU in Athens. Supported by Tembo (https://tembo.io/), Percona, Xata, and Timescale, 5 speakers shared their (experience working on) extensions, followed by an open space / contribution fest meant to attract new contributors to the projects. The speakers who donated their time:

The event was organized by:
  • Floor Drees
  • David Wheeler

An Interlude: Why you do not want to use C/POSIX locale.
Posted by Christophe Pettus in pgExperts on 2024-10-30 at 14:50

(A quick interlude in our series on locales and character encodings in PostgreSQL.)

C/POSIX locale is only useful if you know that you will never have characters in your strings other than 7-bit ASCII, or you are 100% sure that you are never going to have mixed client encodings. If you do have non-7-bit-ASCII characters with mixed client encodings, zaniness can result:

c=# \l
                                                         List of databases
   Name    |      Owner       | Encoding | Locale Provider |   Collate   |    Ctype    |  Locale   | ICU Rules | Access privileges 
-----------+------------------+----------+-----------------+-------------+-------------+-----------+-----------+-------------------
 c         | xof              | UTF8     | libc            | POSIX       | POSIX       |           |           | 
 (17 rows [others deleted])

c=# set client_encoding='UTF8';
SET
c=# CREATE TABLE m (v varchar);
CREATE TABLE
c=# INSERT INTO m VALUES('Í');
INSERT 0 1
c=# set client_encoding='LATIN1';
SET
c=# INSERT INTO m VALUES('é');
INSERT 0 1
c=# SELECT * FROM m;
 v  
----
 ?
 é
(2 rows)

c=# set client_encoding='UTF8';
SET
c=# SELECT * FROM m;
 v  
----
 Í
 é
(2 rows)

c=# INSERT INTO m VALUES('‰');
INSERT 0 1
c=# SELECT * FROM m;
 v  
----
 Í
 é
 ‰
(3 rows)

c=# set client_encoding='LATIN1';
SET
c=# SELECT * FROM m;
ERROR:  character with byte sequence 0xe2 0x80 0xb0 in encoding "UTF8" has no equivalent in encoding "LATIN1"

Even worse (in this regard) than C locale is the SQL_ASCII encoding. Please never use it.

PostgreSQL in The Big Apple
Posted by Stefanie Janine on 2024-10-29 at 23:00

For the first time we have a guest writer, Emma Saroyan.

Discovering PostgreSQL - My Journey

I recently traveled from Armenia to New York City to speak at PGConf NYC 2024, organized by PgUS. PostgreSQL, originally called Postgres, is the #1 open source relational database in the world. I gave a talk about my journey into discovering Postgres, and it was an inspiring experience engaging with the PostgreSQL community, sharing insights, and learning about the latest advancements in the field.

Emma Saroyan presenting at PGConf NYC

The 3-day conference, held from September 30 to October 2, 2024, at Convene 237 Park Avenue in New York City, welcomed hundreds of attendees from across the US and around the world.

Entry to PGConf NYC 2024 held in midtown Manhattan

The speakers shared insights on their lessons learned in the Postgres world, along with the latest developments in PostgreSQL’s security, performance, new features, best practices, challenges, community, and more.

Bruce Momjian presenting a talk

Some of the world’s largest software companies, including Microsoft, Google, AWS, and Fujitsu were present at the conference. They are heavily invested in PostgreSQL, making its adoption a great success in the global database market.

Many companies are migrating from alternative database platforms to PostgreSQL due to its free and open-source nature. Companies like EDB, Percona, Citadel, pganalyze, Notion, and Xata showcased prominent use cases for PostgreSQL in areas such as financial services, analytics, and AI. These examples highlight not only why PostgreSQL is the top choice among open-source databases, but also reflect how its strong, collaborative community is a key factor in its widespread adoption.

Boriss Mejías and Emma Saroyan standing in front of an EDB rollup

With over 35 years of active development, PostgreSQL has become the world’s most advanced open source database, excelling in various areas and supported by a vibrant, dedicated community that continuously drives its growth and innovation. There are a number of ways to get involved in the Postgres community: user groups, submitting patches, documentation, events, conferences, blogs,

[...]

PGConf & Extension Ecosystem Summit EU 2024
Posted by David Wheeler in Tembo on 2024-10-29 at 22:04
The PGConf 2024 logo

Last week I MCed the first Extension Ecosystem Summit EU and attended my first PGConf EU in Athens, Greece. Despite my former career as an archaeologist — with a focus on Mediterranean cultures, no less! — this was my first visit to Greece. My favorite moment was the evening after the Summit, when I cut out of a networking shindig to walk to Pláka and then circumnavigate the Acropolis. I mean just look at this place!

Nighttime photo of the Acropolis of Athens

The Acropolis of Athens on the evening of October 22, 2024. © 2024 David E. Wheeler

Highlight of the trip for sure. But the Summit and conference were terrific, as well.

Extension Ecosystem Summit

Floor Drees kindly organized The Extension Ecosystem Summit EU, the follow-up to the PGConf.dev original. While the Vancouver Summit focused on developers, we tailored this iteration to users. I started the gathering with a condensed version of my POSETTE talk, “State of the Postgres Extension Ecosystem”, but updated with a Trunk OCI Distribution demo. Links:

We then moved into a lightning round of 10 minute introductions to a variety of extensions:

[...]

Contributions for the week of 2024-10-14 (Week 42 overview)
Posted by Jimmy Angelakos in postgres-contrib.org on 2024-10-29 at 19:19
  • On October 15th, Paul Jungwirth presented temporal INSERT at the Chicago PostgreSQL User Group (PUG) meetup. You can find the recording and write-up here.

4 Ways to Create Date Bins in Postgres: interval, date_trunc, extract, and to_char
Posted by Christopher Winslett in Crunchy Data on 2024-10-29 at 14:30

You followed all the best practices, your sales dates are stored in perfect timestamp format… but now you need to get reports by day, week, quarters, and months. You need to bin, bucket, and roll up sales data in easy-to-view reports. Do you need a BI tool? Not yet, actually. Your Postgres database has hundreds of functions that let you query data analytics by date. With some good old-fashioned SQL, you have powerful analysis and business intelligence with date details on any data set.

In this post, I'll walk through some of the key functions for querying data by date. For a summary of the best ways to store date and time in Postgres, see Working with Time in Postgres. We also have an interactive web-based tutorial with lots of sample code for working with data by date, using a sample data set of ecommerce orders.

Interval - the Swiss-army knife of date manipulation

The interval is a data type used to modify other times. For instance, an interval can be added or subtracted from a known time. Interval is super handy and the first place you can go to quickly summarize data by date. Like a Swiss-army knife, it’s not always the best tool for the job, but it can be used in a pinch. Let’s talk about where it excels.

How can we run a query that returns the total sum of orders for the last 90 days? Of course, interval can be used. Without interval, we often see people using a date variable passed in from an external source that has generated the date. Using now() - INTERVAL '90 days', you can use the same query no matter the date. The other secret sauce is the use of now(), which is a timestamp for the current time on the server.

SELECT
  SUM(total_amount)
FROM
  orders
WHERE
  order_date >= NOW () - INTERVAL '90 days';
    sum
-----------
 259472.99
(1 row)

Instead of using now(), current_date can be used to return a date instead of a time.

SELECT
  SUM(total_amount)
FROM
  orders
WHERE
  order_date >= current_date - INTERVAL '90 days';

These two queries are different — current_date starts at the beg

[...]

Using CTID Based Pagination for Data Cleanups in PostgreSQL
Posted by Shayon Mukherjee on 2024-10-29 at 11:44
When dealing with very large PostgreSQL tables (we’re talking 15TB+), routine maintenance like archiving very old data can become surprisingly challenging, despite having good indexes. I recently faced this issue when trying to clean up very old data on a very large legacy table.

The Problem

The initial approach used standard ID-based pagination. Imagine a query like this:

DELETE FROM large_table
WHERE id IN (
    SELECT id
    FROM large_table
    WHERE created_at < '2023-04-01'
    ORDER BY id
    LIMIT 10000
);

While totally harmless on the surface, this kept timing out (10 min statement_timeout 😬).
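
The post goes on to describe CTID-based batching; one possible shape of such a batched delete (a sketch, not necessarily the author's exact query; the block bounds and cutoff date are illustrative, and PostgreSQL 14+ can satisfy the ctid predicates with a TID Range Scan):

-- Delete in batches bounded by ranges of physical block numbers:
DELETE FROM large_table
WHERE ctid >= '(0,0)'::tid
  AND ctid <  '(65536,0)'::tid
  AND created_at < '2023-04-01';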

New way to search PostgreSQL documentation
Posted by Hubert 'depesz' Lubaczewski on 2024-10-29 at 10:45
PostgreSQL documentation is, generally speaking, great. But it isn't the easiest thing to search in. Over the years I memorized URLs to certain docs, but there is a limit to that. What's more, there are certain inconsistencies. For example – most pages that describe a program have a name that starts with app-. But not all. Some … Continue reading "New way to search PostgreSQL documentation"

Case study: optimization of weirdly picked bad plan
Posted by Hubert 'depesz' Lubaczewski on 2024-10-28 at 12:04
We recently hit an interesting case where the planner picked the wrong plan. Kinda. And figured it could be used to show how to deal with such cases. So, we have some databases on PostgreSQL 14 (yes, I know, we should upgrade, but it is a LONG project to upgrade, so it's not really on the table now). … Continue reading "Case study: optimization of weirdly picked bad plan"

Transitioning from Oracle to PostgreSQL: PL/SQL vs PL/pgSQL
Posted by Umair Shahid in Stormatics on 2024-10-28 at 07:58

Structured Query Language (SQL) is the standard language for managing and manipulating relational databases. It serves as the core mechanism for interacting with databases, enabling users to perform tasks such as querying data, updating records, and managing database structures. SQL’s declarative nature makes it ideal for retrieving and modifying data, but it has limitations when it comes to implementing complex business logic directly within the database.

To address these limitations, database systems like Oracle and PostgreSQL offer procedural extensions to SQL. Oracle’s PL/SQL and PostgreSQL’s PL/pgSQL allow developers to implement more advanced logic, including loops, conditionals, error handling, and transaction control—all within the database. These procedural languages enhance SQL’s capabilities, making it possible to write complex routines that can execute closer to the data, thus improving performance and maintainability.

As an Oracle DBA transitioning to PostgreSQL, understanding the differences between PL/SQL and PL/pgSQL is critical. This article explores the nuances between the two languages, covering syntax, features, and practical migration tips, ensuring you can leverage PL/pgSQL effectively in your PostgreSQL environment.
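
For a first taste of the procedural features mentioned above, here is a minimal, illustrative PL/pgSQL block (not taken from the article itself) combining a loop, a conditional, and an exception handler:

DO $$
DECLARE
  total int := 0;
BEGIN
  FOR i IN 1..5 LOOP
    IF i % 2 = 0 THEN
      total := total + i;   -- conditional logic inside a loop
    END IF;
  END LOOP;
  RAISE NOTICE 'sum of even numbers in 1..5 = %', total;
EXCEPTION
  WHEN division_by_zero THEN
    RAISE NOTICE 'caught: %', SQLERRM;  -- error handling in the database
END; $$;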

The post Transitioning from Oracle to PostgreSQL: PL/SQL vs PL/pgSQL appeared first on Stormatics.

Prague PostgreSQL Meetup on October 29th
Posted by Gülçin Yıldırım Jelínek in Xata on 2024-10-28 at 00:00
The next Prague PostgreSQL Meetup will take place on October 29th!

Recap pgconf.eu 2024
Posted by Stefanie Janine on 2024-10-27 at 23:00

PostgreSQL Europe Conference 2024

This year's conference took place in Athens, Greece, from October 22nd until October 25th.
It was the biggest European PostgreSQL conference so far, with more than 780 attendees.

I am very proud that my company, ProOpenSource OÜ, has sponsored every PostgreSQL Europe Conference since the company was founded.

Tuesday 22nd of October

PGEU Diversity Committee

It was the first ever in-person meeting of the PGEU Diversity Committee, which was founded a couple of months ago this year.
We discussed what we think should be the next things on the to-do list.
More information about the goals can be found at Diversity.

One of the concrete achievements: we had a Diversity Celebration Board for the first time. Karen Jex organized each and every item for that board.

Diversity Celebration Board with lots of messages from attendees

In addition, Karen Jex brought the new diversity logo with her as stickers. The logo was designed by her daughter. I attached one to my notebook lid. Karen Jex also gave every member of the committee a personalized sticker with their name on it. Thank you for your great work, Karen.

PostgreSQL Diversity sticker showing two elephants, in the middle top of my notebook lid
PostgreSQL Diversity pin showing two elephants, with my name on it beneath the diversity logo

Extension Ecosystem Summit

I attended the Extension Ecosystem Summit. David Wheeler announced he had started working on the next PGXN, thanks to the support of Tembo.
PGXN is the PostgreSQL Extension Network: the place to go to find a lot of good extensions that enhance PostgreSQL with new features, data types, foreign data wrappers, etc.

PGXN is not under the roof of PostgreSQL governance. IMHO that is a big mistake; PGXN would get more awareness if it were under the roof of PostgreSQL.

Some developers gave talks about their PostgreSQL extensions. As usual I learned something new. I will go into that in detail in another post.

Afterwards there were some good discussions and knowledge sharing about extensions and why they are sometimes hard to maintain. In case you don’t know, I am taking care of three extensions.

Wednesday 23rd of October

Talks High

[...]

Mapping Between Pgbouncer and Postgres Connection States
Posted by Abhishek Chanda on 2024-10-27 at 16:21

Pgbouncer is a popular connection pooler for Postgres. Connections from the end user to pgbouncer are called client connections from its point of view. Connections from pgbouncer to Postgres are called server connections. In a typical deployment, end users connect to pgbouncer, which maintains a pool of persistent server connections. The pooler assigns a server connection to a client connection based on the given pooling mode (that is beyond the scope of this discussion). Thus, from Postgres’ point of view, the client is actually pgbouncer, which gets its own backend. Hence there is always a 1:1 correspondence between pgbouncer’s server connection and its backend in Postgres. A mental model for their relationship is useful for debugging pgbouncer-related issues. We will start with some definitions to establish that mapping.
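
To make that mapping concrete, one way to line the two sides up is to match a server connection's local socket in pgbouncer against the backend's client socket in Postgres (a sketch; the address and port values are illustrative):

-- On the pgbouncer admin console (e.g. psql -p 6432 pgbouncer):
SHOW SERVERS;  -- note local_addr / local_port of a server connection

-- On Postgres, the matching backend is the one whose client socket
-- equals that local_addr / local_port pair:
SELECT pid, usename, datname, state
FROM pg_stat_activity
WHERE client_addr = '127.0.0.1'
  AND client_port = 53424;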
