Latest Blog Posts

POSETTE 2024
Posted by Paolo Melchiorre in ITPUG on 2024-05-07 at 14:15

POSETTE: An Event for Postgres (pronounced /Pō-zet/, and formerly called Citus Con) is a free and virtual developer event. The name POSETTE stands for Postgres Open Source Ecosystem Talks Training & Education.

cursor_tuple_fraction and the PostgreSQL JDBC driver
Posted by Laurenz Albe in Cybertec on 2024-05-07 at 09:15

This article is a kind of failure story. I originally set out to write a piece about the PostgreSQL parameter cursor_tuple_fraction, only to discover that my boss had already covered that topic to some extent. Then I thought that I could investigate the parameter’s effect on the JDBC driver. That led me to an embarrassingly long wrestle with auto_explain and an eventual failure to find anything. But the world is littered with success stories, so why not share some failure? And there are one or two interesting insights in store that might make the article worth your while.

About cursor_tuple_fraction

In a database, cursors are the canonical way of iterating through result sets. Cursors avoid the need to read the complete result set at once, which reduces memory consumption. In PostgreSQL you can use the non-standard statement DECLARE to explicitly create a cursor, but usually you use client API functions or PL/pgSQL FOR loops that create PostgreSQL cursors “behind the scenes”.
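
For illustration, here is a minimal psql sketch of the explicit route (table and column names are stand-ins, not from the article):

BEGIN;

DECLARE c CURSOR FOR
    SELECT id, payload FROM big_table ORDER BY id;

FETCH 100 FROM c;   -- read and process the result set in chunks
FETCH 100 FROM c;

CLOSE c;
COMMIT;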

Transferring result rows to the client takes some time, and processing on the client side takes even more time. If an application processes a query result in chunks using a cursor, it can take a while until it reaches the end of the result set. Therefore, it is most important to get the first result rows as quickly as possible when you read the result set using a cursor. The total execution time is less important. This is where the PostgreSQL parameter cursor_tuple_fraction comes into play: for queries executed using a cursor, it tells the optimizer to prefer execution plans that produce the first result rows quickly, at the expense of the total query execution time. Hans’ article shows you a good example of what cursor_tuple_fraction does.
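
To experiment with the parameter (a quick sketch; 0.1 is the documented default):

SHOW cursor_tuple_fraction;        -- 0.1: optimize for the first 10% of the rows

SET cursor_tuple_fraction = 0.01;  -- prefer fast-startup plans even more strongly

SET cursor_tuple_fraction = 1.0;   -- optimize for total execution time,
                                   -- as for a query run without a cursor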

Another use case for cursor_tuple_fraction is if you use a cursor, but want to optimize the total execution time of the query. This is usually the case if you declare a cursor WITH HOLD, so that it can outlive a database transaction. The result

[...]

Nazir Bilal Yavuz
Posted by Andreas 'ads' Scherbaum on 2024-05-06 at 14:00
PostgreSQL Person of the Week Interview with Nazir Bilal Yavuz: I am Nazir Bilal Yavuz, an open source PostgreSQL developer working at Microsoft. I live in Turkey. I spend my free time doing sports, playing games and football, and watching TV series and movies.

The Notifier Pattern for Applications That Use Postgres
Posted by Brandur Leach on 2024-05-06 at 05:54

Listen/notify in Postgres is an incredible feature that makes itself useful in all kinds of situations. I’ve been using it a long time, started taking it for granted long ago, and was somewhat shocked recently looking into MySQL and SQLite to learn that even in 2024, no equivalent exists.

In a basic sense, listen/notify is such a simple concept that it needs little explanation. Clients subscribe on topics and other clients can send on topics, passing a message to each subscribed client. The idea takes only three seconds to demonstrate using nothing more than a psql shell:

=# LISTEN test_topic;
LISTEN
Time: 2.828 ms

=# SELECT pg_notify('test_topic', 'test_message');
 pg_notify
-----------

(1 row)

Time: 17.892 ms
Asynchronous notification "test_topic" with payload "test_message" received from server process with PID 98481.

But despite listen/notify’s relative simplicity, when it comes to applications built on top of Postgres, it’s common to use it less than optimally, eating through scarce Postgres connections and with little regard for failure cases.


Here’s where the notifier pattern for Postgres comes in. It’s an extremely simple idea, but in my experience, one that’s rarely seen in practice. Let’s start with these axioms:

  • LISTENs are affixed to specific connections. After listening, the original connection must still be available somewhere to successfully receive messages.

  • There may be many components within an application that’d like to listen on topics for completely orthogonal uses.

  • Despite optimizations over the years, connections in Postgres are still somewhat of a precious, limited resource, and should be conserved. We’d like to minimize the number of them required for listen/notify use.

  • A single connection can listen on any number of topics.
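
The last axiom is easy to verify in a psql shell (my sketch, in the style of the demo above; result rows omitted and PIDs elided):

=# LISTEN topic_a;
LISTEN
=# LISTEN topic_b;
LISTEN
=# SELECT pg_notify('topic_a', 'one');
Asynchronous notification "topic_a" with payload "one" received from server process with PID ...
=# SELECT pg_notify('topic_b', 'two');
Asynchronous notification "topic_b" with payload "two" received from server process with PID ...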

With those stated, we can explain the role of the notifier. Its job is to hold a single Postgres connection per process, allow other components in the same

[...]

Posted by Henrietta Dombrovskaya on 2024-05-06 at 01:22

At PG Day Chicago, I presented an extended version of my talk given last year at Citus Con – Temporal Tables and Standard. Between the time my talk was accepted and the time I delivered the presentation, I learned that PG 17 would include the first-ever support of an important temporal feature: uni-temporal primary keys and unique constraints.

It has been a while since the last time I presented anything temporal-related, which meant that many people in the audience hadn’t heard anything about the bitemporal model before. There was no way I could cover everything in 40 minutes, and many questions, which were asked both during the Q&A and later in the hallways, remained unanswered.

In this blog, I will address some of these questions and expand on what I would like to see in the upcoming Postgres implementation of temporal tables.

  1. Bitemporal framework and GIST. The key feature of the bitemporal model is its dependency on existing PG extensions, specifically on GIST indexes and GIST with EXCLUSION constraints. In fact, the GIST extension does all the work needed to support (bi)temporal primary/unique keys: it ensures that no two rows with the same primary key value have overlapping time periods. In the bitemporal model, we check the same thing for the two time dimensions. For those who never needed GIST indexes, here is the relevant documentation, and a minimal example follows this excerpt. I learned about GIST when I first started implementing bitemporality, and I could not believe all my needs were already met!
  2. Disk space requirements. For some reason, people believe that keeping all versions of each tuple requires “too much disk space.” I won’t deny that you need more space to store row versions than when you don’t; however, how much more is often overestimated. In my talk at PG Conf.EU 2022, I presented an example of storing changelog vs. storing data in a bitemporal model and demonstrated that it actually takes less space while allowing queries to be executed much faster.
  3. Excessive IO. One of the questions I was asked wa
[...]
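
A minimal sketch of the GiST exclusion machinery mentioned in item 1 (table and column names are mine): with btree_gist, an exclusion constraint can combine equality on the key with an overlap check on the validity range, which is exactly what a uni-temporal primary key needs.

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE price_history (
    product_id  integer,
    price       numeric,
    valid_range tstzrange,
    EXCLUDE USING gist (product_id WITH =, valid_range WITH &&)
);

INSERT INTO price_history VALUES (1, 9.99, '[2024-01-01,2024-06-01)');
INSERT INTO price_history VALUES (1, 8.99, '[2024-05-01,2025-01-01)');  -- rejected: overlapping validity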

LinuxFest Northwest PostgreSQL Booth Report
Posted by gabrielle roth on 2024-05-06 at 00:30
Last weekend, Jenn and I headed to Bellingham for LinuxFest Northwest. It was my first time back since about 2015, and I’ve really missed attending this laid-back, 100% volunteer-run, free to attend event. We had a lot of good conversations with folks coming by the booth. Some stats: I’m glad to see LFNW back on […]

Saturation Arithmetic with a PostgreSQL Extension
Posted by Artur Zakirov in Adjust on 2024-05-03 at 08:00
Introduction

In certain situations, it can be beneficial to ignore integer overflow errors. Consider, for instance, an integer table column that typically accommodates small values. On rare occasions, a significantly larger value might be inserted. Attempting to aggregate these values could lead to an integer overflow error, causing the entire aggregate query to fail, which is far from ideal. Handling large values on the client or worker side, where the data is inserted, is one option.
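
The idea in plain SQL (my illustration of saturation semantics, not the extension’s actual API): clamp to the type’s range instead of raising an error.

CREATE FUNCTION sat_add(a bigint, b bigint) RETURNS bigint
LANGUAGE sql IMMUTABLE AS $$
SELECT CASE
    WHEN a > 0 AND b > 0 AND a > 9223372036854775807 - b
        THEN 9223372036854775807                          -- clamp to bigint max
    WHEN a < 0 AND b < 0 AND a < (-9223372036854775807 - 1) - b
        THEN -9223372036854775807 - 1                     -- clamp to bigint min
    ELSE a + b
END
$$;

-- SELECT sat_add(9223372036854775807, 1);  --> 9223372036854775807, no error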

Conversion Gotchas: Implicit Conversion in Oracle to PostgreSQL Migration
Posted by Deepak Mahto on 2024-05-03 at 05:51

Introduction – Implicit Conversion

Oracle to PostgreSQL migration is a playground that uncovers and touches upon many database concepts, which are always intriguing and fun to explore. Implicit conversion, i.e. the automatic conversion of data types so that the database optimizer can compare them, is a concept frequently encountered in database migrations. It allows data types to be converted automatically for an expression or condition when necessary for SQL execution, thus preventing failures.

Implicit conversion can be a performance issue in many cases, but it’s still common to encounter it in databases due to various real-world constraints. In some instances, we might utilize function-based indexes to mitigate this issue as well. In this blog, we will explore the differences in implicit conversion between Oracle and PostgreSQL and discuss methods to mitigate it. Our focus is not on comparing which database engine makes ideal choices regarding implicit conversion, but rather on achieving similar functionality in PostgreSQL when needed, such as comparing two different data types with implicit conversions.
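
As a quick preview of the PostgreSQL side (my sketch, mirroring the Oracle table defined below): PostgreSQL has no implicit cast between character and numeric types, so the equivalent comparison errors out unless you cast explicitly.

create table tstimplicitconv (col1 numeric, col2 numeric(6,0), col4 varchar(100));
insert into tstimplicitconv values (1, 1, '1');

select 1 from tstimplicitconv where col4 = col2;
-- ERROR:  operator does not exist: character varying = numeric

select 1 from tstimplicitconv where col4::numeric = col2;  -- explicit cast works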

Oracle – Implicit Conversion Observations

Let’s begin by creating some mock tables in Oracle to understand implicit conversion characteristics. In our examples, we will focus on equality comparison between VARCHAR or TEXT datatypes and NUMBER family data types.

create table tstimplicitconv (col1 number, col2 number(6), col3 float,
col4 varchar2(100));
insert into tstimplicitconv values (1,1,1.1,'1');
commit;

explain plan for select 1 from tstimplicitconv where col1 = col4;
select * from table(dbms_xplan.display);

explain plan for select 1 from tstimplicitconv where col4 = col2;
select * from table(dbms_xplan.display);

Tabular representation of implicit conversion behavior in Oracle for equality operator.

Operator | Auto Conversion (YES/NO) | Implicit Con
[...]

Bringing IvorySQL to Neon Autoscaling Platform
Posted by David Z in Highgo Software on 2024-05-02 at 17:02

1. Overview

In this blog post, we will guide you through the process of integrating IvorySQL, an open-source database built on PostgreSQL, into Neon Autoscaling Platform. Throughout this guide, we’ll walk you through each step, providing clear instructions and demonstrations.

2. What is IvorySQL

“IvorySQL is advanced, fully featured, open source Oracle compatible PostgreSQL with a firm commitment to always remain 100% compatible and a Drop-in replacement of the latest PostgreSQL. IvorySQL adds a “compatible_db” toggle switch to switch between Oracle and PostgreSQL compatibility modes. One of the highlights of IvorySQL is PL/iSQL procedural language that supports oracle’s PL/SQL syntax and Oracle style Packages.” [1]

The general architecture of IvorySQL is shown below. For more details, see here.

IvorySQL Key features:

  • Powered by PostgreSQL
    Leveraging the robust foundation of PostgreSQL, IvorySQL inherits its comprehensive SQL capabilities, rock-solid reliability, and access to a vast ecosystem of tools and extensions.

  • Oracle Compatibility
    IvorySQL introduces a unique “compatible_db” toggle switch, facilitating effortless transitions between Oracle and PostgreSQL compatibility modes. It supports Oracle’s PL/SQL syntax and includes Oracle-style Packages, making migration from Oracle databases a smooth process.

  • Customization
    Users have the freedom to customize IvorySQL to suit their specific requirements. Simply download the source code and tailor it to your preferences.

  • Open Source
    As an Apache 2 licensed project, IvorySQL is freely available for download and use, fostering a collaborative development environment.

  • Open Community
    A creative and inclusive community that encourages collaboration and innovation.

  • Backed by HighGo
    Supported by HighGo, a leading provider of PostgreSQL databases, IvorySQL benefits from expert guidance and support to ensure its con
[...]

What's new in pgvector v0.7.0
Posted by Pavel Borisov in Supabase on 2024-05-02 at 00:00
Exploring new features in pgvector v0.7.0

Hacking on PostgreSQL is Really Hard
Posted by Robert Haas in EDB on 2024-05-01 at 18:05

Hacking on PostgreSQL is really hard. I think a lot of people would agree with this statement, not all for the same reasons. Some might point to the character of discourse on the mailing list, others to the shortage of patch reviewers, and others still to the difficulty of getting the attention of a committer, or of feeling like a hostage to some committer's whimsy. All of these are problems, but today I want to focus on the purely technical aspect of the problem: the extreme difficulty of writing reasonably correct patches.

Read more »

LDAP Authentication in PgBouncer Through PAM
Posted by Jobin Augustine in Percona on 2024-04-30 at 14:46
There are many cases where external connection poolers like pgBouncer become unavoidable despite the costs and complexities associated with them. PgBouncer is one of the most popular external connection poolers for PostgreSQL. It is thin and lightweight, so it doesn’t have built-in authentication features like LDAP, which is essential for many enterprises. Luckily, pgBouncer has […]

Mini Summit: Universally Buildable Extensions
Posted by David Wheeler in Tembo on 2024-04-30 at 13:44

Well that snuck up on me. Tomorrow, May 1, 2024, Yurii Rashkovskii of Omnigres will be giving a presentation at the fifth installment of the Postgres extension ecosystem mini-summit. The talk, “Universally buildable extensions: dev to prod”, should provoke some interesting asymmetrical thinking and discussion. I hope to see you there.

Note! If you reserved a spot at a prior mini-summit, you will need to do so again for each subsequent event, or you will get no reminders from Eventbrite. If it’s sold out, just email david@ this domain, ping me on Mastodon, or ask in the #extensions channel on the Postgres Slack or the Postgres Discord for a link or, if you’d like, a calendar invitation.

Logging Basics for PostgreSQL
Posted by muhammad ali in Stormatics on 2024-04-30 at 12:37

Explore foundational parameters for maximizing the utility of PostgreSQL logs.

The post Logging Basics for PostgreSQL appeared first on Stormatics.

The 150x pgvector speedup: a year-in-review
Posted by Jonathan Katz on 2024-04-30 at 00:00

I wanted to write a “year-in-review” covering all the performance improvements pgvector has made (with significant credit to Andrew Kane), highlighting specific areas where pgvector has improved (including one 150x improvement!) and areas where we can continue to do better.

A few weeks ago, I started outlining this post and began collecting the data. While I was working on this over a two-week period, no fewer than three competitive benchmarks against pgvector were published. To me, this is a testament both to how well pgvector handles vector workloads (and by extension, PostgreSQL too) and to the fact that people are using it as the baseline to compare their vector search systems against.

Some of these benchmarks did contain info that identified areas where we can continue to improve both PostgreSQL and pgvector, but I was generally disappointed in the methodology used to make these comparisons. Of course I’d like to see pgvector perform well in benchmarks, but it’s important to position technologies fairly and to be vocally self-critical about where your system can improve, to build trust in what you’re building.

I have a separate blog post planned on how best to present benchmark studies between different systems for vector similarity search (it’s a topic I’m interested in). Today though, I want to compare pgvector against itself, highlighting areas where it has improved over the past year and where the project can continue to go and grow.

How I ran these tests

An important aspect of any benchmark is transparency. First, I’ll discuss the test methodology I used, describe the test environment setup (instances, storage, database configuration), and then discuss the results. If you’re not interested in this part, you can skip ahead to “The 150x pgvector speedup”, but this information can help you with your own testing!

First, what are we testing for? We’ll be looking at these specific attributes in these tests:

  • Recall: A measurement of the relevancy of our results - what percentage of the expected results are returned dur
[...]
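
The truncated bullet is describing the usual approximate-nearest-neighbor notion of recall (my paraphrase, not the post’s exact wording):

recall@k = |approximate results at k ∩ exact k nearest neighbors| / k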

Hey Chicago – We did it!!!
Posted by Henrietta Dombrovskaya on 2024-04-29 at 04:16

PG Day Chicago 2024 is over. For the last two days, I have been reading my LinkedIn and X feeds, which are full of mentions of PG Day Chicago. Everyone is happy and thankful.

Although I am busy “liking” all of these posts—not just “liking” but truly liking them—I didn’t blog anything myself. Two days later, I finally organized my thoughts to talk about that event.

I want to start by saying a big thank you to everyone involved: organizers, speakers, sponsors, volunteers, and attendees! You were all essential to the conference’s success. And although there are definitely some things that I did wrong, I wanted to focus on things that I did for the first time this year and which, I hope, were done right.

  • The best decision I made was inviting Karen Jex to be the CfP committee chair. Her support was incredible, and I am forever thankful to her! (Of course, she’ll have to be the CfP chair again :)).
  • We had fourteen student volunteers in addition to experienced staff, and everybody mentioned their excellent work.
  • Three participants received travel scholarships from the PgUS Diversity and Inclusion committee.
  • We distributed six vouchers from Postgres Women.
  • We were able to redistribute six sponsors’ vouchers that were not going to be used, passing them on to students.
  • … and all of the students were very thankful for the opportunity!

And that’s what makes me especially happy: the number of participants for whom it was the first Postgres conference ever and who recognized it as a unique opportunity to get to know the Postgres community, to be exposed to the incredible amount of knowledge, and to get inspired.

PostgreSQL supported platforms over time
Posted by Peter Eisentraut in EDB on 2024-04-29 at 04:00

The recent discussion about AIX support in PostgreSQL (as of now removed in PostgreSQL 17) led me to look through the project’s history, to learn what platforms we have supported when.

In this context, “platform” really means operating system. One useful proxy is looking at the files in src/template/, because every supported platform needs to be listed there. There are other dimensions, such as what CPU architectures are supported. Historically, this was tied to the operating system (e.g., HP-UX/HPPA, Alpha/Tru64, AIX/PPC, Irix/MIPS), but nowadays, the combinations are more plentiful. CPU support in PostgreSQL might be the subject of a future article.

As usual, at some point in the distant past, this was all very murky and hard to decipher from the source code history, so let’s start with …

  • PostgreSQL 6.5 (1999) was the first mention in the source code of support for building on Windows, using MinGW (essentially GCC). I wonder what you could do with this then? It’s listed in the release notes! But official full Windows support didn’t come until much later, so this must have been quite limited.

  • PostgreSQL 7.1 (2001) added QNX, BeOS, and macOS (a.k.a. Darwin). With hindsight, one of these is not like the other!

At this point, PostgreSQL supported a formidable 22 platforms.

Also, the PostgreSQL build farm started in 2004. Over time, we have developed a quasi-policy that a platform needs to be represented in the build farm to be considered officially supported.

  • PostgreSQL 8.0 (2005) added native Windows server. This completed the work started around PostgreSQL 6.5.

  • PostgreSQL 8.2 (2006) removed QNX and BeOS. It added support for building for Windows using MSVC (Microsoft’s Visual C compiler suite).

  • PostgreSQL 9.2 (2012) began the great purges. This release removed DG/UX, NeXTSTEP, SunOS 4 (pre-Solaris), SVR 4, Ultrix, Univel, and BSDi. For some of these, I don’t even remember what they were. Al

[...]

Hamburg PostgreSQL Meetup in April 2024
Posted by Andreas Scherbaum on 2024-04-28 at 22:08

On April 23rd - after a very long break - the third PostgreSQL Meetup took place in Hamburg.

Sascha had temporarily taken over the Meetup group after the previous admins were no longer available. Last year at PGConf.DE 2023, I asked if anyone would be interested in reviving the group. Tobias agreed to do so, and Lenz also joined the effort at PGConf.EU 2023. After a few organizational details, the first (or rather third) meetup was scheduled.

Boost Database Security: Restrict Users to Read Replicas
Posted by Robins Tharakan on 2024-04-28 at 04:57
Only Allow Login to Read-Replicas and Standbys

When you're working with large databases in production, it is incredibly common to use read-replicas to improve performance. These read-replicas are a copy of your primary (main) database and let your applications offload read-heavy queries, which in turn reduces strain on your primary database, effectively making the application faster and

PG Phriday: Papa's Got a Brand New RAG
Posted by Shaun M. Thomas in Tembo on 2024-04-26 at 12:00
Remember this guy? AI is all anyone talks about these days, isn’t it? Even when it comes to databases like Postgres, devs are finding new and innovative ways to leverage LLMs in everyday use cases. Can you really blame them though? This is an exciting new technology that will transform how we work and potentially society as a whole once it finally matures. We even covered building a crude RAG app a few short weeks ago.

Postgres Bloat Minimization
Posted by Pavel Borisov in Supabase on 2024-04-26 at 00:00
Understanding and minimizing Postgres table bloat

Mini Summit Four
Posted by David Wheeler in Tembo on 2024-04-25 at 22:40

My thanks to Jonathan Katz for his presentation, “Trusted Language Extensions for PostgreSQL”, at last week’s Postgres Extension Ecosystem Mini-Summit. As usual I’ve collected the transcript here interspersed with comments from the chat window. First, links!

And now, rivers of text!

Introduction

Presentation

  • Thank you for having me. I’m very excited to talk about this, and extensions in general. I see a lot of folks here I know and recognize and some new folks or folks I’ve not met in person yet.

  • Borrowed from the original presentation on TLEs from November of 2022, to level set on why we built it. I know this is a slightly more advanced audience, so some stuff that might seem relatively introductory to some of you, though there is some material on the internals of extensions.

  • The premise is why we decided to build TLEs, what were the problems we’re trying to solve. Part of it is just understanding Postgres extensions. In general this group is very familiar with Extensions but there are two points I want to hit on.

  • One of the things that excites me most about Postgres is that, when you look back at Postgres as the Berkeley database project researching how to create an object-relational database, an accidental or intentional feature is not just that Postgres is an object-relational database, but that Postgres is an extensible database, built from the get-go to be able to add functionality without necessarily having to fork it.

  • Early on you’d have to Fork the database to add additional functionality, but the beauty of the Postgres design was the ability to keep adding functionality without forking.

[...]

7 considerations for PCI DSS compliance in PostgreSQL
Posted by Umair Shahid in Stormatics on 2024-04-25 at 09:08

Learn how to ensure PCI DSS compliance in your PostgreSQL database with these 7 crucial considerations.

The post 7 considerations for PCI DSS compliance in PostgreSQL appeared first on Stormatics.

Test and Release pgrx Extensions with pgxn-tools
Posted by David Wheeler in Tembo on 2024-04-24 at 19:34

Yesterday I released v1.6.0 of the pgxn/pgxn-tools Docker image with a new command: pgrx-build-test works much like the existing pg-build-test utility for PGXS extensions, but for pgrx extensions. Here’s an example from pg-jsonschema-boon, a pgrx extension I’ve been working on:

name: 🧪 Test
on:
  push:
jobs:
  test:
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    strategy:
      matrix:
        pg: [11, 12, 13, 14, 15, 16]
    name: 🐘 Postgres ${{ matrix.pg }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Start PostgreSQL ${{ matrix.pg }}
        run: pg-start ${{ matrix.pg }}
      - name: Setup Rust Cache
        uses: Swatinem/rust-cache@v2
      - name: Test on PostgreSQL ${{ matrix.pg }}
        run: pgrx-build-test

The format is the same as for pg-build-test, starting with installing a specific version of Postgres from the Postgres Apt repository (supporting versions 8.2 – 17). It then adds the Swatinem/rust-cache action to speed up Rust builds by caching dependencies, and then simply calls pgrx-build-test instead of pg-build-test. Here’s what it does:

  • Extracts the pgrx version from the Cargo.toml file and installs it (requires v0.11.4 or higher)
  • Initializes pgrx to use the Postgres installed by pg-start
  • Builds the extension with cargo pgrx package
  • Tests the extension with cargo pgrx test
  • Installs the extension with cargo pgrx install
  • Checks for a Makefile with installcheck configured and, if it exists, runs make installcheck

This last step allows one to include PGXS-style pg_regress tests in addition to Rust/pgrx tests, as pg-jsonschema-boon does. Here’s a successful run.

Special thanks to Eric Ridge and @Jubilee for all the help and improvements in pgrx v0.11.4 that enable this to work transparently.

pgrx Release Pattern

The pattern for releasing a pgrx extens

[...]

PostgreSQL's memory allocations
Posted by Ashutosh Bapat on 2024-04-24 at 14:27

There's a thread on hackers about recovering memory consumed by paths. A reference count is maintained in each path. Once paths are created for all the upper-level relations that a given relation participates in, any unused paths, i.e. those whose reference count is 0, are freed. This adds extra code and CPU cycles to traverse the paths, maintain the reference counts, and free the paths. Yet the patch did not show any performance degradation. I was curious to know why, so I ran a small experiment.

Experiment

I wrote an extension palloc_test which adds two SQL-callable functions, palloc_pfree() and mem_context_free(), written in C. Function definitions can be found here. The first function, palloc_pfree(), pallocs some memory and then pfrees it immediately. The other function just pallocs and never pfrees, assuming that the memory will be freed when the per-tuple memory context is freed. Both functions take the number of iterations and the size of memory allocated in each iteration as inputs, and they return the amount of time taken to execute the allocation loop. Since the first function spends CPU cycles to free memory and the second one doesn't, the first should be slower than the second.
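
Based on that description, exercising the functions would look something like this (argument order as described above; the exact signatures live in the linked repository):

-- 100,000 iterations, allocating 100 bytes each
SELECT palloc_pfree(100000, 100);       -- palloc, then immediate pfree
SELECT mem_context_free(100000, 100);   -- palloc only; freed at context reset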

Results

The table below shows the amount of time reported by the respective functions to execute the loop as many times as the value in the first column, each iteration allocating 100 bytes. The figure shows the same as a plot. The time taken to finish the loop increases linearly for both functions, indicating that the palloc logic is O(n) in the number of allocations. But the lines cross each other around 300K allocations.

count    palloc_pfree   memory context reset
100      0.0029         0.007124
100100   2.5646         5.079862
200100
[...]

Announcing PGDSAT to satisfy CIS benchmarks for PostgreSQL
Posted by Avi Vallarapu on 2024-04-23 at 20:40

In an era where data breaches are both costly and damaging to reputations, implementing best practices and robust security measures is crucial. PostgreSQL has emerged as one of the most successful and highly adopted open-source databases, competing with the enterprise standards of Oracle and SQL Server. Its increasing popularity is evident from DB-Engines rankings, Stack Overflow […]

The post Announcing PGDSAT to satisfy CIS benchmarks for PostgreSQL appeared first on HexaCluster.

PGXN V2 Architecture
Posted by David Wheeler in Tembo on 2024-04-23 at 20:08
[Diagram: the extension distribution ecosystem vision. “Root Registry” sits at the center with bidirectional links to “Web UX”, “Client”, “Packaging”, and “Interactions”; “Packaging” and “Interactions” are also linked to each other, and “Stats & Reports” points to “Interactions” and the “Root Registry”.]

PGXN Future Architecture

High-level diagram of the six logical services making up the proposed future extension distribution architecture. The Root Registry sits at the center, providing APIs for the other services to consume for their own use cases. Trusted instances of those services submit additional data about extensions via the Interactions service to enhance and enrich the service to better inform and delight users.

Over on the Postgres Wiki I’ve published a new document for the PGXN v2 project: PGXN v2 Architecture. It has diagrams, such as the one above! From the introduction:

This document outlines the project to build extension distribution, discovery, and packaging tools and services to power the growth, accessibility, and utility of the Postgres extension ecosystem. Taking the overall Postgres community as its audience, it defines the services to be provided and the architecture to run them, as well as the strategic vision to guide project planning and decision-making.

With the goal to think strategically and plan pragmatically, this document describes the former to enable the latter. As such, it is necessarily high-level; details, scoping, and planning will be surfaced in more project-focused documents.

Bear in mind that this document outlines an ambitious, long-term strategy. If you’re thinking that there’s too much here, that we’re over-thinking and over-designing the system, rest assured that project execution will be fundamentally incremental and pragmatic. This document is the guiding light for the project, and subject to change as development proceeds and new wrinkles arise.

For those of you interested in the future of Postgres extension distribution, please give it a read! I expect it to guide the planning and implementation of the new services and tools in the coming year. Please do consider it a living document, howeve

[...]

Installing pg_tle on Ubuntu: A Quick Guide
Posted by Robins Tharakan on 2024-04-23 at 13:25
Compile & Install pg_tle on Postgres

PostgreSQL is a powerful database, but sometimes you want to extend its functionality with custom features, and that's where extensions like pg_tle (Trusted Language Extensions) come into play. If you're new to pg_tle, here's a quick round-up of why it makes sense for you - see Unlock PostgreSQL Superpowers with pg_tle. Given the power of

DBaaG with SQL/PGQ
Posted by Ashutosh Bapat on 2024-04-23 at 09:44

For those who have studied ERD lore, it's not new that a relational database is very much like a graph. But it has taken SQL more than 30 years since it became a standard, and almost half a century since its inception, to incorporate constructs that allow a DataBase to be treated as a Graph, DBaaG. This is surprising, given that SQL was developed as a language for relational databases, which are modeled using ER diagrams. Better late than never: SQL/PGQ has arrived as the 16th part of SQL:2023.

Entity Relationship Diagram, ERD for short, is a tool to model and visualize a database as entity types (which classify the things of interest) and the relationships that can exist between them. Entity types and relationships both map to relations in a Relational DataBase Management System (RDBMS for short). The rows in the relations represent entities (instances of entity types) and relationships between entities, respectively. Fig. 1 below shows an ERD for a hypothetical shop.

This diagram very much looks like a graph, with entity types represented as nodes and relationships represented by edges. That's exactly what SQL/PGQ is about. It adds language constructs to SQL to present the underlying database as a "Property Graph". For example, the property graph definition corresponding to the above ERD would look like:

CREATE PROPERTY GRAPH shop
VERTEX TABLES (
    CreditCard label Payment,
    BankAccount label Payment,
    Person label Customer,
    Company label Customer,
    Trust label Customer,
    Wishlist label ProdLink,
    Order label ProdLink,
    Product)
EDGE TABLES (
    CCOwns label Owns,
    BAHolds label Owns,
    CustOrders label CustLink,
    CustWishlist label CustLink,
    CompanyOrders label CustLink,
    CompanyWishlist label CustLink,
    TrustOrders label CustLink,
    TrustWishlist label CustLink,
    OrderCCPayment label OrderPayment,
    OrderBAPayment label OrderPayment,
    OrderItems label ItemLink,
    Wishl
[...]
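
Once such a graph is defined, SQL/PGQ queries it through the new GRAPH_TABLE construct. A rough sketch of the standard's syntax, using the labels from the definition above (the property names are my invention, and PostgreSQL does not implement this yet):

SELECT g.customer, g.product
FROM GRAPH_TABLE (shop
    MATCH (c IS Customer)-[IS CustLink]->(p IS ProdLink)
    COLUMNS (c.name AS customer, p.item AS product)
) AS g;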

PL/pgSQL Conversion Gotchas: Functions with OUT parameter and return type
Posted by Deepak Mahto on 2024-04-23 at 06:11

If you’re fascinated by the internals of PL/pgSQL processing and want to handle it efficiently, especially if you’re migrating procedural logic from another database, be sure to check out the previous posts in the series on PL/pgSQL Conversion Gotchas – Part 1 and Part 2

In Oracle, we often build multiple pieces of functionality within procedural blocks, either in functions or procedures. In this blog series, we will explore some of the gotchas that we need to take care of.

Migrating Functions with Out Parameters and Return Types

Oracle allows functions to have arguments defined as OUT parameters while also returning a value, such as an execution status, through the RETURN clause. Though this is not best practice, when migrating legacy code bases we encounter all types of scenarios.

Below is one such sample function with out params and return type.

create or replace function func_test1
(var1 in integer,var2 out integer )
return integer
as
begin
var2 := var1 + 1;
return var2*2;
end;

If we run it in Oracle, it will produce two values: one as an out parameter and another as a return value:

set serveroutput on;
declare
  l_out_var2 integer; 
  l_out_ret integer;  
begin
  l_out_ret := func_test1(1,l_out_var2 );
  dbms_output.put_line('Out Value - ' || l_out_var2);
  dbms_output.put_line('Return Value - ' || l_out_ret);
end;
/

Let’s attempt to convert the sample function to PostgreSQL using the PL/pgSQL language, aiming to build the same logic as in the Oracle sample: a function with both an out parameter and a return type.

create or replace function func_test1
(var1 in integer,var2 out integer )
returns integer
language plpgsql
as
$$
begin
var2 := var1 + 1;
return var2*2;
end;
$$;

Let’s proceed with deploying it within the PostgreSQL database.
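
From general PL/pgSQL behavior (the excerpt truncates before showing the outcome, so treat this as my annotation): the deployment fails, because a PL/pgSQL function with OUT parameters may not RETURN an expression. The error is along the lines of:

-- ERROR:  RETURN cannot have a parameter in function with OUT parameters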

In PostgreSQL, when there’s a requirement to return multiple values, we typically transform them into record types with attributes corresponding t

[...]
