Latest Blog Posts

Saturation Arithmetic with a PostgreSQL Extension
Posted by Artur Zakirov in Adjust on 2024-05-03 at 08:00
Introduction

In certain situations, it can be beneficial to ignore integer overflow errors. Consider, for instance, an integer table column that typically accommodates small values. On rare occasions, a significantly larger value might be inserted. Attempting to aggregate these values could lead to an integer overflow error, causing the entire aggregate query to fail, which is far from ideal. Handling large values on the client or worker side, where the data is inserted, is one option.
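The failure is easy to reproduce in stock PostgreSQL, which is what motivates a saturating alternative (a sketch; the extension's actual operator and function names are not shown in this excerpt):

```sql
-- Stock PostgreSQL raises an error when int4 arithmetic overflows:
SELECT 2147483647::int4 + 1;
-- ERROR:  integer out of range

-- Saturation arithmetic would instead clamp the result at the type's
-- maximum (2147483647 here), so an aggregate over mixed small and
-- huge values completes instead of failing.
```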

Conversion Gotchas: Implicit Conversion in Oracle to PostgreSQL Migration
Posted by Deepak Mahto on 2024-05-03 at 05:51

Introduction – Implicit Conversion

Oracle to PostgreSQL migration is a playground that uncovers and touches upon many database concepts, which are always intriguing and fun to explore. Implicit Conversion, i.e., imposing automatic conversion on data types to make them comparable by database optimizers, is also a concept frequently encountered in database migrations. Implicit conversion allows for the automatic conversion of data types for an expression or condition when necessary for SQL execution, thus preventing failures.

Implicit conversion can be a performance issue in many cases, but it’s still common to encounter it in databases due to various real-world constraints. In some instances, we might utilize function-based indexes to mitigate this issue as well. In this blog, we will explore the differences in implicit conversion between Oracle and PostgreSQL and discuss methods to mitigate it. Our focus is not on comparing which database engine makes ideal choices regarding implicit conversion, but rather on achieving similar functionality in PostgreSQL when needed, such as comparing two different data types with implicit conversions.

Oracle – Implicit Conversion Observations

Let’s begin by creating some mock tables in Oracle to understand implicit conversion characteristics. In our examples, we will focus on equality comparison between VARCHAR or TEXT datatypes and NUMBER family data types.

create table tstimplicitconv (col1 number, col2 number(6), col3 float , 
col4 varchar2(100));
insert into tstimplicitconv values (1,1,1.1,'1');
commit;

explain plan for select 1 from tstimplicitconv where col1 = col4;
select * from table(dbms_xplan.display);

explain plan for select 1 from tstimplicitconv where col4 = col2;
select * from table(dbms_xplan.display);

Tabular representation of implicit conversion behavior in Oracle for equality operator.

Operator Auto Conversion(YES/NO) Implicit Con
[...]
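For contrast, PostgreSQL refuses this particular mix of types outright; a sketch of the equivalent comparison, adapting the mock table above to PostgreSQL types:

```sql
-- PostgreSQL version of the mock table
create table tstimplicitconv (col1 numeric, col2 numeric(6,0),
                              col3 float, col4 varchar(100));
insert into tstimplicitconv values (1, 1, 1.1, '1');

-- Comparing varchar with numeric fails: there is no implicit cast.
select 1 from tstimplicitconv where col4 = col1;
-- ERROR:  operator does not exist: character varying = numeric

-- An explicit cast (optionally backed by an expression index) works:
select 1 from tstimplicitconv where col4::numeric = col1;
```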

Bringing IvorySQL to Neon Autoscaling Platform
Posted by David Z in Highgo Software on 2024-05-02 at 17:02

1. Overview

In this blog post, we will guide you through the process of integrating IvorySQL, an open-source database built on PostgreSQL, into Neon Autoscaling Platform. Throughout this guide, we’ll walk you through each step, providing clear instructions and demonstrations.

2. What is IvorySQL

“IvorySQL is advanced, fully featured, open source Oracle compatible PostgreSQL with a firm commitment to always remain 100% compatible and a Drop-in replacement of the latest PostgreSQL. IvorySQL adds a “compatible_db” toggle switch to switch between Oracle and PostgreSQL compatibility modes. One of the highlights of IvorySQL is PL/iSQL procedural language that supports oracle’s PL/SQL syntax and Oracle style Packages.” [1]

The general architecture of IvorySQL is shown below. For more details, you can refer here.

IvorySQL Key features:

  • Powered by PostgreSQL
    Leveraging the robust foundation of PostgreSQL, IvorySQL inherits its comprehensive SQL capabilities, rock-solid reliability, and access to a vast ecosystem of tools and extensions.

  • Oracle Compatibility
    IvorySQL introduces a unique “compatible_db” toggle switch, facilitating effortless transitions between Oracle and PostgreSQL compatibility modes. It supports Oracle’s PL/SQL syntax and includes Oracle-style Packages, making migration from Oracle databases a smooth process.

  • Customization
    Users have the freedom to customize IvorySQL to suit their specific requirements. Simply download the source code and tailor it to your preferences.

  • Open Source
    As an Apache 2 licensed project, IvorySQL is freely available for download and use, fostering a collaborative development environment.

  • Open Community
    A creative and inclusive community that encourages collaboration and innovation.

  • Backed by HighGo
    Supported by HighGo, a leading provider of PostgreSQL databases, IvorySQL benefits from expert guidance and support to ensure its con
[...]

What's new in pgvector v0.7.0
Posted by Pavel Borisov in Supabase on 2024-05-02 at 00:00
Exploring new features in pgvector v0.7.0

Hacking on PostgreSQL is Really Hard
Posted by Robert Haas in EDB on 2024-05-01 at 18:05

Hacking on PostgreSQL is really hard. I think a lot of people would agree with this statement, not all for the same reasons. Some might point to the character of discourse on the mailing list, others to the shortage of patch reviewers, and others still to the difficulty of getting the attention of a committer, or of feeling like a hostage to some committer's whimsy. All of these are problems, but today I want to focus on the purely technical aspect of the problem: the extreme difficulty of writing reasonably correct patches.

Read more »

LDAP Authentication in PgBouncer Through PAM
Posted by Jobin Augustine in Percona on 2024-04-30 at 14:46
There are many cases where external connection poolers like pgBouncer become unavoidable despite the costs and complexities associated with them. PgBouncer is one of the most popular external connection poolers for PostgreSQL. It is thin and lightweight, so it doesn’t have built-in authentication features like LDAP, which is essential for many enterprises. Luckily, pgBouncer has […]
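The excerpt cuts off, but the mechanism in the title rests on PgBouncer's PAM support; a minimal configuration sketch (PgBouncer must be compiled with --with-pam, and the PAM service must be wired to LDAP separately, e.g. via pam_ldap or sssd):

```ini
; pgbouncer.ini fragment: hand authentication over to PAM,
; which in turn can be pointed at LDAP
[pgbouncer]
auth_type = pam
```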

Mini Summit: Universally Buildable Extensions
Posted by David Wheeler in Tembo on 2024-04-30 at 13:44

Well that snuck up on me. Tomorrow, May 1 2024, Yurii Rashkovskii of Omnigres will be giving a presentation at the fifth installment of the Postgres extension ecosystem mini-summit. The talk, “Universally buildable extensions: dev to prod”, should provoke some interesting asymmetrical thinking and discussion. I hope to see you there.

Note! If you reserved a spot at a prior mini-summit, you will need to reserve again for each subsequent event, or you will get no reminders from Eventbrite. If it’s sold out, just email david@ this domain, or ping me on Mastodon or via the #extensions channel on the Postgres Slack or the Postgres Discord, for a link or, if you’d like, a calendar invitation.

Logging Basics for PostgreSQL
Posted by muhammad ali in Stormatics on 2024-04-30 at 12:37

Explore foundational parameters for maximizing the utility of PostgreSQL logs.

The post Logging Basics for PostgreSQL appeared first on Stormatics.

The 150x pgvector speedup: a year-in-review
Posted by Jonathan Katz on 2024-04-30 at 00:00

I wanted to write a “year-in-review” covering all the performance improvements pgvector has made (with significant credit to Andrew Kane), highlighting specific areas where pgvector has improved (including one 150x improvement!) and areas where we can continue to do better.

A few weeks ago, I started outlining this post and began collecting the data. While I was working on this over a two-week period, no fewer than three competitive benchmarks against pgvector were published. To me, this is a testament both to how well pgvector handles vector workloads (and, by extension, PostgreSQL too) and to the fact that people are using it as the baseline against which to compare their vector search systems.

Some of these benchmarks did contain info that identified areas we can continue to improve both PostgreSQL and pgvector, but I was generally disappointed in the methodology used to make these comparisons. Of course I’d like to see pgvector perform well in benchmarks, but it’s important to position technologies fairly and be vocally self-critical on where your system can improve to build trust in what you’re building.

I have a separate blog post planned for how to best present benchmark studies between different systems for vector similarity search (it’s a topic I’m interested in). Today though, I want to compare pgvector against itself, and highlight areas it’s improved over the past year, and where the project can continue to go and grow.

How I ran these tests

An important aspect of any benchmark is transparency. First, I’ll discuss the test methodology I used, describe the test environment setup (instances, storage, database configuration), and then discuss the results. If you’re not interested in this part, you can skip ahead to “The 150x pgvector speedup”, but this information can help you with your own testing!

First, what are we testing for? We’ll be looking at these specific attributes in these tests:

  • Recall: A measurement of the relevancy of our results - what percentage of the expected results are returned dur
[...]

Hey Chicago – We did it!!!
Posted by Henrietta Dombrovskaya on 2024-04-29 at 04:16

The PG Day Chicago 2024 is over. For the last two days, I have been reading my LinkedIn and X feeds, which are full of mentions of PG Day Chicago. Everyone is happy and thankful.

Although I am busy “liking” all of these posts—not just “liking” but truly liking them—I didn’t blog anything myself. Two days later, I finally organized my thoughts to talk about that event.

I want to start by saying a big thank you to everyone involved: organizers, speakers, sponsors, volunteers, and attendees! You were all essential to the conference’s success. And although there are definitely some things that I did wrong, I wanted to focus on things that I did for the first time this year and which, I hope, were done right.

  • The best decision I made was inviting Karen Jex to be a CfP committee chair. Her support was incredible, and I am forever thankful to her! (Of course, she’ll have to be the CfP chair again :)).
  • We had fourteen student volunteers in addition to experienced staff, and everybody mentioned their excellent work
  • Three participants received Travel scholarships from Pg.Us Diversity and Inclusion committee
  • We distributed 6 vouchers from Postgres Women
  • We were able to redistribute six sponsors’ vouchers that were not going to be used, passing them on to students.
  • … and all of the students were very thankful for the opportunity!

And that’s what makes me especially happy: the number of participants for whom it was the first Postgres conference ever and who recognized it as a unique opportunity to get to know the Postgres community, to be exposed to the incredible amount of knowledge, and to get inspired.

PostgreSQL supported platforms over time
Posted by Peter Eisentraut in EDB on 2024-04-29 at 04:00

The recent discussion about AIX support in PostgreSQL (as of now removed in PostgreSQL 17) led me to look through the project’s history, to learn what platforms we have supported when.

In this context, “platform” really means operating system. One useful proxy is looking at the files in src/template/, because every supported platform needs to be listed there. There are other dimensions, such as what CPU architectures are supported. Historically, this was tied to the operating system (e.g., HP-UX/HPPA, Alpha/Tru64, AIX/PPC, Irix/MIPS), but nowadays, the combinations are more plentiful. CPU support in PostgreSQL might be the subject of a future article.

As usual, at some point in the distant past, this was all very murky and hard to decipher from the source code history, so let’s start with …

  • PostgreSQL 6.5 (1999) was the first mention in the source code of support for building on Windows, using MinGW (essentially GCC). I wonder what you could do with this then? It’s listed in the release notes! But official full Windows support didn’t come until much later, so this must have been quite limited.

  • PostgreSQL 7.1 (2001) added QNX, BeOS, and macOS (a.k.a. Darwin). With hindsight, one of these is not like the other!

At this point, PostgreSQL supported a formidable 22 platforms.

Also, the PostgreSQL build farm started in 2004. Over time, we have developed a quasi-policy that a platform needs to be represented in the build farm to be considered officially supported.

  • PostgreSQL 8.0 (2005) added native Windows server. This completed the work started around PostgreSQL 6.5.

  • PostgreSQL 8.2 (2006) removed QNX and BeOS. It added support for building for Windows using MSVC (Microsoft’s Visual C compiler suite).

  • PostgreSQL 9.2 (2012) began the great purges. This release removed DG/UX, NeXTSTEP, SunOS 4 (pre-Solaris), SVR 4, Ultrix, Univel, and BSDi. For some of these, I don’t even remember what they were. Al

[...]

Hamburg PostgreSQL Meetup in April 2024
Posted by Andreas Scherbaum on 2024-04-28 at 22:08

On April 23rd - after a very long break - the third PostgreSQL Meetup took place in Hamburg.

Sascha had temporarily taken over the Meetup group after the previous admins were no longer available. Last year at PGConf.DE 2023 I asked if there is anyone interested who would like to revive the group. Tobias had agreed to do so, and Lenz also joined the effort at the PGConf.EU 2023. After a few organizational details, the first (or rather third) meetup was scheduled.

Boost Database Security: Restrict Users to Read Replicas
Posted by Robins Tharakan on 2024-04-28 at 04:57
Only Allow Login to Read-Replicas and Standbys

When you’re working with large databases in production, it is incredibly common to use read replicas to improve performance. These read replicas are a copy of your primary (main) database and let your applications offload read-heavy queries, which in turn reduces strain on your primary database, effectively making the application faster and
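The exact mechanism isn't visible in this excerpt, but the building block PostgreSQL itself provides is pg_is_in_recovery(), which tells a session whether it landed on a standby (a hedged sketch):

```sql
-- true on a standby / read replica, false on the primary
select pg_is_in_recovery();
```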

PG Phriday: Papa's Got a Brand New RAG
Posted by Shaun M. Thomas in Tembo on 2024-04-26 at 12:00
Remember this guy? AI is all anyone talks about these days, isn’t it? Even when it comes to databases like Postgres, devs are finding new and innovative ways to leverage LLMs in everyday use cases. Can you really blame them though? This is an exciting new technology that will transform how we work and potentially society as a whole once it finally matures. We even covered building a crude RAG app a few short weeks ago.

Postgres Bloat Minimization
Posted by Pavel Borisov in Supabase on 2024-04-26 at 00:00
Understanding and minimizing Postgres table bloat

Mini Summit Four
Posted by David Wheeler in Tembo on 2024-04-25 at 22:40

My thanks to Jonathan Katz for his presentation, “Trusted Language Extensions for PostgreSQL”, at last week’s Postgres Extension Ecosystem Mini-Summit. As usual I’ve collected the transcript here interspersed with comments from the chat window. First, links!

And now, rivers of text!

Introduction

Presentation

  • Thank you for having me. I’m very excited to talk about this, and extensions in general. I see a lot of folks here I know and recognize and some new folks or folks I’ve not met in person yet.

  • Borrowed from the original presentation on TLEs from November of 2022, to level set on why we built it. I know this is a slightly more advanced audience, so some stuff that might seem relatively introductory to some of you, though there is some material on the internals of extensions.

  • The premise is why we decided to build TLEs, what were the problems we’re trying to solve. Part of it is just understanding Postgres extensions. In general this group is very familiar with Extensions but there are two points I want to hit on.

  • One of the things that excites me most about Postgres is that, when you look back at Postgres as the Berkeley database project researching how to create an object relational database, an accidental or intentional features is not just that Postgres is an object-relational database, but that Postgres is an extensible database, built from the get-go to be able to add functionality without necessarily having to fork it.

  • Early on you’d have to Fork the database to add additional functionality, but the beauty of the Postgres design was the ability to keep adding functionality without forking.

[...]

7 considerations for PCI DSS compliance in PostgreSQL
Posted by Umair Shahid in Stormatics on 2024-04-25 at 09:08

Learn how to ensure PCI DSS compliance in your PostgreSQL database with these 7 crucial considerations.

The post 7 considerations for PCI DSS compliance in PostgreSQL appeared first on Stormatics.

Test and Release pgrx Extensions with pgxn-tools
Posted by David Wheeler in Tembo on 2024-04-24 at 19:34

Yesterday I released v1.6.0 of the pgxn/pgxn-tools Docker image with a new command: pgrx-build-test, which works much like the existing pg-build-test utility for PGXS extensions, but for pgrx extensions. Here’s an example from pg-jsonschema-boon, a pgrx extension I’ve been working on:

name: 🧪 Test
on:
  push:
jobs:
  test:
    runs-on: ubuntu-latest
    container: pgxn/pgxn-tools
    strategy:
      matrix:
        pg: [11, 12, 13, 14, 15, 16]
    name: 🐘 Postgres ${{ matrix.pg }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Start PostgreSQL ${{ matrix.pg }}
        run: pg-start ${{ matrix.pg }}
      - name: Setup Rust Cache
        uses: Swatinem/rust-cache@v2
      - name: Test on PostgreSQL ${{ matrix.pg }}
        run: pgrx-build-test

The format is the same as for pg-build-test, starting with installing a specific version of Postgres from the Postgres Apt repository (supporting versions 8.2 – 17). It then adds the Swatinem/rust-cache action to speed up Rust builds by caching dependencies, and then simply calls pgrx-build-test instead of pg-build-test. Here’s what it does:

  • Extracts the pgrx version from the Cargo.toml file and installs it (requires v0.11.4 or higher)
  • Initializes pgrx to use the Postgres installed by pg-start
  • Builds the extension with cargo pgrx package
  • Tests the extension with cargo pgrx test
  • Installs the extension with cargo pgrx install
  • Checks for a Makefile with installcheck configured and, if it exists, runs make installcheck

This last step allows one to include PGXS-style pg_regress tests in addition to Rust/pgrx tests, as pg-jsonschema-boon does. Here’s a successful run.

Special thanks to Eric Ridge and @Jubilee for all the help and improvements in pgrx v0.11.4 that enable this to work transparently.

pgrx Release Pattern

The pattern for releasing a pgrx extens

[...]

PostgreSQL's memory allocations
Posted by Ashutosh Bapat on 2024-04-24 at 14:27

There's a thread on hackers about recovering memory consumed by paths. A reference count is maintained in each path. Once paths are created for all the upper level relations that a given relation participates in, any unused paths, for which reference count is 0, are freed. This adds extra code and CPU cycles to traverse the paths, maintain reference counts and free the paths. Yet, the patch did not show any performance degradation. I was curious to know why. I ran a small experiment.

Experiment

I wrote an extension, palloc_test, which adds two SQL-callable functions, palloc_pfree() and mem_context_free(), written in C. Function definitions can be found here. The first function palloc's some memory and then pfree's it immediately. The other function just palloc's but never pfree's, assuming that the memory will be freed when the per-tuple memory context is freed. Both functions take as inputs the number of iterations and the size of memory allocated in each iteration. They return the amount of time taken to execute the allocation loop. Since the first function spends CPU cycles to free memory and the second one doesn't, the first should be slower than the second.

Results

The table below shows the amount of time reported by the respective functions to execute the loop as many times as the value in the first column, each iteration allocating 100 bytes. The figure shows the same as a plot. The time taken to finish the loop increases linearly for both functions, indicating that the palloc logic is O(n) in the number of allocations. But the lines cross each other at around 300K allocations.


 
count    palloc_pfree    memory context reset
100      0.0029          0.007124
100100   2.5646          5.079862
200100
[...]

Announcing PGDSAT to satisfy CIS benchmarks for PostgreSQL
Posted by Avi Vallarapu on 2024-04-23 at 20:40

In an era where data breaches are both costly and damaging to reputations, implementing best practices and robust security measures is crucial. PostgreSQL has emerged as one of the most successful and highly adopted open-source databases, competing with the enterprise standards of Oracle and SQL Server. Its increasing popularity is evident from DBEngine rankings, Stackoverflow […]

The post Announcing PGDSAT to satisfy CIS benchmarks for PostgreSQL appeared first on HexaCluster.

PGXN V2 Architecture
Posted by David Wheeler in Tembo on 2024-04-23 at 20:08
Diagram of the extension distribution ecosystem vision, featuring “Root Registry” in the center and bidirectional lines to four of the surrounding nodes: “Web UX”, “Client”, “Packaging”, and “Interactions”. The “Packaging” and “Interactions” boxes also have a bi-directional arrow between them, while the fifth box, “Stats & Reports”, has a bi–directional arrow pointing to “Interactions” and another arrow pointing to “Root Registry”.

PGXN Future Architecture

High-level diagram of the six logical services making up the proposed future extension distribution architecture. The Root Registry sits at the center, providing APIs for the other services to consume for their own use cases. Trusted instances of those services submit additional data about extensions via the Interactions service to enhance and enrich the service to better inform and delight users.

Over on the Postgres Wiki I’ve published a new document for the PGXN v2 project: PGXN v2 Architecture. It has diagrams, such as the one above! From the introduction:

This document outlines the project to build extension distribution, discovery, and packaging tools and services to power the growth, accessibility, and utility of the Postgres extension ecosystem. Taking the overall Postgres community as its audience, it defines the services to be provided and the architecture to run them, as well as the strategic vision to guide project planning and decision-making.

With the goal to think strategically and plan pragmatically, this document describes the former to enable the latter. As such, it is necessarily high-level; details, scoping, and planning will be surfaced in more project-focused documents.

Bear in mind that this document outlines an ambitious, long-term strategy. If you’re thinking that there’s too much here, that we’re over-thinking and over-designing the system, rest assured that project execution will be fundamentally incremental and pragmatic. This document is the guiding light for the project, and subject to change as development proceeds and new wrinkles arise.

For those of you interested in the future of Postgres extension distribution, please give it a read! I expect it to guide the planning and implementation of the new services and tools in the coming year. Please do consider it a living document, howeve

[...]

Installing pg_tle on Ubuntu: A Quick Guide
Posted by Robins Tharakan on 2024-04-23 at 13:25
Compile & Install pg_tle on Postgres

PostgreSQL is a powerful database, but sometimes you want to extend its functionality with custom features, and that's where extensions like pg_tle (Trusted Language Extensions) come into play. If you're new to pg_tle, here's a quick round-up of why it makes sense for you - see Unlock PostgreSQL Superpowers with pg_tle. Given the power of

DBaaG with SQL/PGQ
Posted by Ashutosh Bapat on 2024-04-23 at 09:44

For those who have studied ERD lore, it's not news that a relational database is very much like a graph. But it has taken SQL more than 30 years since it became a standard, and almost half a century since its inception, to incorporate constructs that allow a DataBase to be treated as a Graph, DBaaG. This is surprising given that SQL was developed as a language for relational databases, which are modeled using ER diagrams. Better late than never: SQL/PGQ has arrived as the 16th part of SQL:2023.

An Entity Relationship Diagram, ERD for short, is a tool to model and visualize a database as entity types (which classify the things of interest) and the relationships that can exist between them. Entity types and relationships both map to relations in a Relational DataBase Management System (RDBMS for short). The rows in those relations represent entities (instances of entity types) and relationships between entities, respectively. Fig. 1 below shows an ERD for a hypothetical shop.

This diagram very much looks like a graph, with entity types represented as nodes and relationships represented by edges. That's exactly what SQL/PGQ is about. It adds language constructs to SQL to present the underlying database as a "Property Graph". For example, the property graph definition corresponding to the above ERD would look like:

CREATE PROPERTY GRAPH shop
VERTEX TABLES (
    CreditCard label Payment,
    BankAccount label Payment,
    Person label Customer,
    Company label Customer,
    Trust label Customer,
    Wishlist label ProdLink,
    Order label ProdLink,
    Product)
EDGE TABLES (
    CCOwns label Owns,
    BAHolds label Owns,
    CustOrders label CustLink,
    CustWishlist label CustLink,
    CompanyOrders label CustLink,
    CompanyWishlist label CustLink,
    TrustOrders label CustLink,
    TrustWishlist label CustLink,
    OrderCCPayment label OrderPayment,
    OrderBAPayment label OrderPayment,
    OrderItems label ItemLink,
    Wishl
[...]
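Once defined, such a graph is queried with the new GRAPH_TABLE construct. A hedged sketch, reusing labels from the definition above (the COLUMNS expressions are invented for illustration, and this is standard SQL:2023 syntax that PostgreSQL itself does not yet implement):

```sql
-- Find customer/product pairs by traversing CustLink edges
-- from Customer vertices to ProdLink vertices.
SELECT customer, item
FROM GRAPH_TABLE (shop
       MATCH (c IS Customer)-[IS CustLink]->(p IS ProdLink)
       COLUMNS (c.name AS customer, p.id AS item));
```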

PL/pgSQL Conversion Gotchas: Functions with Out Parameter and Return Type
Posted by Deepak Mahto on 2024-04-23 at 06:11

If you’re fascinated by the internals of PL/pgSQL processing and want to handle it efficiently, especially if you’re migrating procedural logic from another database, be sure to check out the previous posts in the series on PL/pgSQL Conversion Gotchas – Part 1 and Part 2

In Oracle, we used to build multiple functionalities within procedural blocks, either in functions or procedures. In these blogs, we will explore some of the gotchas that we need to take care of.

Migrating Functions with Out Parameters and Return Types

Oracle allows functions to have arguments defined as out parameters and also return additional status arguments or execution statuses as part of function execution. Though it is not best practice, when migrating legacy code bases, we encounter all types of scenarios.

Below is one such sample function with out params and return type.

create or replace function func_test1
(var1 in integer,var2 out integer )
return integer
as
begin
var2 := var1 + 1;
return var2*2;
end;

If we run it in Oracle, it will produce two values: one as an out parameter and another as a return value.

set serveroutput on;
declare
  l_out_var2 integer; 
  l_out_ret integer;  
begin
  l_out_ret := func_test1(1,l_out_var2 );
  dbms_output.put_line('Out Value - ' || l_out_var2);
  dbms_output.put_line('Return Value - ' || l_out_ret);
end;
/

Let’s attempt to convert the sample function to PostgreSQL using the PL/pgSQL language. As a reference, we’ll aim to build similar code logic as in the Oracle sample, creating a function in PostgreSQL with PL/pgSQL that includes both an out parameter and a return type.

create or replace function func_test1
(var1 in integer,var2 out integer )
returns integer
language plpgsql
as
$$
begin
var2 := var1 + 1;
return var2*2;
end;
$$;

Let’s proceed with deploying it within the PostgreSQL database.

In PostgreSQL, when there’s a requirement to return multiple values, we typically transform them into record types with attributes corresponding t

[...]
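The excerpt is cut off, but the fix follows from that last sentence: fold both values into OUT parameters so the PostgreSQL function returns a record (a sketch, not the post's exact code; ret_val is an invented name):

```sql
create or replace function func_test1
(var1 in integer, var2 out integer, ret_val out integer)
language plpgsql
as
$$
begin
  var2 := var1 + 1;
  ret_val := var2 * 2;
end;
$$;

-- Both values come back as columns of the result record:
select * from func_test1(1);  -- var2 = 2, ret_val = 4
```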

PgTraining Free Online Event: Material Available
Posted by Luca Ferrari on 2024-04-23 at 00:00

The material and the videos are now online!


Last Friday, April 19th, we held the fourth edition of our webinar dedicated entirely to PostgreSQL, provided by PgTraining.


As in the previous editions, we had three talks and an open discussion at the end. The talks (all in italian) were:

  • Introduzione al linguaggio PL/Java (“An introduction to the PL/Java language”), from yours truly;
  • PgVector - in R768 nessuno può sentirti urlare (“PgVector - in R768 nobody can hear you screaming”), by Chris Mair;
  • Repliche logiche e migrazione di versione a caldo da PostgreSQL 12 a PostgreSQL 16 (“Logical replication and hot upgrade from PostgreSQL 12 to PostgreSQL 16”), by Enrico Pirozzi


The material is available on our Gitlab repository and such repository contains also links and material from the previous editions!

Some material is still being uploaded, so if it is not already there, it will appear soon.

About Talk Selection for POSETTE: An Event for Postgres 2024
Posted by Claire Giordano in CitusData on 2024-04-22 at 17:42

As promised in the CFP for POSETTE: An Event for Postgres 2024, all of the talk selection decisions were emailed out on April 17th. Our talk selection work has now concluded, with the possible exception of accepting proposals from the Reserve list.

So what’s next? First I want to thank all of you Postgres people who submitted such amazing talk proposals into the CFP for POSETTE, now in its 3rd year. I was so impressed by the submissions and wish we could have accepted more of them.

And I also want to thank Alicja Kucharczyk, Daniel Gustafsson, and Melanie Plageman from POSETTE’s Talk Selection Team for contributing their time and expertise to collaborate with me to select the talks for this year’s virtual POSETTE event. It’s not easy to carefully read through and review 184 talk proposals—in just 8 days—to come up with the program for an event like #PosetteConf.

That’s right, 184 talk proposals—from 120 unique speakers. (The CFP had a maximum of 4 submissions per speaker.) With just 38 talks to accept this year, that means POSETTE 2024 has a ~20% talk acceptance rate. Bottom line, we had some difficult decisions to make.

So many great talk proposals we had to lengthen the POSETTE schedule to make space

The original POSETTE plan for 2024 was to have 4 livestreams with 9 talks each. The math looked like this:

  • Each livestream would have:
    • 1 invited keynote—not selected through the CFP talk selection process, but rather an invited keynote speaker
    • 8 unique talks selected via the CFP process
  • Hence, 36 talks total:
    • 32 talks selected via the CFP process + 4 unique keynotes

However, the best laid plans of mice and men and all that, we had to throw that math out the window. There were too many good talk proposals.

Luckily the talk production team led by Teresa Giacomini was able to rejigger their recording schedules to make room for 6 more talks.

So the final POSETTE 2024 schedule will have:

  • 42 talks total: 38
[...]

🎙️ Hacking Postgres s02e03
Posted by David Wheeler in Tembo on 2024-04-22 at 16:28
Screenshot from Hacking Postgres, with a classic TV-shaped view on the left labeled “Ry Walker, CEO Tembo” featuring a middle-aged white man in a black hoodie and large black headphones; and another TV-shaped view on the right labeled “David Wheeler, Principal Architect, Tembo” featuring a middle-aged white man with a grey beard, eyeglasses, blue collared shirt, and large beige headphones.

Last week I appeared on s02e03 of the Hacking Postgres podcast.

The experience I had after my independent Consulting gig for 10 years working in companies was, like, bringing up other people and being supportive of other people and hearing from a diversity of voices and perspectives makes everything better.

That’s part of why I want to get so much input and feedback on the stuff that I’m hoping to do with PGXN v2 — or whatever we ultimately call it. But people matter, more than the technology, more than any of the rest of it.

I quite enjoyed this wide-ranging discussion. We covered my history with the Postgres community, a bunch of the projects I’ve worked on over the years, plans and hopes for the PGXN v2 project, perspectives on people and technology, and exciting new and anticipated features of Postgres. Find it wherever fine podcasts are streamed, including:

PGDay UK 2024 - It's back on September 11th in London, England!!
Posted by Dave Page in PGDay UK on 2024-04-22 at 12:47

PGDay UK 2024 will take place in London on September 11th at the Cavendish Conference Centre, London, UK.

Our Call for Papers and Call for Sponsors are now open.

We are accepting proposals for talks in English. Each session will last 50 minutes, and may be on any topic related to PostgreSQL. The submission deadline is 9th June 2024 23:59:59 BST. Selected speakers will be notified on or before July 1st, 2024. Please submit your proposals here:

https://2024.pgday.uk/call-for-papers/

The proposals will be considered by a committee who will produce a schedule to be published nearer the conference date. The members of the committee are listed on the website linked above. All selected speakers will get free entry to the conference.

Our Call for Sponsors is also open. Take your chance to present your services or products to the PostgreSQL community - or see it as a give-back opportunity. The benefactor sponsorship level also includes a free entrance ticket. Please head to:

https://2024.pgday.uk/become-sponsor/

for more details.

As usual, if you have any questions, don't hesitate to contact us at contact@pgday.uk.

We look forward to seeing you in London on September the 11th!

PGDay UK 2024 is a PostgreSQL Europe event run according to the PostgreSQL Community Conference Recognition programme. Local logistics and other services are provided by Slonik Enterprises Ltd. on a not-for-profit basis.

Recursive CTEs: Transforming and Analyzing Data in PostgreSQL, Part 3
Posted by Ryan Booz in Redgate on 2024-04-22 at 01:12

The first two articles in this series demonstrated how PostgreSQL is a capable tool for ELT – taking raw input and transforming it into usable data for querying and analyzing. We used sample data from the Advent of Code 2023 to demonstrate some of the ELT techniques in PostgreSQL.

In the first article, we discussed functions and features of SQL in PostgreSQL that can be used together to transform and analyze data. In the second article, we introduced Common Table Expressions (CTE) as a method to build a transformation query one step at a time to improve readability and debugging.

In this final article, I’ll tackle one last feature of SQL that allows us to process data in a loop, referencing previous iterations to perform complex calculations: recursive CTEs.

SQL is Set-based

SQL is primarily a set-based, declarative language. Using standard ANSI SQL and platform-specific functions, a SQL developer declares the desired outcome of a query, not the process by which the database should retrieve and process the data. The query planner typically uses statistics about the distribution of data to determine the best plan to get the desired result and return the full set of rows. While CTEs and LATERAL joins make it feel like we can use the output of one query to affect another, those are always a one-shot opportunity. As a set-based language, plain SQL has no way to perform algorithmic calculations, that is, to use the output of one query as the input and control of another in a loop.

Stated another way, early versions of the SQL standard did not have procedural capabilities. To provide them, most database platforms use their own procedural superset of SQL: T-SQL in SQL Server and PL/pgSQL in PostgreSQL, for example.

That changed with the SQL:1999 standard, which introduced recursive common table expressions. With this feature, implemented by all major databases, SQL became a Turing-complete language that can solve complex calculations in a single query.
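To sketch that loop-like behavior concretely, here is a minimal recursive query. For portability it is executed below through Python’s built-in sqlite3 module (SQLite also implements WITH RECURSIVE); the query text itself, with its illustrative countdown CTE, runs unchanged in PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE countdown(n) AS (
        SELECT 5                                  -- anchor member: the starting row
        UNION ALL
        SELECT n - 1 FROM countdown WHERE n > 1   -- recursive member: feeds on prior rows
    )
    SELECT n FROM countdown
""").fetchall()

print([n for (n,) in rows])  # [5, 4, 3, 2, 1]
```

The recursive member keeps consuming the rows produced by the previous iteration until it returns no rows, which is exactly the output-as-input loop that plain set-based SQL lacks.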

Recursive Common Table Expressions (a.k.a. Hierarchical Queries)

[...]

Using PL/Java: need for clarifications
Posted by Luca Ferrari on 2024-04-22 at 00:00

Sometimes it happens: I write something in a rush and present it in a less-than-optimal way. And then I get advice!


In January, I wrote an article about installing PL/Java on Rocky Linux and about some of the difficulties I had in achieving a fully operational installation, even though I did not dig deeply enough into the problems I encountered.


Chapman Flack, currently the most active developer on the project, took the time to write me a very detailed email with many suggestions for improvement, correcting some of the misconceptions I presented in that article.


I’m really glad to have received all those insights, and in order to spread the word, I’m writing another article here that, hopefully, fixes my mistakes. I’m not following the same order in which Chapman presented them, since in my opinion some issues are more important than others, so I present them from the most important to the least.

Editing the java.policy file

In my previous article, I advised readers to edit java.policy in case of problems with Java permissions when executing PL/Java code. Despite the fact that I clearly stated that relaxing the permissions to all permissions was not a good idea, Chapman emphasized two main problems with my example: 1) I was editing the main policy file, thereby changing the policy rules for all Java code, not only for PL/Java; 2) adding java.security.AllPermission made no distinction between trusted and untrusted languages.

Chapman pointed out that PL/Java uses a customized policy file, which can be found in the PostgreSQL configuration directory, i.e., in $(pg_config --sysconfdir). This customizable configuration has been available since PL/Java version 1.6 and is documented in the section “Permissions available in sandboxed/unsandboxed PL/Java”. This file defines two main principals:



grant principal org.postgresql.pljava.PLPrincipal$Sandboxed * {
};


grant princ
[...]
