Latest Blog Posts

Postgres 17 Presentation
Posted by Bruce Momjian in EDB on 2024-10-17 at 20:00

I recently created a presentation about what I think are the most important areas of improvement in Postgres 17:

  1. Incremental backup
  2. Improved data manipulation
  3. Improved optimizer handling
  4. Improved logical replicas

Peter Eisentraut and I also did a webinar about this, and a recording is now available.
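
As one concrete taste of the data manipulation area (my illustration, not necessarily an example from the slides): Postgres 17 lets MERGE report what it did via a RETURNING clause. A minimal sketch, assuming hypothetical inventory and stock_updates tables:

MERGE INTO inventory i
USING stock_updates u ON i.item_id = u.item_id
WHEN MATCHED THEN
    UPDATE SET quantity = i.quantity + u.quantity
WHEN NOT MATCHED THEN
    INSERT (item_id, quantity) VALUES (u.item_id, u.quantity)
RETURNING merge_action(), i.*;  -- merge_action() reports INSERT or UPDATE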

pg_parquet: An Extension to Connect Postgres and Parquet
Posted by Craig Kerstiens in Crunchy Data on 2024-10-17 at 14:30

Today, we’re excited to release pg_parquet - an open source Postgres extension for working with Parquet files. The extension reads and writes Parquet files on local disk or in S3 natively from Postgres. With pg_parquet you're able to:

  • Export tables or queries from Postgres to Parquet files
  • Ingest data from Parquet files to Postgres
  • Inspect the schema and metadata of existing Parquet files

Code is available at: https://github.com/CrunchyData/pg_parquet/.

Read on for more background on why we built pg_parquet or jump below to get a walkthrough of working with it.

Why pg_parquet?

Parquet is a great columnar file format that provides efficient compression of data. Working with data in Parquet makes sense when you're sharing data between systems. You might be archiving older data, or you might need a format suited to analytics rather than transactional workloads. While there are plenty of tools to work with Parquet, Postgres users have been left to figure things out on their own. Now, thanks to pg_parquet, Postgres and Parquet work together easily and natively. Better yet, you can work with Parquet without needing yet another data pipeline to maintain.

Wait, what is Parquet? Apache Parquet is an open-source, standard, column-oriented file format that grew out of the Hadoop era of big data. Within a single file, Parquet lays out data in a way that is optimized for SQL queries. In the world of data lakes, Parquet is ubiquitous.

Using pg_parquet

By extending the Postgres COPY command, pg_parquet lets you efficiently copy data to and from Parquet, on your local server or in S3.

-- Copy a query result into a Parquet file on the postgres server
COPY (SELECT * FROM table) TO '/tmp/data.parquet' WITH (format 'parquet');

-- Copy a query result into Parquet in S3
COPY (SELECT * FROM table) TO 's3://mybucket/data.parquet' WITH (format 'parquet');

-- Load data from Parquet in S3
COPY table FROM 's3://mybucket/data.parquet' WITH (format 'parquet');
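
The third bullet above, inspecting Parquet schema and metadata, is handled by helper functions rather than COPY. A sketch (assuming the parquet.schema and parquet.metadata helpers from the extension's README; exact names may vary by version):

-- Inspect the schema of an existing Parquet file
SELECT * FROM parquet.schema('s3://mybucket/data.parquet');

-- Inspect row group and column metadata
SELECT * FROM parquet.metadata('s3://mybucket/data.parquet');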

Let's take an example products table, but not just a basic

[...]

Implementing Bi-Directional Replication in PostgreSQL
Posted by semab tariq in Stormatics on 2024-10-17 at 07:18

In today's fast-paced digital world, ensuring that your data is always up to date and accessible is crucial. For businesses using PostgreSQL, replication is a key feature that helps achieve this. While many are familiar with streaming replication, bi-directional replication offers unique advantages that can enhance data availability and reliability. In this blog post, we'll explore what bi-directional replication is, how it differs from streaming replication, and walk through a practical example of setting up bi-directional replication in PostgreSQL.
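
As a rough sketch of what such a setup involves (my illustration, not the post's walkthrough; it assumes two nodes with the same schema, wal_level = logical, and PostgreSQL 16 or later for origin = none; host names are hypothetical):

-- On node A: publish local changes
CREATE PUBLICATION pub_a FOR ALL TABLES;

-- On node B: publish local changes
CREATE PUBLICATION pub_b FOR ALL TABLES;

-- On node A: subscribe to node B; origin = none requests only changes
-- that originated on the publisher itself, which prevents loops
CREATE SUBSCRIPTION sub_from_b
    CONNECTION 'host=node-b dbname=app'
    PUBLICATION pub_b
    WITH (copy_data = false, origin = none);

-- On node B: the mirror image
CREATE SUBSCRIPTION sub_from_a
    CONNECTION 'host=node-a dbname=app'
    PUBLICATION pub_a
    WITH (copy_data = false, origin = none);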

The post Implementing Bi-Directional Replication in PostgreSQL appeared first on Stormatics.

pgenv 1.3.8 is out!
Posted by Luca Ferrari on 2024-10-17 at 00:00

A new release of pgenv that simplifies the management of PostgreSQL 17.


Yesterday, David Wheeler released version 1.3.8 of pgenv, which solves a few problems in dealing with the latest PostgreSQL release, version 17.

The build workflow of PostgreSQL 17 has slightly changed, so that new dependencies are required to produce the documentation. Thanks to work by Brian Salehi, the pgenv build command now performs a make world-bin (world-bin is essentially the target that builds and installs PostgreSQL without the documentation). The documentation is downloaded separately, since the pre-built documentation has been removed from the source tree and is now available as a separate tarball.

Moreover, this release includes another small contribution from Brian that improves the descriptive messages about dependencies.

Enjoy!

Rails World 2024 Conference Recap
Posted by Andrew Atkinson on 2024-10-17 at 00:00

This is Part 1 of my recap of Rails World 2024, a frenetic two-day conference in Toronto, Canada, in September 2024, with 1000+ attendees.

In this post, I’ll describe some sessions, but most are saved for Part 2, once I’ve watched the sessions I missed, now that the full Rails World 2024 Playlist is on YouTube.

As a book author and consultant, the focus for Rails World for me was on meeting people, raising awareness about my book, and generally chatting about how people are using Postgres and Rails. As a long-time Ruby community member, it was great to catch up with a lot of industry friends.

As a first time Rails World visitor, I was really impressed with the energy and vibes.

Let’s get into it.

Arrival: Wednesday

🇨🇦 I landed in Toronto after a short two-hour flight from Minneapolis. I was feeling excited to promote my book, meet attendees, and give away 8 copies over the next few days. I was feeling grateful!

Rails World Conference 2024 Landed in Toronto, let’s go!

To celebrate my book launch and successfully striking out on my own as an independent Postgres and Rails consultant, I hosted a happy hour gathering of Postgres fans. We had a good turnout, great conversations, and new and strengthened connections. We didn’t talk about Postgres much, but the event was a success! 🙌

Rails World Conference 2024 Party time, Postgres fans Happy Hour at The Queen & Beaver Public House

This was my first time hosting an event like this. I was able to sponsor a round of drinks and appetizers thanks to the success in book sales and my consulting business.

It felt great to bring folks together and I appreciated everyone coming out!

Afterwards, most of the group walked over to the Shopify pre-registration event to get badges, hang out, and grab dinner.

Rails World Conference 2024 Pre-registration party by Shopify, with John and Jesper

Conference Day 1: Thursday

In the kick-off, I appreciated the overview of the Rails Foundation activities, and was excited to see my GitHub handle up on screen briefly along with some friends! In the last year, I made s

[...]

Handling BLOBs In PostgreSQL
Posted by Stefanie Janine on 2024-10-15 at 22:00

BLOBs In PostgreSQL

Implementation

PostgreSQL does not have a BLOB data type as specified in the SQL standard. The nearest implementation is the data type BYTEA. Since PostgreSQL 9.0, BYTEA data is represented as hexadecimal by default.

Limitations

BYTEA has a limit of 1 GB per value.

Binary data cannot be indexed or searched by content. Values can only be inserted, updated (fully replaced), or deleted.
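
A minimal sketch of BYTEA in practice (the table is made up for illustration):

-- A table holding binary payloads in a BYTEA column
CREATE TABLE files (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name text NOT NULL,
    data bytea NOT NULL
);

-- Insert binary data as a hexadecimal literal
INSERT INTO files (name, data) VALUES ('magic.bin', '\xdeadbeef');

-- Values are read and replaced as a whole
SELECT name, octet_length(data) FROM files;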

What Can Be Stored In 1 GB

In 1 GB you can store roughly 200 JPEG images of 5 MB each.
An English Protestant Bible contains roughly 809,000 letters, which means about 1,236 Bible texts can be stored in 1 GB.

That should give you an impression of how much data 1 GB is.

More Than 1 GB BLOBs

Binary data larger than 1 GB can be stored as large objects, which PostgreSQL references by OID. Large objects are stored inside the database in chunks and can be created, updated, or deleted through dedicated functions, including in a streaming fashion. These functions are documented, and their usage is supported in ODBC, .NET, JDBC, and libpq.

The large objects are referenced from a table with an OID.
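
A minimal sketch of the server-side large object functions (the table is made up for illustration):

-- A table referencing large objects by OID
CREATE TABLE documents (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    blob oid NOT NULL
);

-- lo_from_bytea(0, ...) creates a large object and returns its OID
INSERT INTO documents (blob) VALUES (lo_from_bytea(0, '\xdeadbeef'));

-- Read the contents back as BYTEA
SELECT lo_get(blob) FROM documents WHERE id = 1;

-- Large objects are not deleted with the row; unlink them explicitly
SELECT lo_unlink(blob) FROM documents WHERE id = 1;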

Limitations

But this method has limits, too: a single large object can be at most 4 TB, and the maximum total size of all large objects is 32 TB.

As OIDs are 32-bit numbers, there is also a maximum number of large objects that can be handled: 2^32.

Documentation

In addition to the documentation about Binary Data Types, there is a PostgreSQL Wiki page that lists some pros and cons of BLOBs in databases.

Why I Believe You Should Not Use BLOBs In A Database

Even though the PostgreSQL Wiki (https://wiki.postgresql.org/wiki/BinaryFilesInDB) lists some pros, I believe the reasons speaking against database BLOBs are the more important ones.

Getting files through a database involves another layer handling the data: the database itself.
That obviously comes at a cost:

  • It requires more memory than normal data read/writes
  • Backups will take significantly longer
  • Longer transactions for accessing or storing binary data
  • May become
[...]

Is pg_dump a Backup Tool?
Posted by Robert Haas in EDB on 2024-10-15 at 20:03
Recently, I've been hearing a lot of experienced PostgreSQL users reiterate this line: "pg_dump is not a backup tool." In fact, the documentation has recently been updated to avoid saying that it is a backup tool, to widespread relief. Experienced PostgreSQL users and developers have been publicly called out for having the temerity to assert that pg_dump is, in fact, a backup tool. I find this narrative deeply frustrating, for two reasons. Read more »

Dealing with trigger recursion in PostgreSQL
Posted by Laurenz Albe in Cybertec on 2024-10-15 at 13:04

A father tells his son to hit back the next time, but the son is afraid of trigger recursion.
© Laurenz Albe 2024

Many a beginner falls into the trap of trigger recursion at some point. Usually, the solution is to avoid recursion altogether. But for some use cases, you may have to handle trigger recursion. This article tells you what you need to know about the topic. If you have ever been troubled by the error message “stack depth limit exceeded”, here is the solution.

The beginner's mistake leading to trigger recursion

Triggers are the only good way to change data automatically. Constraints are the “policemen” that make sure rules are not violated, but triggers are the workers that make the data stay in line. A beginner who has understood that may (quite correctly) wish to use a trigger to set the updated_at column in the following table:

CREATE TABLE data (
   id bigint
      GENERATED ALWAYS AS IDENTITY
      PRIMARY KEY,
   value text NOT NULL,
   updated_at timestamp with time zone
      DEFAULT current_timestamp
      NOT NULL
);

The column default will set updated_at when the row is inserted, but won't change the value when you update the row. For that, our beginner writes a trigger:

CREATE FUNCTION set_updated_at() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   UPDATE data
   SET updated_at = current_timestamp
   WHERE data.id = NEW.id;

   RETURN NEW;
END;$$;

CREATE TRIGGER set_updated_at
   AFTER UPDATE ON data FOR EACH ROW
   EXECUTE FUNCTION set_updated_at();

But that won't work as intended:

INSERT INTO data (value) VALUES ('initial') RETURNING id;

 id 
════
  1
(1 row)

UPDATE data SET value = 'changed' WHERE id = 1;
ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.
CONTEXT:  SQL statement "UPDATE data
   SET updated_at = current_timestamp
   WHERE data.id = NEW.id"
PL/pgSQL function set_updated_at() line 2 at SQL statement
SQL statement "UPDATE data
   SET updated_at = current_timestamp
   WHERE data.id = NEW.id"
PL/pgSQL function set_updated_
[...]
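
The excerpt ends here, but for context: one common way out of this particular recursion (not necessarily the article's full solution) is a BEFORE trigger that modifies NEW directly instead of issuing a second UPDATE:

CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger
   LANGUAGE plpgsql AS
$$BEGIN
   -- modify the row before it is written; no second UPDATE, no recursion
   NEW.updated_at := current_timestamp;
   RETURN NEW;
END;$$;

CREATE OR REPLACE TRIGGER set_updated_at
   BEFORE UPDATE ON data FOR EACH ROW
   EXECUTE FUNCTION set_updated_at();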

PostgreSQL Anonymizer 2.0 - Generating Fake Data
Posted by damien clochard in Dalibo on 2024-10-15 at 10:17

After several months of development, version 2.0 of PostgreSQL Anonymizer has entered the beta phase, and this is an opportunity for us to launch a series of articles to present its new capabilities in preview!

For this first technical overview, let’s see how to generate fake data (also known as “synthetic data”).

Photo Credit Markus Spiske

Logo PostgreSQL Anonymizer

Why is it important?

PostgreSQL Anonymizer 2.0 offers a wide range of functions to generate fake but realistic data. These functions are useful for writing masking rules and replacing sensitive data with data that “looks real.”

But that’s not the only benefit of these functions. In fact, they can also be used from the very first steps of a new project: when designing a data model, it is essential to “populate” the tables with data so as not to start development “empty-handed.”

Consider a new application that needs to rely on a classic customer table:

CREATE TABLE public.customer (
    id          INT PRIMARY KEY,
    firstname   TEXT,
    lastname    TEXT,
    email       TEXT
);

We want to insert 2000 people into this table, but how can we do it since the application doesn’t exist yet?

Why is it complicated?

Obviously, inserting John Doe 2000 times into the table is not really an option! The challenge is to produce realistic and context-appropriate data.

For example, if the application is intended for adult users, we want the birth dates to be between 1950 and 2006.

This is where PostgreSQL Anonymizer comes into play with its wide range of randomization functions and fake data generators.

INSERT INTO customer
SELECT
        i*100+pg_catalog.random(0,99), -- avoid collisions !
        anon.dummy_first_name(),
        anon.dummy_last_name(),
        anon.dummy_free_email()
FROM generate_series(1,2000) i;

Note: The random(x,y) function is one of the new features of PostgreSQL 17! For earlier versions, the anon.random_int_between(x,y) function is an equivalent alternative.
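
For the birth date requirement mentioned earlier, the extension's bounded randomization functions do the trick (a sketch assuming anon.random_date_between, per the extension's documentation):

SELECT anon.random_date_between(
    '1950-01-01'::timestamptz,
    '2006-12-31'::timestamptz
);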

In total, PostgreSQL Anonymizer provides more than 70 fake data

[...]

PGDay Lowlands 2024 Reflections
Posted by Pavlo Golub in Cybertec on 2024-10-15 at 05:13

Introduction

This year, PGDay Lowlands 2024 was paired with PGDay UK 2024, which gave me the chance to try something new—taking the train directly from London to Amsterdam. Naturally, I was curious about passing through the famous tunnel under La Manche (some folks call it the English Channel, but I am still unsure why 😜). Spoiler alert: there's nothing cool to see, just darkness. 🙂

If you want to read about PGDay UK 2024, I recommend the blog post of my friend Stefan Fercot. Thank God he did a great job, so I can focus only on one event instead of two! 😅

A Special Connection with Amsterdam

PGDay Lowlands 2024 marks the first year of this event, but Amsterdam holds a special place in my professional journey. My first ever PostgreSQL community conference was in this beautiful city, way back in 2011, during pgconf.eu. So, Amsterdam isn't just another city for me; it's a key part of my community and professional life. I always love coming back here.

The conference schedule was pretty packed, so I only had a little time to visit my favorite places in Amsterdam this time. But the experience of the conference itself made up for that.

My Talk: "Customizing the Wordle Game Experience with PostgreSQL"

My presentation was titled "Customizing the Wordle Game Experience with PostgreSQL." Despite the fun name, the talk covered some serious and interesting topics, like full-text search dictionaries and customizing psql. The best part? We ended the talk by playing Wordle with the audience, where they had to guess PostgreSQL 5-letter keywords. I'm happy to report that Gianni Ciolli is officially the best keyword professional among us! 😄

A Second Chance

I initially presented this talk once at PgConf in New York, but unfortunately, my slot was in parallel with Bruce Momjian's session. Naturally, many attendees chose to go to his talk (honestly, I would have done the same!). That left me with little feedback and some doubts about the talk's appeal. So, I'm incredibly

[...]

Tuning the glibc memory allocator (for Postgres)
Posted by Tomas Vondra on 2024-10-14 at 10:00

If you’ve done any Postgres development in C, you’re probably aware of the concept of memory contexts. The primary purpose of memory contexts is to absolve the developers of having to track every single piece of memory they allocated. But it’s about performance too, because memory contexts cache the memory to save on malloc/free calls. But malloc gets the memory from another allocator in libc, and each libc has its own thing. The glibc allocator has some concurrency bottlenecks (which I learned the hard way), but it’s possible to tune that.

pgsql_tweaks 0.10.7 Released
Posted by Stefanie Janine on 2024-10-13 at 22:00

pgsql_tweaks is a bundle of functions and views for PostgreSQL

The source code is available on GitLab; a mirror is hosted on GitHub.
One could install the whole package, or just copy what is needed from the source code.

The extension is also available on PGXN.

General changes

It is only a minor release; no function or view has been changed.

What has changed is the creation of the release files, especially the test result output. The file test/sql/out/pgsql_tweaks_test--0.10.7.out is now created with captions and messages set to en_EN via the LC_MESSAGES parameter.
This makes the test results comparable.
The README also explains how to compare local test results with the released version's test results.

In addition, the description of how to build the extension from source has been extended.

Contributions of w/c 2024-10-07 (week 41)
Posted by Jimmy Angelakos in postgres-contrib.org on 2024-10-12 at 19:45

SQL/JSON is here! (kinda “Waiting for Pg 17”)
Posted by Hubert 'depesz' Lubaczewski on 2024-10-11 at 21:37
Amazing. Awesome. Well, but what is it? We have been able to store JSON data in Pg since PostgreSQL 9.2 – so it's been there for over 12 years now. How is the new shiny thing different? What does it allow you to do? Let's see if I can shed some light on it… For starters: SQL/JSON is … Continue reading "SQL/JSON is here! (kinda “Waiting for Pg 17”)"

CNPG Recipe 14 - Useful Command-Line Tools
Posted by Gabriele Bartolini in EDB on 2024-10-11 at 16:53

In this CNPG recipe, we explore three essential command-line tools that simplify working with CloudNativePG in Kubernetes: view-secret, view-cert, and stern. These tools enhance tasks such as inspecting secrets, verifying certificates, and tailing logs across multiple pods, streamlining your PostgreSQL management experience in a cloud-native environment. Whether you’re troubleshooting or optimising workflows, these utilities will help you boost productivity and gain better control over your Kubernetes-based PostgreSQL deployments.

Convert JSON into Columns and Rows with JSON_TABLE
Posted by Paul Ramsey in Crunchy Data on 2024-10-11 at 14:30

JSON_TABLE, new in Postgres 17

If you missed some of the headlines and release notes, Postgres 17 added another huge feature to its growing repository of strong JSON support: JSON_TABLE. JSON_TABLE lets you query JSON and present the results as if they were native relational data. So you can easily take JSON data feeds and work with them like you would any other Postgres data in your database.
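
Here is a minimal sketch of the shape of the feature, with made-up inline data rather than the earthquake feed used below:

SELECT *
FROM JSON_TABLE(
    '{"features": [
        {"mag": 4.0, "place": "north of the city"},
        {"mag": 4.5, "place": "offshore"}
     ]}'::jsonb,
    '$.features[*]'
    COLUMNS (
        mag   numeric PATH '$.mag',
        place text    PATH '$.place'
    )
) AS quakes;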

Shaking the Earth with JSON_TABLE

A few days ago, I was awakened in the middle of the night when my house started to shake. Living in the Cascadia subduction zone, when things start to shake I wake up really fast, because you never know if this one is going to be the Big One.

Fortunately, this one was only a little one: a 4.0 magnitude quake several miles to the north of the city, captured and memorialized on its own USGS earthquake page almost as soon as it had finished shaking.

The USGS keeps a near-real-time collection of information about the latest quakes online, served up in a variety of formats including GeoJSON.


The weekly feed of magnitude 4.5 quakes has a nice amount of data in it.

If we could import this feed into the database, we could use it for other queries, like finding potential customers to sell tents and emergency supplies to! (When the big one hits me, sell me some tents and emergency supplies.)

This readily available GeoJSON earthquake file seems like the perfect chance to try out the new JSON_TABLE. And maybe give me something to do in the middle of night.

Retrieving a JSON file with HTTP

The first step is to retrieve the feed. The simplest way is to use the http extension, which provides a simple functional API for making HTTP requests.

CREATE EXTENSION http;

The http_get(url) function returns an http_response, with a status code, content_type, headers and content. We could write a wrapper to check the status code, but for this example we will just assume the feed works and look at the content.

SELECT jsonb_pretty(content::jsonb)
  FRO
[...]

PostgreSQL Berlin October 2024 Meetup
Posted by Andreas Scherbaum on 2024-10-11 at 02:36

On October 9th, 2024, we had the PostgreSQL October Meetup in Berlin. Amazon hosted and sponsored the Meetup in their office in the Brüderstraße, in Berlin Mitte.

More Release Note Details
Posted by Bruce Momjian in EDB on 2024-10-10 at 20:30

Over the years, we occasionally get requests for more detail on release note items, particularly for the major release notes. Historically, our only response was to tell people to view the SGML/XML file used to generate the web and PDF versions of the release notes. In the file, above each release note item, is a comment block which contains the commit headers responsible for the item, e.g.:

<!--
Author: David Rowley <drowley(at)postgresql(dot)org>
2024-01-23 [b262ad440] Add better handling of redundant IS [NOT] NULL quals
Author: David Rowley <drowley(at)postgresql(dot)org>
2024-04-12 [3af704098] Fix IS [NOT] NULL qual optimization for inheritance tabl
-->

Continue Reading »

Transform PostgreSQL into a Columnar Database Using Citus
Posted by semab tariq in Stormatics on 2024-10-10 at 09:23

Columnar databases are transforming the way we handle large datasets by storing data in columns rather than rows. This approach enhances performance, especially for analytical queries, by allowing faster data retrieval and efficient storage. As businesses generate more data than ever, understanding the benefits of columnar databases becomes crucial. In this blog, we'll explore how these databases work, their advantages over traditional row-based systems, and why they are becoming a popular choice for data-driven organizations.
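
As a taste of what this looks like in practice (a minimal sketch assuming the Citus extension and its columnar access method are installed; the table is made up):

CREATE EXTENSION citus;

-- Store the table in columnar format instead of the default heap
CREATE TABLE events (
    event_time timestamptz NOT NULL,
    device_id  bigint      NOT NULL,
    payload    jsonb
) USING columnar;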

The post Transform PostgreSQL into a Columnar Database Using Citus appeared first on Stormatics.

Comparing Columnar to Heap Performance in Postgres with pg_timeseries
Posted by Shaun Thomas in Tembo on 2024-10-10 at 09:00
The pg_timeseries extension uses Hydra Columnar storage for historical data. Check this out if you want to know how this kind of storage compares to standard Postgres Heap storage during database writes.

PostgreSQL 17 - A Major Step Forward in Performance, Logical Replication and More
Posted by Ahsan Hadi in pgEdge on 2024-10-10 at 07:01

After a successful 3rd beta in August 2024, the PostgreSQL development group released the GA version on September 26th. Recently, I blogged about some of the key logical replication features that you'll see in PostgreSQL 17 (https://www.pgedge.com/blog/logical-replication-features-in-Postgres 17). In this blog I'll describe a couple of new performance features that you'll find in Postgres 17, as well as another important logical replication feature that I didn't cover in my earlier blog of this series.

PostgreSQL has grown remarkably over the years, and with each major release has become a more robust, reliable, and responsive database for both mission-critical and non-mission-critical enterprise applications. The global and vibrant PostgreSQL community is contributing to PostgreSQL's success, diligently ensuring that all changes are carefully scrutinized and reviewed before they are added to the project source code. It is also very encouraging to see big technology names like Microsoft, Google, Apple, and others investing in Postgres by developing in-house expertise and giving back to the open source community.

Improvements to logical replication are paving the way for adding distributed PostgreSQL support to the core functionality. Distributed PostgreSQL refers to the implementation of PostgreSQL in a distributed architecture, allowing for enhanced scalability, fault tolerance, and improved performance across multiple nodes. A pgEdge fully distributed PostgreSQL cluster already provides essential enterprise features like improved performance with low latency, ultra-high availability, data residency, and fault tolerance. Now, without further ado, let's discuss some PostgreSQL 17 performance features:

Improved Query Performance with Materialized CTEs

Common Table Expressions (CTEs) in PostgreSQL are temporary result sets that can be referenced within a SELECT, INSERT, UPDATE, or DELETE statement. They enhance the readability and organization of complex queries and can be recursive, making them particularly useful for hierarchical [...]
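
For reference, the syntax in question (a minimal sketch with made-up tables; the Postgres 17 improvement concerns how the planner handles such CTEs, not new syntax):

WITH order_totals AS MATERIALIZED (
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id
)
SELECT c.name, t.total
FROM customers c
JOIN order_totals t ON t.customer_id = c.id;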

sparql_fdw Foreign Data Wrapper Tested Against PostgreSQL 17
Posted by Stefanie Janine on 2024-10-09 at 22:00

sparql_fdw Foreign Data Wrapper

The sparql_fdw is a foreign data wrapper, written in Python, that lets you query web databases using the SPARQL protocol from inside PostgreSQL.

Test Against PostgreSQL 17

As multicorn2 is already working with PostgreSQL 17, I tested the sparql_fdw, too.

It worked like a charm, and I have added PostgreSQL 17 to the supported versions in the README file; the merge request is pending.

Enhanced Postgres Release Notes
Posted by Greg Sabino Mullane in Crunchy Data on 2024-10-09 at 21:30

There is something new you may not have seen in the release notes for Postgres 17. No, not a new feature - I mean inside the actual release notes themselves! The Postgres project uses the git program to track commits to the project, and now each item in the release notes has a link to the actual commit (or multiple commits) that enabled it.

You may have missed it if you were scanning the release notes, but at the end of each specific item in the release notes is a small “section” symbol which looks like this: §. Each of these symbols is a link leading to the relevant commit for that item, as you can see on the Postgres 17 release notes page.


Clicking the section symbol will send you to the git commit for each individual patch.


Note that there’s a “Discussion” link in each commit, linking back to the full thread on the hackers mailing list.

Writing the release notes is hard work, and involves a good bit of debate in the community. We have to make sure we list all the changes in a concise yet comprehensible manner, and decide what level of detail to include. Oftentimes, this level is not sufficient for people interested in learning about a feature. That’s where these new commit links are invaluable. They link to the git commit, which not only lets you see the exact code changes that were made, but also shows the actual commit message, which has more detail than what can be provided in the release notes.

Postgres Notes in Lots of Places

Postgres release notes appear in lots of different places, so this addition will surely make its way into other downstream projects. This new link also now appears on “postgres all versions” - a project that I maintain that collates the information from all of the release notes for every version of Postgres (over 500 now!) into a single page. To make the link more visible and easier to use, I converted it to a larger Unicode scroll symbol, then added some tooltip tech to make it show information about the link, like so:

[...]

PGXN Certifications RFC
Posted by David Wheeler in Tembo on 2024-10-09 at 16:26

A couple of weeks ago, I drafted PGXN RFC–5 — Release Certification, which proposes to replace the simple inclusion of SHA-1 hash digests in PGXN release META.json files with a JWS-signed release payload. From the introduction:

This RFC therefore proposes to extend v2 distribution metadata with a single additional property, certs, that contains one or more certifications that attest to the authenticity or other characteristics of a release on PGXN.

The certs value is an object that contains at least one property, pgxn, which itself contains a PGXN-generated RFC 7515 JSON Web Signature in the JWS JSON Serialization format. The pgxn property will allow clients not only to assemble the release URL and verify the downloaded file against checksums, but also validate it against a public key provided by PGXN.

The design allows multiple signatures, certifications, or other attestations, which in the future MAY allow authors or other entities to sign releases with their own keys. The new format appends a structure such as this to the distribution META.json file:

{
  "certs": {
    "pgxn": {
      "payload": "eyJ1c2VyIjoidGhlb3J5IiwiZGF0ZSI6IjIwMjQtMDktMTNUMTc6MzI6NTVaIiwidXJpIjoiZGlzdC9wYWlyLzAuMS43L3BhaXItMC4xLjcuemlwIiwiZGlnZXN0cyI6eyJzaGE1MTIiOiJiMzUzYjVhODJiM2I1NGU5NWY0YTI4NTllN2EyYmQwNjQ4YWJjYjM1YTdjMzYxMmIxMjZjMmM3NTQzOGZjMmY4ZThlZTFmMTllNjFmMzBmYTU0ZDdiYjY0YmNmMjE3ZWQxMjY0NzIyYjQ5N2JjYjYxM2Y4MmQ3ODc1MTUxNWI2NyJ9fQ",
      "signature": "cC4hiUPoj9Eetdgtv3hF80EGrhuB__dzERat0XF9g2VtQgr9PJbu3XOiZj5RZmh7AAuHIm4Bh-rLIARNPvkSjtQBMHlb1L07Qe7K0GarZRmB_eSN9383LcOLn6_dO--xi12jzDwusC-eOkHWEsqtFZESc6BfI7noOPqvhJ1phCnvWh6IeYI2w9QOYEUipUTI8np6LbgGY9Fs98rqVt5AXLIhWkWywlVmtVrBp0igcN_IoypGlUPQGe77Rw"
    }
  }
}

Review and feedback would be very much appreciated, especially on the list of unresolved questions toward the end.

Thanks to David Christensen and Steven Miller for the early review

[...]

⛰️ Postgres Ecosystem Summit EU
Posted by David Wheeler in Tembo on 2024-10-09 at 15:27

Given the success of the Extension Ecosystem Summit at PGConf.dev back in May, my colleague Floor Drees has organized a sequel, the Extension Ecosystem Summit EU on Tuesday, October 22, at the Divani Caravel Hotel in Athens. That’s “Day 0” at the same hotel as PGConf.eu. Tembo, Percona, Xata, and Timescale co-sponsor.

While the May event took the form of an open-space technology (OST)-style unconference aimed at extension developers, the EU event aims to inform an audience of Postgres users about the history and some exemplary use cases for extensions. From the invite:

Join us for a gathering to explore the current state and future of Postgres extension development, packaging, and distribution. Bring your skills and your devices and start contributing to tooling underpinning many large Postgres installations.

  • Jimmy Angelakos - pg_statviz: pg_statviz is a minimalist extension and utility pair for time series analysis and visualization of PostgreSQL internal statistics.
  • Adam Hendel (Tembo) - pgmq: pgmq is a lightweight message queue. Like AWS SQS and RSMQ but on Postgres. Adam is pgmq’s maintainer since 2023, and will present a journey from pure Rust → pgrx → pl/pgsql.
  • Alastair Turner (Percona) - pg_tde: pg_tde offers transparent encryption of table contents at rest, through a Table Access Method extension. Percona has developed pg_tde to deliver the benefits of encryption at rest without requiring intrusive changes to the Postgres core.
  • Gülçin Yıldırım Jelínek (Xata) - pgzx: pgzx is a library for developing PostgreSQL extensions written in Zig.
  • Mats Kindahl (Timescale) - TimescaleDB (C), pgvectorscale (Rust) and pgai (Python): maintaining extensions written in different languages.

I will also deliver the opening remarks, including a brief history of Postgres extensibility. Please join us if you’re in the area or planning to attend PGConf

[...]

RDS Connectivity
Posted by Grant Fritchey on 2024-10-07 at 14:04

I travel quite a lot for work. Most of it is in the US and Europe, but I get around to other places as well. Most of the time, connectivity, while not the greatest, isn’t that bad, or hard. I run a VPN on my phone and laptop. I can set up my RDS connectivity […]

The post RDS Connectivity appeared first on Grant Fritchey.

Yurii Rashkovskii
Posted by Andreas 'ads' Scherbaum on 2024-10-07 at 14:00
PostgreSQL Person of the Week Interview with Yurii Rashkovskii: I was born and raised in Ukraine, and moved to Canada in 2008 without knowing much about it. I wanted to go as far west as I could to explore. That move changed my life a lot, and I now live on beautiful Vancouver Island in British Columbia.

Contributions of w/c 2024-09-30 (week 40)
Posted by Floor Drees in postgres-contrib.org on 2024-10-07 at 08:15

Contributions of w/c 2024-09-23 (week 39)
Posted by Jimmy Angelakos in postgres-contrib.org on 2024-10-06 at 00:54

Automating Highly Available PostgreSQL Clusters
Posted by Douglas Hunley on 2024-10-04 at 14:07
In March of ‘23, I took over as the Lead Architect of my employer’s Ansible-based automation for creating highly-available PostgreSQL clusters.
