Latest Blog Posts

Building Trunk
Posted by David Wheeler in Tembo on 2024-03-18 at 18:06

This week, my fellow Tembonaut Ian Stanton will present the extension ecosystem mini-summit talk, “Building Trunk: A Postgres Extension Registry and CLI”. We felt it important to get some insight from a couple of the recent Postgres extension registries: what problems they set out to solve, how they were built and operate, their success at addressing those problems, and what issues remain, both for the projects and the ecosystem overall. Ian plans to give us the low-down on trunk.

Join us! If you need other information or just want an invitation without using Eventbrite, hit me up at david@ this domain, on Mastodon, or via the #extensions channel on the Postgres Slack.

Florent Jardin
Posted by Andreas 'ads' Scherbaum on 2024-03-18 at 14:00
PostgreSQL Person of the Week Interview with Florent Jardin: I am from Lille, the capital of Flanders in the north of France, and I have been living here since birth.

Look ma, I wrote a new JIT compiler for PostgreSQL
Posted by Pierre Ducroquet on 2024-03-18 at 11:30

Sometimes, I don’t know why I do things. It’s one of these times. A few months ago, Python 3.13 got its JIT engine, built with a new JIT compiler construction methodology (copy-patch, cf. research paper). After reading the paper, I was sold and I just had to try it with PostgreSQL. And what a fun ride it’s been so far. This blog post will not cover everything, and I prefer other communication methods, but I would like to introduce pg-copyjit, the latest and shiniest way to destroy and segfault speed up your PostgreSQL server.

Before going any further, a mandatory warning: all code produced here is experimental. Please. I want to hear reports from you, like “oh, it’s fun”, “oh, I got this performance boost”, “hey, maybe this could be done”, but not “hey, your extension cost me hours of downtime on my business-critical application”. Anyway, its current state is for professional hackers; I hope you know better than to trust experimental code on a production server.
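
For the curious, enabling an out-of-tree JIT provider generally comes down to a couple of GUCs; a minimal sketch (the provider module name here is an assumption, check the pg-copyjit README for the actual one):

    # postgresql.conf — a hedged sketch, not the project's documented setup
    jit = on
    jit_provider = 'copyjit'   # assumed module name; 'llvmjit' is the default
    jit_above_cost = 100000    # standard GUC: only JIT-compile costly plans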

In the beginning, there was no JIT, and then came the LLVM JIT compiler

In a PostgreSQL release a long time ago, in a galaxy far far away, Andres Freund introduced the PostgreSQL world to the magics of JIT compilation, using LLVM. They married and there was much rejoicing. Alas, darkness there was in the bright castle, for LLVM is a very very demanding husband.

LLVM is a great compilation framework. Its optimizer produces very good and efficient code, and Andres went further than what anybody else would have thought and tried in order to squeeze the last microsecond of performance out of his JIT compiler. This is wonderful work and I don’t know how to express my love for the madness this kind of dedication to performance is. But LLVM has a big downside: it’s not built for JIT compilation. At least not in the way PostgreSQL will use it: the LLVM optimizer is very expensive, but not using it may be worse than no compilation at all. And in order to compile only the good stuff, the queries that can enjoy the performance boost, the typical que

[...]

What the hell is transaction wraparound?
Posted by Hubert 'depesz' Lubaczewski on 2024-03-18 at 10:59
Recently someone asked on Slack what transaction wraparound is. The full answer is a bit too much for a Slack reply, but I can try to explain it here. So, every row in PostgreSQL contains two hidden columns: xmin and xmax. You can see them: =$ CREATE TABLE wrapit (a int4); CREATE TABLE   =$ … Continue reading "What the hell is transaction wraparound?"
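
A minimal way to peek at those hidden columns, following the post's own wrapit table:

    CREATE TABLE wrapit (a int4);
    INSERT INTO wrapit (a) VALUES (1);
    -- xmin is the id of the inserting transaction; xmax stays 0 while the
    -- row has not been deleted or locked by another transaction
    SELECT xmin, xmax, a FROM wrapit;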

Back from pgDay Paris 2024.
Posted by Adrien Nayrat on 2024-03-18 at 09:39

I’m back from pgDay Paris. I really enjoyed this edition. I’d already come to the 2019 one, and I must say I wasn’t disappointed. As a reminder, pgDay Paris is an international conference: presentations are in English, which attracts more international speakers and attendees.

I often see this conference as a little PGConf Europe: the content is dense with an international dimension.

You meet familiar faces: attendees, speakers, volunteers, contributors: all the people who keep the Postgres community going.

It’s also an opportunity to put faces on names you’ve come across while reading articles or Postgres mailing lists.

Here’s a quick recap of the talks I attended:

Elephant in a nutshell - Navigating the Postgres community 101

With Carole and Stéphanie, we thought this talk would make a good introduction.

I found it very complete. It talks about Postgres, but also, and above all, about its community. That’s what makes Postgres so powerful.

Slides are available.

Sustainable Database Performance profiling in PostgreSQL

Postgres provides a variety of statistical views to give you information on its activity.

These can be native (pg_stat_statements …) or come from extensions such as pg_stat_kcache and pg_wait_sampling.

However, and this is what this talk emphasizes, they only provide a view at a given moment in time.

To be useful, their contents need to be historized (snapshotted over time). The speaker presents a tool based on what is done in Oracle: pg_profile.

On the other hand, he presents it as the only tool available to process such information. A participant in the audience pointed out that there is another project: PoWA.

The two tools don’t provide the same functionality. pg_profile generates an HTML report; it’s fairly easy to install and requires no external libraries, as it’s written in PL/pgSQL.

PoWA provides graphs, index suggestions, and more, but is heavier to install.

Slides are available.

PostgreSQL without permanent local data storage

Now we’re getting into a very tec

[...]

Distributed queries for pgvector
Posted by Jonathan Katz on 2024-03-18 at 00:00

The past few releases of pgvector have emphasized features that help to vertically scale, particularly around index build parallelism. Scaling vertically is convenient for many reasons, especially because it’s simpler to continue managing data that’s located within a single instance.

Performance of querying vector data tends to be memory-bound, meaning that the more vector data you can keep in memory, the faster your database will return queries. It’s also completely acceptable to not have your entire vector workload contained within memory, as long as you’re meeting your latency requirements.

However, there may be a point where you can’t vertically scale any further, such as not having an instance large enough to keep your entire vector dataset in memory. At that point, it may be possible to combine PostgreSQL features with pgvector to create a multi-node system that runs distributed, performant queries across multiple instances.

To see how this works, we’ll need to explore several features in PostgreSQL that help with segmenting and distributing data, including partitioning and foreign data wrappers. We’ll see how we can use these features to run distributed queries with pgvector, and explore the “can we” / “should we” questions.

Partitioning and pgvector

Partitioning is a general database technique that lets you divide data in a single table over multiple tables, and is used for purposes such as archiving, segmenting by time, and reducing the overall portion of a data set that you need to search over. PostgreSQL supports three types of partitioning: range, list, and hash. You use list and range partitioning when you have a defined partition key (e.g. company_id or start_date BETWEEN '2024-03-01' AND '2024-03-31'), whereas you use hash partitioning when you want to evenly distribute your data across partitions.
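
As a sketch of the hash case with vector data (table and column names are illustrative):

    CREATE EXTENSION IF NOT EXISTS vector;

    -- evenly spread embeddings across four partitions by hashing the key
    CREATE TABLE documents (
        id        bigint NOT NULL,
        embedding vector(768)
    ) PARTITION BY HASH (id);

    CREATE TABLE documents_p0 PARTITION OF documents FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    CREATE TABLE documents_p1 PARTITION OF documents FOR VALUES WITH (MODULUS 4, REMAINDER 1);
    CREATE TABLE documents_p2 PARTITION OF documents FOR VALUES WITH (MODULUS 4, REMAINDER 2);
    CREATE TABLE documents_p3 PARTITION OF documents FOR VALUES WITH (MODULUS 4, REMAINDER 3);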

There are many considerations you must make before adopting a partitioning strategy, including understanding how your application will interact with your partitioned table and your partiti

[...]

The Security Talk Maiden Voyage
Posted by Henrietta Dombrovskaya on 2024-03-16 at 18:14

First of all here are the presentation slides:

The animation is off, so some things are definitely lost, but the essential part is there. Now that I have presented this talk to an external audience, I know exactly what I want to change! I am going to submit this talk (again!) to several conferences, and I am going to work on the “middle part”. The problem I saw with this first-time presentation was that I knew the problem too well, and I should have highlighted the reasons for having more than one security model.

Still, I was happy with the questions and the fact that several people thanked me after the presentation, and I hope that this one will not be the last one! New and improved is coming!

Postgres 17 highlight: Logical replication slots synchronization
Posted by Bertrand Drouvot on 2024-03-16 at 05:26

Introduction

PostgreSQL 17 should (there is always a risk of something being reverted during the beta phase) include this commit: Add a new slot sync worker to synchronize logical slots.

commit 93db6cbda037f1be9544932bd9a785dabf3ff712
Author: Amit Kapila 
Date:   Thu Feb 22 15:25:15 2024 +0530

Add a new slot sync worker to synchronize logical slots.

By enabling slot synchronization, all the failover logical replication
slots on the primary (assuming configurations are appropriate) are
automatically created on the physical standbys and are synced
periodically. The slot sync worker on the standby server pings the primary
server at regular intervals to get the necessary failover logical slots
information and create/update the slots locally. The slots that no longer
require synchronization are automatically dropped by the worker.

The nap time of the worker is tuned according to the activity on the
primary. The slot sync worker waits for some time before the next
synchronization, with the duration varying based on whether any slots were
updated during the last cycle.

A new parameter sync_replication_slots enables or disables this new
process.

On promotion, the slot sync worker is shut down by the startup process to
drop any temporary slots acquired by the slot sync worker and to prevent
the worker from trying to fetch the failover slots.

A functionality to allow logical walsenders to wait for the physical will
be done in a subsequent commit.

It means that logical replication slots synchronization from primary to standby is now part of core PostgreSQL.
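
As a rough sketch, enabling the feature on a physical standby looks like this (parameter names come from the commit message above and the PostgreSQL 17 documentation; the slot name and connection string are illustrative):

    # postgresql.conf on the physical standby
    sync_replication_slots = on
    hot_standby_feedback = on
    primary_slot_name = 'physical_slot'                           # illustrative
    primary_conninfo = 'host=primary user=repl dbname=postgres'   # a dbname is required for slot sync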

Let’s look at an example

On the primary server you can now create a logical replication slot with an extra failover flag:

postgres@primary=# SELECT 'init' FROM pg_create_logical_replication_slot('logical_slot', 'test_decoding', false, false, true);
 ?column?
----------
 init

The failover flag is the third boolean and it has been set to true. This information appears in the pg_replication_slots view:

[...]

Enterprise-grade Replication from Postgres to Azure Event Hubs
Posted by Sai Srirampur on 2024-03-15 at 20:54
At PeerDB, we are building a fast and a cost-effective way to replicate data from Postgres to Data Warehouses and Queues. Today we are releasing our Azure Event Hubs connector. With this, you get a fast, simple, and reliable way for Change Data Captu...

Mini Summit One
Posted by David Wheeler in Tembo on 2024-03-15 at 20:05

Great turnout and discussion for the first in a series of community talks and discussions on the postgres extension ecosystem leading up to the Extension Ecosystem Summit at pgconf.dev on May 28. Thank you!

The talk, “State of the Extension Ecosystem”, was followed by 15 minutes or so of super interesting discussion. Here are the relevant links:

For posterity, I listened through my droning and tried to capture the general outline, posted here along with interspersed chat history and some relevant links. Apologies in advance for any inaccuracies or missed nuance; I’m happy to update these notes with your corrections.

And now, to the notes!

Introduction

  • Introduced myself, first Mini Summit, six leading up to the in-person summit on May 28 at PGConf.dev in Vancouver, Canada.

  • Thought I would get things started, provide a bit of history of extensions and context for what’s next.

Presentation

  • Postgres has a long history of extensibility, originally using pure SQL or shared preload libraries. Used by a few early adopters, perhaps a couple dozen, including …

  • Explicit extension support added in Postgres 9.1 by Dimitri Fontaine, with PGXS, CREATE EXTENSION, and pg_dump & pg_restore support.

  • Example pair--1.0.0.sql:

    -- complain if script is sourced in psql and not CREATE EXTENSION
    \echo Use "CREATE EXTENSION pair" to load this file. \quit
    
    CREATE TYPE pair AS ( k text, v text );
    
    CREATE FUNCTION pair(text, text)
    RETURNS pair LANGUAGE SQL AS 'SELECT ROW($1, $2)::pair;';
    
    CREATE OPERATOR ~> (LEFTARG = text, RIGHTARG = text, FUNCTION = pair);
    
  • Bagel mak

[...]

PostgreSQL Internals Part 1: Understanding Database Cluster, Database and Tables
Posted by semab tariq in Stormatics on 2024-03-15 at 07:55

Learn about database clusters, databases, and tables to optimize performance and unleash the full potential of PostgreSQL for your projects.

The post PostgreSQL Internals Part 1: Understanding Database Cluster, Database and Tables appeared first on Stormatics.

PG Phriday: Redefining Postgres High Availability
Posted by Shaun M. Thomas on 2024-03-15 at 01:27
What is High Availability to Postgres? I’ve staked my career on the answer to that question since I first presented an HA stack at Postgres Open in 2012, and I still don’t feel like there’s an acceptable answer. No matter how the HA techniques have advanced since then, there’s always been a nagging suspicion in my mind that something is missing. But I’m here to say that a bit of research has uncovered an approach that many different Postgres cloud vendors appear to be converging upon.

PgTraining Online Event 2024 (italian)
Posted by Luca Ferrari on 2024-03-15 at 00:00

We are back with another event!

PgTraining Online Event 2024 (italian)

PgTraining, the amazing Italian professionals who spread the word about PostgreSQL and whom I joined in recent years, is organizing another online event (webinar) on 19th April 2024.
Following the success of the previous editions, we decided to provide another afternoon full of PostgreSQL talks, in the hope of improving the adoption of this great database.


The event will consist of three hours of talks about PL/Java, pgvector, and hot upgrades via logical replication.
As in previous editions, the webinar will be presented in Italian. Attendees will be free to actively participate and ask questions both during the talks and at the end of the whole event.

In the pure spirit of PgTraining, the event will be free of charge, but registration is required to participate and the number of available seats is limited, so hurry up and get your free ticket as soon as possible!
The material will be made freely available after the event, but no live recording will be published.

pgagroal command refactoring (again!) and a new contributor!
Posted by Luca Ferrari on 2024-03-15 at 00:00

Changes in pgagroal-cli and pgagroal-admin.

pgagroal command refactoring (again!) and a new contributor!

Last year I introduced a way in pgagroal-cli and pgagroal-admin to arrange commands in a more consistent and manageable way, deprecating some commands too.

Today, a new contributor to the project, Henrique de Carvalho, committed a patch that greatly improves the way commands are handled internally.

Users will not notice any particular difference (along the way, a bug in the handling of deprecated commands has also been fixed), but the changes in the code are very important: all the commands are now organized in a list of structs that provides a more accurate way of handling errors, missing arguments or command parts, and logging.

I began thinking about this refactoring months ago, but never got the time to dig into the changes. It all began with an annoying problem with some misspelled commands, which reported a wrong error message to the user.

And now, thanks to Henrique’s contributions, pgagroal has taken another step towards a more complete and robust system.

Understanding the PostgreSQL Query Planner to Improve Query Performance
Posted by Umair Shahid in Stormatics on 2024-03-14 at 07:59

Learn how the PostgreSQL query planner estimates costs and leverages configuration parameters for efficient data retrieval and increased database performance.

The post Understanding the PostgreSQL Query Planner to Improve Query Performance appeared first on Stormatics.

A day in the life of a PostgreSQL engineer at Fujitsu – Introducing the blog series
Posted by Amit Kapila in Fujitsu on 2024-03-14 at 00:56

I am constantly impressed by the talent and commitment of the Fujitsu engineers that work hard to make PostgreSQL the best database in the market. So, I thought that more people should know these passionate professionals, and what a day in their life is like.

Postgres Performance Boost: HOT Updates and Fill Factor
Posted by Elizabeth Garrett Christensen in Crunchy Data on 2024-03-13 at 13:00

There’s a pretty HOT performance trick in Postgres that doesn’t get a ton of attention. There’s a way for Postgres to only update the heap (the table), avoiding having to update all the indexes. That’s called a HOT update; HOT stands for heap-only tuple.

Understanding HOT updates and their interaction with page fill factor can be a really nice tool in the box for getting performance with existing infrastructure. I’m going to review HOT updates and how to encourage them in your Postgres updates.

Heap Only Tuple (HOT) updates

Modern versions of Postgres are able to perform HOT (Heap Only Tuple) updates. A HOT update occurs when a new version of a row can be stored on the same page as the original version, without the need to move the row to a new page.

With HOT updates, if the updated row can still fit on the same data page as the original row, Postgres adds a new row on the same page, while keeping the old row data since it may still be in use by other processes. Postgres also adds a HOT chain link from the old row to the new row, so it can find the new row when a HOT update occurs.

HOT updates and indexes

The way this normally works in PostgreSQL without HOT updates is that if you have a table that is indexed, and one row (tuple) is updated, the update must also be applied to the index. With HOT updates, Postgres will skip the index update if you aren’t updating the index key.

By skipping the index update step, HOT updates reduce the amount of disk I/O and CPU processing required for an update operation, leading to better performance, especially for tables with large indexes or frequent updates.
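
A minimal sketch of encouraging HOT updates via the fillfactor storage parameter, and of verifying the effect (the table name is illustrative):

    -- leave ~30% of each heap page free so updated rows fit on the same page
    ALTER TABLE accounts SET (fillfactor = 70);
    VACUUM FULL accounts;  -- rewrite the table so existing pages honor the new setting

    -- compare total updates to heap-only updates
    SELECT n_tup_upd, n_tup_hot_upd
      FROM pg_stat_user_tables
     WHERE relname = 'accounts';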

Postgres 16 HOT updates and BRIN indexes

Prior to Postgres 16, any index on an updated column would block updates from being HOT. An update in Postgres 16 makes HOT updates more feasible since BRIN (summarizing) indexes do not contain references to actual rows, just to the pages. This allows columns indexed with BRIN to be updated and still have HOT updates occur. Though some care shou

[...]

Using binary-sorted indexes
Posted by Daniel Vérité on 2024-03-13 at 10:49
In a previous post, I mentioned that Postgres databases often have text indexes sorted linguistically rather than bytewise, which is why they need to be reindexed on libc or ICU upgrades. In this post, let’s discuss how to use bytewise sorts, and what are the upsides and downsides of doing so.
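
For orientation, a bytewise sort is what the C collation gives you; a minimal sketch of declaring an index that way (table and column names are illustrative):

    -- bytewise index: immune to libc/ICU collation changes
    CREATE INDEX users_email_c_idx ON users (email COLLATE "C");

    -- queries must request the same collation to use it
    SELECT * FROM users WHERE email COLLATE "C" >= 'm' ORDER BY email COLLATE "C";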

Enforcing join orders in PostgreSQL
Posted by Hans-Juergen Schoenig in Cybertec on 2024-03-12 at 15:20

After pgconfeu23 in Prague – which was an excellent event – I decided to share some of the things I presented there as a blog post, to maybe shed some light on those topics. One of the ideas presented was the way PostgreSQL handles joins, and especially join orders. Internally, PostgreSQL does a good job of optimizing queries, but how does it really work?

Let us create some tables first:

plan=# SELECT   'CREATE TABLE x' || id || ' (id int)' 
 FROM      generate_series(1, 5) AS id;
         ?column?         
--------------------------
 CREATE TABLE x1 (id int)
 CREATE TABLE x2 (id int)
 CREATE TABLE x3 (id int)
 CREATE TABLE x4 (id int)
 CREATE TABLE x5 (id int)
(5 rows)

In PostgreSQL we can easily create SQL using SQL. The beauty of psql is that one can simply run \gexec to use the previous output as new input:

plan=# \gexec
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE

Voila, we have 5 tables which can serve as a sample data structure.

Joining tables in PostgreSQL

The following query shows a simple join using the tables we have just created:

plan=# explain (timing, analyze)  SELECT *
           FROM    x1 JOIN x2 ON  (x1.id = x2.id)
                      JOIN x3 ON  (x2.id = x3.id)
                      JOIN x4 ON  (x3.id = x4.id)
                      JOIN x5 ON  (x4.id = x5.id);
…
Planning Time: 0.069 ms
Execution Time: 0.046 ms

What is the important observation here? Let us take a look at planning time. PostgreSQL needs 0.069 milliseconds (per the plan above) to find the best execution plan (= execution strategy) to run the query. The question arising is: where does the planner spend its time when planning the query? The thing is: even when using explicit joins as shown above, PostgreSQL will join those tables implicitly and decide on the best join order. What does that mean in real life? Well, let us consider a join “a join b join c”: even if we write SQL that says join “a to b”, the optimizer might still decide to vote for “c join a join b” in case it guarantees the same

[...]
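
For reference, the standard knob for pinning an explicit join order is the join_collapse_limit setting; when set to 1, the planner keeps the JOINs in exactly the order they are written (a sketch, reusing the tables created above):

    SET join_collapse_limit = 1;  -- do not reorder explicit JOIN syntax
    EXPLAIN SELECT *
        FROM x1 JOIN x2 ON (x1.id = x2.id)
                JOIN x3 ON (x2.id = x3.id);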

How to use pgbench to test PostgreSQL® performance
Posted by Francesco Tisiot in Aiven on 2024-03-12 at 15:00

Testing database performance is a must in every company. Although everyone's needs are slightly different, a good starting point for a PostgreSQL® database is pgbench: a tool shipped with the PostgreSQL installation that allows you to stress test a local or remote database.
This blog post showcases how to install pgbench (on a Mac) and use it to create load on a remote PostgreSQL database on Aiven.

Need a FREE PostgreSQL database?
🦀 Check Aiven's FREE plans! 🦀
Need to optimize your SQL query?
🐧 Check EverSQL! 🐧

Install pgbench locally

On a Mac, pgbench comes with the default PostgreSQL installation via brew. Therefore, to have pgbench all you need to do is:

brew install postgresql

Create a test PostgreSQL environment

While you could create a test PostgreSQL environment locally (the 1brc challenge blog post contains all the details), this time we'll create an Aiven for PostgreSQL service in minutes:

  • Navigate to Aiven Console
  • Create an account
  • Create an Aiven for PostgreSQL service on your favorite cloud provider and region
  • Select the startup-4 plan; it will be sufficient for the test.

Use pgbench to load test a PostgreSQL database

The Aiven for PostgreSQL database comes with a defaultdb database we can use for our testing.

Step 1: Initialize the database

All we need is to grab the connection details from the Aiven Console and we are ready to initialize the database with:

pgbench -h <HOSTNAME>    \
    -p <PORT>            \
    -U <USERNAME>        \
    -i                   \
    -s <SCALEFACTOR>     \
    <DATABASE_NAME>

Where:

  • <HOSTNAME> is the database hostname
  • <PORT> is the database port
  • <USERNAME> is the database user
  • <DATABASE_NAME> is the database name
  • <SCALEFACTOR> is the test scale factor; 100 could be a good place to start. The default is 1, which will create a 16MB database, while a scale of 100 will create a 1.6GB database.

We'll be prompted to write the password, that

[...]
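
After initialization completes, the benchmark run itself reuses the same connection flags; a typical invocation looks like this (client count, threads, and duration here are illustrative):

    # 10 client sessions, 2 worker threads, 60-second run
    pgbench -h <HOSTNAME> -p <PORT> -U <USERNAME> -c 10 -j 2 -T 60 <DATABASE_NAME>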

Managing Time Series Data Using TimeScaleDB-Powered PostgreSQL
Posted by Robert Bernier in Percona on 2024-03-12 at 14:05
PostgreSQL extensions are great! Simply by adding an extension, one transforms what is an otherwise vanilla general-purpose database management system into one capable of processing data requirements in a highly optimized fashion. Some extensions, like pg_repack, simplify and enhance existing features, while other extensions, such as PostGIS and pgvector, add completely new capabilities. I’d like […]

Transforming and Analyzing Data in PostgreSQL
Posted by Ryan Booz in Redgate on 2024-03-12 at 10:07

In our data-hungry world, knowing how to effectively load and transform data from various sources is a highly valued skill. Over the last couple of years, I’ve learned how the many data manipulation functions in PostgreSQL can supercharge your data transformation and analysis process, using just PostgreSQL and SQL.

For the last couple of decades, “Extract, Transform, Load” (ETL) has been the primary method for manipulating and analyzing data. In most cases, ETL relies on an external toolset to help acquire different forms of data, slice and dice it into a form suitable for relational databases, and then insert the results into your database of choice. Once it’s in the destination table with a relational schema, querying and analyzing it is much easier.

There are advantages to using a dedicated transformation tool in your workflow. Typically, a small team learns the software and handles all this work, so licensing is clear, and the best tools often allow reusable components that make it easier to build transformation pipelines. However, there are also drawbacks. Most notably, your ability to manipulate the data is tied to an extra tool, and often one that only the small team knows how to use.

What if you could flip the process around a bit and load the data into PostgreSQL before you transform it, using the power of SQL?

We call this the “Extract Load Transform” (ELT) method. And PostgreSQL happens to be very well suited for doing some pretty complex transformations if you know a few basic principles.

In this series, we’re going to discuss:

  • PostgreSQL functions that can slice and dice raw data to make it easier to query
  • How to use those functions with a CROSS JOIN LATERAL to unlock their full power (a small taste follows this list).
  • Using CTEs to build complex queries with functions and CROSS JOIN LATERAL
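
As promised above, a hedged sketch of the second point: splitting a delimited column into rows (table and column names are illustrative):

    -- one output row per tag in the comma-separated tags column
    SELECT o.id, t.tag
      FROM orders AS o
     CROSS JOIN LATERAL unnest(string_to_array(o.tags, ',')) AS t(tag);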

I hope by the end of this series you’ll appreciate the capabilities of PostgreSQL and some of the advantages for doing data transformation within the database. And alth

[...]

This Friday, I am presenting at SCaLE!
Posted by Henrietta Dombrovskaya on 2024-03-12 at 03:11

It finally happened! My security talk was accepted, and I will present it on March 15! I can’t believe it is happening, and I hope this is the first but not the last time! If you are going to be at the conference, please stop by!

CloudNativePG Recipe 3 - What!?! No superuser access?
Posted by Gabriele Bartolini in EDB on 2024-03-11 at 20:46

Explore the secure defaults of a PostgreSQL cluster in this CloudNativePG recipe, aligning with the principle of least authority (PoLA). Our commitment to security and operational simplicity shines through default configurations, balancing robust protection with user-friendly settings. Advanced users can customize as needed. The article navigates default intricacies, PostgreSQL Host-Based Authentication, and the scenarios for enabling superuser access. We also touch on the careful use of the ALTER SYSTEM command, emphasizing our dedication to secure and simple operations.

The importance of PostgreSQL timelines
Posted by Stefan Fercot in Data Egret on 2024-03-11 at 15:28

Flashpoint: have you ever watched a sci-fi movie where the main character goes back in time, changes something there (e.g. saves his mother’s life) and then comes back to the present day, only to arrive in an alternate reality? Applied to PostgreSQL backups, that alternate reality, called a timeline, is a key notion for Point-in-Time Recovery.


Whenever an archive recovery completes, a new timeline is created to identify the series of WAL records generated after that recovery. The timeline ID number is part of WAL segment file names so a new timeline does not overwrite the WAL data generated by previous timelines. For example, in the WAL file name 0000000100001234000055CD, the leading 00000001 is the timeline ID in hexadecimal.

Let’s take an example:

With continuous WAL archiving enabled and a backup taken at 03.00am, imagine one of your colleagues coming to you at 04.00pm: “I forgot a WHERE clause in a DELETE statement 1 hour ago and dropped some important data. Can you help me?”

Of course we can! We just have to restore the backup and ask PostgreSQL to stop its recovery before the DELETE with e.g. recovery_target_time = ‘2024-03-08 15:00:00 UTC’ (or better, use recovery_target_xid if we know the transaction id of that DELETE statement).
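
Concretely, on PostgreSQL 12 and later that recovery setup is a few lines (paths and timestamp are illustrative):

    # postgresql.conf on the restored cluster
    restore_command = 'cp /archive/%f %p'
    recovery_target_time = '2024-03-08 15:00:00 UTC'

    # then create the signal file and start the server
    touch $PGDATA/recovery.signal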

At the end of the recovery, the restored PostgreSQL cluster is still living in the present (> 04.00pm) but in an alternate reality, with the deleted data restored but without all the data that was added/removed/updated afterwards: timeline 2.

Now, all your other colleagues are angry because they inserted some very important data at 03.30pm and they want those data back! No problem, we still have our backup, our WAL archives and we can use recovery_target_time = ‘2024-03-08 15:30:00 UTC’ 🙂

However, if you’re running PostgreSQL 12 or later, after the recovery you won’t have the inserted data you wanted 🙁

That’s because of recovery_target_timeline! By default, PostgreSQL will follow the latest timeline found. So, at 03.00pm, it will switch to

[...]

Artur Zakirov
Posted by Andreas 'ads' Scherbaum on 2024-03-11 at 14:00
PostgreSQL Person of the Week Interview with Artur Zakirov: I currently reside in Berlin, Germany, and work at Adjust. I grew up in a village in Bashkortostan, Russia. It is located in a green area far from the hustle of big cities. During my early years I didn’t think about living in big cities. Today it’s hard to imagine myself away from a vibrant city. I moved to Berlin around three years ago. And I lived in Tokyo, Japan, around one year before moving to Berlin.

PostgreSQL March Meetup in Berlin
Posted by Andreas Scherbaum on 2024-03-11 at 07:22

On March 5th, 2024, we had the PostgreSQL March Meetup in Berlin. Zalando hosted the Meetup in their Berlin Headquarter near the Mercedes-Benz Arena, close to the River Spree, and the Oberbaum Bridge.

February 20 recording – finally!
Posted by Henrietta Dombrovskaya on 2024-03-10 at 22:49

Here is

Using Polars & DuckDB with Postgres
Posted by Adrian Klaver on 2024-03-08 at 23:37
This post will look at two relatively new programs for transforming data sets and their applicability to moving data into a Postgres database. Both programs have Python API’s that will be used for this exercise. Polars is written in Rust … Continue reading

PG Phriday: Getting It Sorted
Posted by Shaun M. Thomas on 2024-03-08 at 20:36

When it comes to reordering the items in a list, databases have long had a kind of Faustian Bargain to accomplish the task. Nobody really liked any of the more common solutions, least of all the poor database tasked with serving up the inevitable resulting hack.

Postgres is no different in this regard. Consider a list_item table like this, demonstrating five items in a to-do list:
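
A minimal illustrative shape for such a table (hypothetical column names; not the post’s actual definition):

    -- hypothetical sketch; the full post defines its own list_item table
    CREATE TABLE list_item (
        list_id    bigint  NOT NULL,
        sort_order integer NOT NULL,  -- position of the item within the list
        label      text    NOT NULL,
        PRIMARY KEY (list_id, sort_order)
    );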

[...]