My first thought on seeing a temporal join in 2008 was, “Why is this query so complex?” The company I was at relied heavily on database queries, as it was a CRM and student success tracking system for colleges and universities. The query returned a filtered list of users and their last associated record from a second table. The hard part about the query isn’t returning the last timestamp or even performing the joins; it’s returning only the last associated record from the second table for each user.
Back in 2008, we didn’t have window functions or CTEs, so the query algorithm was a series of nested tables that looked like this:
SELECT
  *
FROM users, ( -- find the record for the last second_table by created_at and user_id
  SELECT
    second_table.*
  FROM second_table, ( -- find the last second_table created_at per user_id
    SELECT
      user_id,
      max(created_at) AS created_at
    FROM second_table
    GROUP BY 1
  ) AS last_second_table_at
  WHERE
    last_second_table_at.user_id = second_table.user_id
    AND second_table.created_at = last_second_table_at.created_at
) AS last_second_table
WHERE users.id = last_second_table.user_id;
See the Sample Code section below for the schema and data to run these queries.
But, even that query was wrong because the second table may have records with duplicate created_at values. That was the source of a bug back in 2008 that resulted in duplicate rows being listed.
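One hedged way to de-duplicate that style of query (my sketch, not the original fix; it assumes second_table has a unique id column) is yet another level of nesting that picks max(id) among the tied rows:
SELECT users.*, second_table.*
FROM users, second_table, ( -- pick one id per user among rows tied on the latest created_at
  SELECT s.user_id, max(s.id) AS id
  FROM second_table AS s, (
    SELECT user_id, max(created_at) AS created_at
    FROM second_table
    GROUP BY 1
  ) AS last_at
  WHERE s.user_id = last_at.user_id
    AND s.created_at = last_at.created_at
  GROUP BY 1
) AS last_ids
WHERE users.id = last_ids.user_id
  AND second_table.id = last_ids.id;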
Obviously, we weren't using Postgres at the time because there has always been a simpler way to do this in Postgres using DISTINCT ON:
SELECT DISTINCT ON (u.id)
u.id,
u.name,
s.created_at AS last_action_time,
s.action_type
FROM users u
JOIN second_table s ON u.id = s.user_id[...]
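For completeness, here is a minimal sketch of how such a DISTINCT ON query is typically finished off; the ORDER BY, including the id tiebreaker that avoids the duplicate-timestamp bug, is my assumption rather than the original post's code:
SELECT DISTINCT ON (u.id)
  u.id,
  u.name,
  s.created_at AS last_action_time,
  s.action_type
FROM users u
JOIN second_table s ON u.id = s.user_id
ORDER BY u.id, s.created_at DESC, s.id DESC;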
In the last edition of this blog, the passenger list data from the HMS Titanic was loaded into a preliminary database. Now it is time to refine.
I am using DBeaver Enterprise 25.2.0, PostgreSQL 18, and GitHub Copilot with GPT-4.
Prompt: Any recommendations on improving this table for storage efficiency? This prompt was entered into the DBeaver AI Assistant.
To improve the storage efficiency of the "titanic" table, we can consider the following recommendations:
### 1. **Optimize Data Types**
- **"PassengerId"**: If the number of passengers is not expected to exceed 2,147,483,647, the `int4` type is fine. However, if the range is smaller (e.g., less than 32,767), you could use `int2` (smallint) to save space.
- **"Survived"**: Since this is likely a binary value (0 or 1), you can use `boolean` instead of `int4`.
- **"Pclass"**: If the class is limited to 1, 2, or 3, you can use `int2` (smallint) instead of `int4`.
- **"Sex"**: If this column only contains "male" and "female", consider using an `enum` type or a `char(1)` ('M' or 'F') to save space.
- **"Age"**: If precision beyond one decimal place is not required, consider reducing the precision or using `numeric` with a defined scale.
- **"SibSp" and "Parch"**: If these values are small integers (e.g., number of siblings/spouses or parents/children), you can use `int2` (smallint).
- **"Ticket"**: If ticket numbers are of fixed length, consider using `char(n)` instead of `varchar`.
- **"Fare"**: If precision beyond two decimal places is not required, consider using `numeric` with a defined scale (e.g., `numeric(10,2)`).
- **"Cabin"**: If cabin identifiers are of fixed length, consider using `char(n)` instead of `varchar`.
- **"Embarked"**: If this column only contains a small set of values (e.g., 'C', 'Q', 'S'), consider using an `enum` type or `char(1)`.
Comment: This is what I was looking for. Saving sp
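As a hedged illustration of how the data type suggestions above might be applied (column names follow the quoted identifiers the assistant used; verify them against your actual schema first):
-- Sketch only: convert a few columns of the "titanic" table to narrower types.
ALTER TABLE titanic
  ALTER COLUMN "Survived" TYPE boolean USING ("Survived" = 1),
  ALTER COLUMN "Pclass"   TYPE smallint,
  ALTER COLUMN "SibSp"    TYPE smallint,
  ALTER COLUMN "Parch"    TYPE smallint,
  ALTER COLUMN "Fare"     TYPE numeric(10,2),
  ALTER COLUMN "Embarked" TYPE char(1);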
If you’ve ever wondered how to set up OpenID Connect (OIDC) authentication in PostgreSQL, the wait is almost over.
We’ve spent some time exploring what it would take to make OIDC easier and more reliable to use with PostgreSQL. And now, we’re happy to share the first results of that work.
We’ve spoken to some of our customers and noticed a trend of moving away from LDAP to OIDC. Our MongoDB product already provides OIDC integration, and the team working on PostgreSQL products saw an opportunity with PostgreSQL 18.
A hash join is one of the most common join methods used by PostgreSQL and other relational databases. It works by building a hash table from the smaller input (called the build side) and then probing it with rows from the larger input (the probe side) to find matching join keys.
Hash joins are especially efficient for large, unsorted datasets—particularly when there are no useful indexes on the join columns.
This post uses a concrete example to explain how a hash join works. The example is run on PostgreSQL 18.
-- Create tables
CREATE TABLE regions (
id SERIAL PRIMARY KEY,
region_name TEXT
);
CREATE TABLE customers (
id SERIAL PRIMARY KEY,
name TEXT,
region_id INT
);
-- Insert sample data
INSERT INTO regions (region_name)
SELECT 'region_' || g FROM generate_series(1, 10) g;
INSERT INTO customers (name)
SELECT 'customer_' || g FROM generate_series(1, 100000) g;
-- Assign each customer a region
UPDATE customers
SET region_id = (random() * 9)::int + 1;
EXPLAIN
SELECT c.id, c.name, r.region_name
FROM regions r
LEFT JOIN customers c ON c.region_id = r.id;
Output:
Hash Right Join (cost=38.58..2575.95 rows=100000 width=50)
Hash Cond: (c.region_id = r.id)
-> Seq Scan on customers c (cost=0.00..2274.00 rows=100000 width=22)
-> Hash (cost=22.70..22.70 rows=1270 width=36)
-> Seq Scan on regions r (cost=0.00..22.70 rows=1270 width=36)
At first glance, this seems weird: the SQL uses LEFT JOIN, but the plan shows RIGHT JOIN. Why does that happen? This post is going to explain the puzzle.
Let’s assume simplified data:
customers
| id | name | region_id |
|---|---|---|
| 1 | Alice | 1 |
| 2 | Bob | 2 |
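A hedged aside while we're here: since a LEFT JOIN b is logically equivalent to b RIGHT JOIN a, the planner is free to hash the small regions table and probe it with customers, and the executor then reports the flipped direction as a Hash Right Join. In other words, the plan above computes the same result as:
SELECT c.id, c.name, r.region_name
FROM customers c
RIGHT JOIN regions r ON c.region_id = r.id;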
Belgium Meetup: Tuesday, 14 October 2025 - organized by Boriss Mejias & Stefan Fercot
Speakers:
Barcelona PostgreSQL User Group met on Tuesday, Oct 7, organized by Dave Pitts & Lauro Ojeda
Speakers: Dave Pitts, Lauro Ojeda
Ellyne Phneah released two books:
* The Social Code: Building a PostgreSQL Community Wired for Belonging
* Decode PostgreSQL: Understanding the World's Most Powerful Open-Source Database Without Writing Code

PostgreSQL can scan B-tree indexes in both directions. That means that there is little need to create an index in descending order (one created with the DESC clause). However, there are some cases where you need a descending index. There are also some corner cases where a descending index performs better than an ascending one. Follow me to explore use cases for the DESC clause!
ORDER BY clauses
This is the obvious use case for a descending index. There is a reason why CREATE INDEX offers the same clauses as ORDER BY: if you want a B-tree index to support an ORDER BY clause, its sorting order must match the ORDER BY clause precisely. So, to support a query like
SELECT col2 FROM tab ORDER BY col1 DESC NULLS LAST;
you need to create an index like
CREATE INDEX tab_col1_desc_idx ON tab (col1 DESC NULLS LAST);
But since PostgreSQL can scan indexes in both directions, the following index will also work:
CREATE INDEX tab_col1_idx ON tab (col1 NULLS FIRST);
However, you cannot avoid the DESC clause if you want to support an ORDER BY clause with both ascending and descending columns:
SELECT col2 FROM tab ORDER BY col1 DESC, col2 NULLS LAST;
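A matching index would presumably look like the following (a sketch; the index name is made up, and since ASC NULLS LAST is the default, only col1 needs an explicit direction):
CREATE INDEX tab_col1_desc_col2_idx ON tab (col1 DESC, col2 NULLS LAST);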
In a previous article, I wrote some more about mixed ORDER BY clauses in the context of keyset pagination. There, you can find some more creative ideas for using descending indexes.
This is a somewhat more exotic use case. I'll demonstrate it with the following table:
CREATE UNLOGGED TABLE large (
  id bigint NOT NULL
);
CREATE INDEX large_id_idx ON large (id);
CREATE INDEX large_id_desc_idx ON large (id DESC);
One of these two indexes is clearly redundant. But look what happens if I INSERT rows in descending order:
INSERT INTO large (id)
SELECT * FROM generate_series(10000000, 1, -1);
SELECT ind AS index_name,
pg_size_pretty(pg_relation_size(ind)) AS index_size
FROM (VALUES ('large_id_idx'::regclass),
('large_id_desc_idx'::regclass)) AS i(ind);
index_na[...]
For many Postgres DBAs, Kubernetes feels like a new, complex world. But what if your existing skills were the key to unlocking it? This article demystifies cloud-native Postgres by revealing a first handful of kubectl and kubectl cnpg commands that act as your direct translator. I’ll move past the intimidating YAML to focus on the practical, imperative commands you’ll actually use to troubleshoot, inspect, and even perform a production switchover. You’ll see how your core DBA work maps directly to this new environment, helping you build the confidence to take the next step into the cloud-native world.
In this series, we talk about the many different ways you can speed up PostGIS. Today let’s talk about looking across the queries with pg_stat_statements and some basic tuning.
A reasonable question to ask, if you are managing a system with variable performance is: “what queries on my system are running slowly?”
Fortunately, PostgreSQL includes an extension called “pg_stat_statements” that tracks query performance over time and maintains a list of high cost queries.
CREATE EXTENSION pg_stat_statements;
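Note that pg_stat_statements also has to be preloaded at server start before it collects anything; a minimal sketch (append to any existing entries, then restart PostgreSQL before running CREATE EXTENSION above):
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements';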
Now you will have to leave your database running for a while, so the extension can gather up data about the kind of queries that are run on your database.
Once it has been running for a while, you have a whole table – pg_stat_statements – that collects your query statistics. You can query it directly with SELECT * or you can write individual queries to find the slowest queries, the longest running ones, and so on.
Here is an example listing the 10 slowest queries, ranked by mean execution time.
SELECT
total_exec_time,
mean_exec_time,
calls,
rows,
query
FROM pg_stat_statements
WHERE calls > 0
ORDER BY mean_exec_time DESC
LIMIT 10;
“pg_stat_statements” is good at finding individual queries to tune, and the most frequent cause of slow queries is simply inefficient SQL or a need for indexing - see the first post in the series. Occasionally, though, performance issues crop up at the system level. The most frequent culprit is memory pressure. PostgreSQL ships with conservative default settings for memory usage, and some workloads benefit from more memory.
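As an illustration only (the values below are assumptions, not recommendations), raising the memory-related settings might look like this:
ALTER SYSTEM SET shared_buffers = '4GB';        -- requires a restart
ALTER SYSTEM SET work_mem = '64MB';             -- takes effect on reload
ALTER SYSTEM SET effective_cache_size = '12GB'; -- planner hint, reload is enough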
A database server looks like an infinite, accessible, reliable bucket of data. In fact, the server orchestrates data between the disk – which is permanent and slow – and the random access memory – which is volatile and fast – in order to provide the illusion of such a system.
When the balance between slow storage and fast memory is out of whack, syste
[...]I finally shared the set of functions that I use to process pgBadger raw output.
There will be more documentation, I promise, but at least the code is there, along with two of many presentations.
Postgres has earned its reputation as one of the world's most robust and feature-rich open-source databases. But what happens when your application grows beyond what a single database instance can handle? When your user base explodes from thousands to millions, and your data grows from gigabytes to terabytes?
This is where Postgres scaling becomes critical. The good news is that Postgres offers multiple pathways to scale, each with its own advantages and use cases. Since pgEdge Distributed Postgres and pgEdge Enterprise Postgres are 100% Postgres, all of the scaling techniques that follow also apply to your pgEdge cluster.
In this comprehensive guide, we'll explore three fundamental scaling approaches:
In August of this year, I published a blog entitled PostgreSQL’s incredible trip to the top with developers, which shows how Postgres has become the most used, most loved and most desired database according to the Stack Overflow annual developer survey. In that blog, I said I wanted to do the series in two parts. After some thought, I have decided to make it a three-part series. It will break down as follows:
Part 1: The aforementioned blog was just the facts regarding Postgres becoming the most loved, most desired and most used database by developers according to the Stack Overflow survey.
Part 2
In this tutorial we will create the PostgreSQL extension ban_sus_query. It will check that DML queries contain predicates; otherwise, it will throw an error.
Next, to avoid confusion, I will use the term contrib for a PostgreSQL extension, and extension for the PostgreSQL Hacker Helper VS Code extension.
This tutorial is intended not only for newcomers to PostgreSQL development, but also as a tutorial for the PostgreSQL Hacker Helper VS Code extension. You can find its documentation here.
PostgreSQL has infrastructure for building and installing contribs. In short, contribs share a template architecture: most parts are common to all of them.
So, to create a contrib faster, we will use the command PGHH: Bootstrap extension.
It will prompt us for which files to bootstrap - choose only C sources.
After that, our contrib files will be created:
The query execution pipeline has 3 stages: roughly, parsing/analysis (which produces a Query), planning (which produces a PlannedStmt), and execution.
Our logic will be added to the second stage, because we must check the real execution plan, not the Query.
This is because, after multiple transformations, the query can change in many ways - predicates can be deleted or added - so we may end up with a completely different query than the original query string.
To implement this, we will install a hook on the planner: planner_hook. Inside it, we will invoke the actual planner and check its output for the existence of predicates.
The starter code is the following:
#include "postgres.h"
#include "fmgr.h"
#include "optimizer/planner.h"
#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif
static planner_hook_type prev_planner_hook;
void _PG_init(void);
void _PG_fini(void);
static bool
is_sus_query(Plan *plan)
{
    /* ... */
    return false;
}

static PlannedStmt *
ban_sus_query_planner_hook(Query *parse,
                           const char *query_string,
                           int cursorOptions,
                           ParamListInfo boundParams)
{
    /* The body below is a sketch (assumption): run the real planner first,
     * then inspect its output. */
    PlannedStmt *stmt;

    if (prev_planner_hook)
        stmt = prev_planner_hook(parse, query_string, cursorOptions, boundParams);
    else
        stmt = standard_planner(parse, query_string, cursorOptions, boundParams);

    /* Reject plans whose plan tree carries no predicates. */
    if (is_sus_query(stmt->planTree))
        ereport(ERROR,
                (errmsg("ban_sus_query: DML query without predicates is not allowed")));

    return stmt;
}

One of the ongoing challenges in database management systems (DBMS) is maintaining consistent data across multiple instances (nodes) that can independently accept client connections. If one node fails in such a system, the others must continue to operate without interruption - accepting connections and committing transactions without sacrificing consistency. An analogy for a single DBMS instance might be staying operational despite a RAM failure or intermittent access to multiple processor cores.
In this context, I would like to revisit the discussion about the Postgres-based multi-master problem, including its practical value, feasibility, and the technology stack that needs to be developed to address it. By narrowing the focus of the problem, we may be able to devise a solution that benefits the industry.
I spent several years developing the multi-master extension in the late 2010s until it became clear that the concept of essentially consistent multi-master replication had reached a dead end. Now, after taking a long break from working on replication, changing countries, residency, and companies, I am revisiting the Postgres-based multi-master idea to explore its practical applications.
First, I want to clarify the general use case for multi-master replication and highlight its potential benefits. Of course, any technology must balance its capabilities against the needs it aims to address. Let's explore this balance within the context of multi-master replication.
Typically, clients consider a multi-master solution when they hit a limit in connection counts for their OLTP workloads. They often have a large number of clients, an N transaction-per-second (TPS) workload, and a single database. They envision a solution that involves adding another identical server, setting up active-active replication, and doubling their workload.
[...]
The PGConf.EU schedule can feel like a parallel query gone wild: so many great talks, but not enough CPU.
I built this guide to help my fellow database nerds skip the overwhelm and enjoy the best prod-DBA focussed sessions without a single deadlock.
Follow this path, and you’ll cruise through the conference like a perfectly tuned autovacuum.
11:15 – 12:05 in Omega 1 : Don’t Do That!
Laurenz Albe reminds us that every bad Postgres habit comes with a sequel called “incident report.”
13:05 – 13:35 in Omega 2 : Parsing Postgres Logs the Non-pgBadger Way
Kaarel Moppel shows that pgweasel and caffeine can out-analyze any dashboard.
13:45–14:35 in Alfa : Improved Freezing in Postgres Vacuum: From Idea to Commit
Melanie Plageman walks us through the icy depths of tuple immortality.
14:45–15:35 in Omega 2 : Operational Hazards of Running PostgreSQL Beyond 100 TB
Teresa Lopes shares real stories and engineering lessons from scaling Postgres into the terabyte realm, where every decision costs you.
16:05–16:55 in Omega 2 : What You Should Know About Constraints (and What’s New in 18)
Gülçin Yıldırım Jelínek explores how new enhancements to constraints in PG 18 make data integrity both smarter and more flexible.
17:05–17:55 in Omega 1 : Hacking pgvector for Performance
Daniel Krefl reveals clever hacks to push filtering and indexing deeper into pgvector for faster, leaner similarity searches.
09:25–10:15 in Omega 2 : Unified Observability: Monitoring Postgres Anywhere with OpenTelemetry (Yogesh Jain)
Learn how to unify metrics, logs, and traces across cloud, containers, and bare-metal Postgres instances using OpenTelemetry to build scalable, vendor-agnostic observability.
10:25–10:55 in Omega
When someone asks about Postgres tuning, I always say “it depends”. What “it” is can vary widely but one major factor is the read and write traffic of a Postgres database. Today let’s dig into knowing if your Postgres database is read heavy or write heavy.
Of course, write heavy or read heavy can largely be inferred from your business logic. Social media app: read heavy. IoT logger: write heavy. But many of us have mixed-use applications. Knowing your write and read load can help you make other decisions about tuning and architecture priorities with your Postgres fleet.
Understanding whether a Postgres database is read-heavy or write-heavy is paramount for effective database administration and performance tuning. For example, a read-heavy database might benefit more from extensive indexing, query caching, and read replicas, while a write-heavy database might require optimizations like faster storage, efficient WAL (Write-Ahead Log) management, table design considerations (such as fill factor and autovacuum tuning) and careful consideration of transaction isolation levels.
By reviewing a detailed read/write estimation, you can gain valuable insights into the underlying workload characteristics, enabling informed decisions for optimizing resource allocation and improving overall database performance.
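As one hedged example (my own sketch, not from the post), the cumulative statistics in pg_stat_database give a rough row-level read/write breakdown:
-- Rows read vs. rows written for the current database
SELECT datname,
       tup_returned + tup_fetched               AS rows_read,
       tup_inserted + tup_updated + tup_deleted AS rows_written
FROM pg_stat_database
WHERE datname = current_database();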
The challenge here in looking at Postgres like this is that reads and writes are not really equal.
For next month, I'm scheduling 2 or 3 discussions of Matthias van de Meent's talk, Improving scalability; Reducing overhead in shared memory, given at 2025.pgconf.dev (talk description here). If you're interested in joining us, please sign up using this form and I will send you an invite to one of the sessions. Thanks to Matthias for agreeing to attend the sessions, and to Melanie Plageman for agreeing to serve as host. (I normally host, but am taking a month off. We will also skip December due to the end-of-year holidays.)
A couple times within the past month, I’ve had people send me a message asking if I have any suggestions about where to learn postgres. I like to share the collection of links that I’ve accumulated (and please send me more, if you have good ones!) but another thing I always say is that the public postgres slack is a nice place to see people asking questions (Discord, Telegram and IRC also have thriving Postgres user communities). Trying to answer questions and help people out can be a great way to learn!
Last month there was a brief thread on the public postgres slack about the idea of sanitizing SQL, and this has been stuck in my head for a while.
The topic of sensitive data and SQL is actually pretty nuanced.
First, I think it’s important to directly address the question of how to treat database schemas – table and column names, function names, etc. We can take our cue from the large number of industry vendors with data catalog, data lineage and data masking products. Schemas should be internal and confidential to a company – but they are not sensitive in the same way that PII or PCI data is. Within a company, it’s desirable for most schemas to be discoverable by engineers across multiple development teams – this is worth it for the benefits of better collaboration and better architecture of internal software.
Unfortunately, the versatile SQL language does not cleanly separate things. A SQL statement is a string that can mix keywords and schema and data all together. As Benoit points out in the slack thread – there are prepared (parameterized) statements, but you can easily miss a spot and end up with literal strings in queries. And I would add to this that most enterprises will also have occasional needs for manual “data fixes” which may involve basic scripts where literal values are common.
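As a small illustration (the table and column names here are made up), a parameterized statement separates the data value from the SQL text itself:
-- Hypothetical example: the statement text contains only $1, not the literal value
PREPARE find_account (text) AS
  SELECT id, name FROM accounts WHERE email = $1;
EXECUTE find_account('alice@example.com');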
Benoit’s suggestion was to run a full parse of the query text. This is a good idea – in fact PgAnalyze already maintains a standalone open-source library which can be used to directly leverage Postgres’
[...]Huge thanks to everyone who came to the Prairie Postgres meetup and celebrated our first birthday with us! Thank you for helping me to rehearse my talk, and for your insightful questions!
Here are my presentation slides:
And we had a cake!
If you’ve been following the buzz around PostgreSQL, you’ve probably already heard that database level open source data-at-rest encryption is now available thanks to the Transparent Data Encryption (TDE) extension available in the Percona Distribution for PostgreSQL. So naturally, the next question is:
Where’s Percona Distribution for PostgreSQL 18?
It’s coming.
It’s taking a bit of time, for all the right reasons.
Something I’ve presented about recently (a couple times, now!) is how we can make AI actually useful with Postgres. Not the mystical robot overlord kind of AI that people worry about, but the practical, math-heavy kind that can actually help solve real problems with the data you already have.
Let me be clear up front: AI isn't magic. It's math. Lots and lots of math. And if you can write SQL queries (which, let's be honest, is why you're here), you can build AI applications with Postgres. Let me show you how.
As was discussed in #PostgresMarathon 2-002, for a simple SELECT from a table, at planning time, Postgres locks the table and all of its indexes with AccessShareLock. A simple demo to remind us of that (let me be a bit weird here and save some bytes when typing SQL):
test=# create table t();
CREATE TABLE
test=# create index on t((1));
CREATE INDEX
test=# create index on t((1));
CREATE INDEX
test=# create index on t((1));
CREATE INDEX
test=# create index on t((1));
CREATE INDEX
test=# create index on t((1));
CREATE INDEX
test=# \d t
Table "public.t"
Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+---------
Indexes:
"t_expr_idx" btree ((1))
"t_expr_idx1" btree ((1))
"t_expr_idx2" btree ((1))
"t_expr_idx3" btree ((1))
"t_expr_idx4" btree ((1))
test=#
test=# begin; explain select from t;
BEGIN
QUERY PLAN
-----------------------------------------------------
Seq Scan on t (cost=0.00..39.10 rows=2910 width=0)
(1 row)
test=*# select relation::regclass, mode from pg_locks where pid = pg_backend_pid();
relation | mode
-------------+-----------------
t_expr_idx2 | AccessShareLock
pg_locks | AccessShareLock
t_expr_idx3 | AccessShareLock
t_expr_idx4 | AccessShareLock
t_expr_idx | AccessShareLock
t_expr_idx1 | AccessShareLock
t | AccessShareLock
| ExclusiveLock
(8 rows)
test=*#
– indeed, all indexes locked.
To mitigate it, we can just use prepared statements. Let's create one:
prepare test_query (int) as select from t;
And then run this snippet 7 times:
begin;
explain (verbose) execute test_query(1);
select relation::regclass, mode
from pg_locks
where pid = pg_backend_pid() and relation::regclass <> 'pg_locks'::regclass;
rollback;
Six (6) times, we'll see that all indexes are locked:
relation | mode
-------------+-----------------
t_expr_idx2 | AccessShareLock
t_expr_idx3 | AccessShareLock

By default, the PostgreSQL JDBC driver fetches all rows at once and attempts to load them into memory, whereas other drivers, such as Oracle's, fetch only 10 rows at a time by default. Both defaults have pros and cons; however, in the context of the types of workloads I see every day, the PostgreSQL default is typically not optimal.
As a frame of reference, the default PostgreSQL fetch size is just fine if you have queries that always return small result sets. If there is a chance that larger result sets could be retrieved, you will see a high variance in performance between small and large result sets, and an explicit fetch size should be considered.
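As a rough SQL-level analogy (my sketch, not from the post): setting a fetch size makes the driver stream the result in batches, much like reading from an explicit cursor instead of materializing everything at once:
BEGIN;
DECLARE big_cursor CURSOR FOR
  SELECT * FROM large_data_test;  -- table name borrowed from the test output below
FETCH 1000 FROM big_cursor;       -- repeat until no rows are returned
CLOSE big_cursor;
COMMIT;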
To demonstrate, I wanted to create a demo application which would create a simulated table with 2 million rows and select them with various fetch sizes:
For a query returning 2 million rows, leveraging the default fetch size produced the following results:
java -cp .:/home/shaneborden_google_com/java/postgresql-42.5.4.jar DatabaseFetchSizeTest
--- Database Setup ---
[MEMORY] Initial Baseline: 6.68 MB Used (Total Heap: 56.00 MB)
Existing table dropped.
New table created: large_data_test
Inserting 2000000 rows... Done in 44.36 seconds.
[MEMORY] After Data Insertion: 6.72 MB Used (Total Heap: 40.00 MB)
------------------------------------------------------------
--- Running Test: Small Chunk (1000 rows) (Fetch Size: 1000) ---
------------------------------------------------------------
Executing query with fetch size 1000...
[MEMORY] 1. Before Query Execution: 6.63 MB Used (Total Heap: 40.00 MB)
[MEMORY] 2. After Query Execution (Data Loaded/Cursor Open): 6.86 MB Used (Total Heap: 40.00 MB)
Test Complete.
Total Rows Read: 2000000
Total Time Taken: 1613 ms
[MEMORY] 3. After All Rows Processed: 6.67 MB Used (Total Heap: 68.00 MB)
Mode: STREAMING. Expect memory usage to remain low

I recently did three interviews for Edd Mann's Compiled Conversations. The first is a general interview about the Postgres project, its history, and challenges. The other is a two-part (1, 2) interview about how a table is created and populated internally. Total recording time is 3.5 hours.
At PGDay Lowlands 2025, I had the chance to give my very first Lightning Talk. My topic? The idea of thinking about PostgreSQL as a village.
It was inspired by a phrase I recently came across: ”If you want to go fast, go alone. If you want to go far, go together.” (I’m not sure who first said this, but I really like it.)
This saying stuck with me, because it beautifully captures the essence of open source — and especially the PostgreSQL community.
Because open source is not a solo journey. It’s a shared effort — like a village. A place where people contribute, share, and look after the common good, while valuing the diversity of skills, backgrounds, and perspectives that make it thrive.
When I think about a village, diversity comes to my mind. But why?
Because what you can find in a village is diversity at its core:
The same is true for PostgreSQL. We might come from different “families” — developers, DBAs, sales, marketers, trainers, consultants and many, many more — but we all live in the same PostgreSQL village. We all contribute in our own way.
Some write code, others teach, promote, or help organize. Some focus on users, others on infrastructure. The perspectives may differ, but the goal is the same: keeping the community and the project strong.
Although I am not a technician (my background is in marketing), I found my place in the PostgreSQL village.
Like many others, I first joined the community by connecting through shared interests. For me, the easiest entry point was attending my first PostgreSQL conference — PGConf.EU in Prague (2023). That’s where I truly experienced the spirit of the community, and it left a lasting impression on me. Later, I joined the PostgreSQL Soc
[...]I hope everyone's looking forward to PostgreSQL Conference Europe 2025 in Riga next week!
I can't wait to see all of my favourite Postgres people, and meet some new ones.
Before I get completely lost in last-minute preparations, I thought I'd take a few minutes to share my plans for the week. That way, you'll know where to find me if you want to introduce yourself, catch up, grab a coffee, or just ask me a question.
I'd love to hear what everyone else has planned too. Please let me know!
Without further ado, here's my agenda for the week:
New this year, the conference kicks off with the Community Events Day. We invited the PostgreSQL Europe Community to propose mini community-focused, participative events. There's something for everyone - from AI to Patroni and from Extensions to volunteer training.
I plan to start the week off gently with a morning of Postgres-themed crafting at Crafty Slonik. I'm looking forward to chatting about databases whilst doing some crochet. If crochet's not your thing, there'll be plenty of other options, or bring your own! If you're new to PGConf.EU and/or to the Postgres community, this is the perfect opportunity to meet some new folks and maybe make a conference buddy or two in an informal setting.
In the afternoon, I'll be at the Community Organizers Conf, learning from other PostgreSQL event organisers, and sharing what the PostgreSQL Europe Diversity Committee has been working on, especially how and why we've made PGConf.EU a Sunflower friendly event.
I'm excited to hear the keynote from Karen Sandler, of the Software Freedom Conservancy. It's going to be a great start to 3 days of amazing talks. I have no idea how I'm going to choose which ones to see!
I'll spend some time today hanging around the Community Celebration board - a place where we can pin things that we've drawn, written or created to celebrate what we love about the Postgres community and showcase the rich variety of voices and backgrou
[...]