Posted by Robert Haas in EnterpriseDB on 2020-01-23 at 16:04
Last summer, in a burst of sunny optimism and out of a desire for transparency, I posted a blog post about the then-new PostgreSQL Contributors Team, which was charged with updating the contributors page as required. Having now been on this mailing list for 7 months or so, I have a few - actually quite a few - comments about the whole problem space.

1. Introduction

This is part 3 of the blog “Understanding Security Features in PostgreSQL”, in which I will be discussing how to apply TLS to both the PostgreSQL server and client using the principles we have learned in part 2 of the blog. In the end, I will also briefly talk about Transparent Data Encryption (TDE) and security vulnerabilities.

Here is the overview of the security topics that will be covered in all parts of the blog:

Part 1:

  • PostgreSQL Server Listen Address
  • Host-Based Authentication
  • Authentication with LDAP Server
  • Authentication with PAM
  • Role-Based Access Control
  • Assign Table and Column Level Privileges to Users
  • Assign User Level Privileges as Roles
  • Assign Table and Column Level Privileges via Roles
  • Role Inheritance

Part 2:

  • Security Concepts around TLS
  • Symmetrical Encryption
  • Asymmetrical Encryption (a.k.a Public Key Cryptography)
  • Block Cipher Mode of Operation (a.k.a Stream Cipher)
  • Key Exchange Algorithm
  • TLS Certificate and Chain of Trust
  • Data Integrity Check / Data Authentication
  • TLS Cipher Suite and TLS handshake
  • TLS versions

Part 3:

  • Preparing TLS Certificates
  • Enabling Transport Layer Security (TLS) on the PostgreSQL Server
  • Enabling Transport Layer Security (TLS) on the PostgreSQL Client
  • TLS Connect Examples
  • Transparent Data Encryption (TDE)
  • Security Vulnerability

2. Preparing TLS Certificates

Before we can utilize TLS to secure both the server and the client, we must prepare a set of TLS certificates to ensure mutual trust. Normally, CA (Certificate Authority) certificates can be purchased from a trusted organization and used to create more CA-signed certificates for services and applications. In this section, I will show you how to create your own CA certificate and CA-signed certificates using the OpenSSL command-line tool for both the PostgreSQL server and client.
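As a preview of the steps this section walks through, here is a minimal sketch of a self-managed CA plus a CA-signed server certificate; the file names, subjects, and validity period are illustrative, not from the post:

$ # 1. create the CA key and a self-signed CA certificate
$ openssl req -new -x509 -days 365 -nodes \
    -out ca.crt -keyout ca.key -subj "/CN=my-root-ca"

$ # 2. create the server key and a certificate signing request (CSR)
$ openssl req -new -nodes \
    -out server.csr -keyout server.key -subj "/CN=db.example.com"

$ # 3. sign the server certificate with the CA
$ openssl x509 -req -in server.csr -days 365 \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

The same CSR-plus-sign steps can be repeated for a client certificate when mutual trust is wanted.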

You may also have heard the term self-signed certificate. This type of certificate is not s

[...]

1. Introduction

This is part 2 of the blog “Understanding Security Features in PostgreSQL”, in which I will be discussing TLS in greater detail. I will begin by going over some of the most important security concepts around TLS before jumping into enabling TLS on the PostgreSQL server. I believe it is crucial to have sufficient background information on TLS before tweaking the TLS settings on both the client and server sides.

In part 1 of this blog, we mostly discussed authentication and authorization (AA), which are important for identifying which clients are permitted to connect and which tables or columns they are permitted to operate on. Even with the strongest authentication and authorization, the actual communication between client and server will not be encrypted unless Transport Layer Security (TLS) is specifically enabled in the database server. TLS is one of the least understood but most commonly used security protocols, and it ensures the security of many HTTPS sites and other services. TLS is a big protocol, and this blog will describe how it works and how to enable TLS in your PostgreSQL server.

Here is the overview of the security topics that will be covered in all parts of the blog:

Part 1:

  • PostgreSQL Server Listen Address
  • Host-Based Authentication
  • Authentication with LDAP Server
  • Authentication with PAM
  • Role-Based Access Control
  • Assign Table and Column Level Privileges to Users
  • Assign User Level Privileges as Roles
  • Assign Table and Column Level Privileges via Roles
  • Role Inheritance

Part 2:

  • Security Concepts around TLS
  • Symmetrical Encryption
  • Asymmetrical Encryption (a.k.a Public Key Cryptography)
  • Block Cipher Mode of Operation (a.k.a Stream Cipher)
  • Key Exchange Algorithm
  • TLS Certificate and Chain of Trust
  • Data Integrity Check / Data Authentication
  • TLS Cipher Suite and TLS handshake
  • TLS versions

Part 3:

  • Preparing TLS Certificates
  • Enabling Transport Layer Security (TLS) on the PostgreSQL Server
  • Enabl
[...]

1. Introduction

PostgreSQL is packed with several security features for a database administrator to utilize according to his or her organizational security needs. Security is a very broad concept and can refer to completely different procedures and methodologies in different PostgreSQL components. This blog is divided into parts 1, 2, and 3, and I will explain security with regard to PostgreSQL version 12.1 and how it is practiced in different areas within the system.

In Part 1 of the blog, I will be discussing the basic security features that exist in PostgreSQL with emphasis on Host-based authentication methods as well as user-based access control with the concept of roles. If done right, we could have a much more robust database server and potentially reduce the attack surface on the server, protecting it from attacks like SQL injections. I will also briefly discuss a few of the advanced authentication methods such as LDAP and PAM authentication. There are many more advanced authentication methods supported and we will be producing more articles in the near future to cover more of these methods.

In Part 2 of the blog, I will be discussing TLS in greater detail, which I believe is crucial for a database administrator to understand before enabling TLS in the PostgreSQL server. TLS is a fairly large protocol and one of the least understood today; it involves many security components and cryptographic methods that can be quite confusing.

In Part 3 of the blog, I will be discussing how to apply TLS configurations to both the PostgreSQL server and client following the TLS principles discussed in Part 2. I will also briefly discuss Transparent Data Encryption (TDE), which the PG community is currently working on and which introduces another layer of security to the database environment.

Below is the overview of the security topics that will be covered in all parts of the blog:

Part 1:

  • PostgreSQL Server Listen Address
  • Host-Based Au
[...]
Posted by Kaarel Moppel in Cybertec on 2020-01-22 at 08:30

It’s been exactly half a year since the last major release of the pgwatch2 Open Source PostgreSQL monitoring tool, and I’m glad to announce that another huge set of useful features and improvements has found its way into the pgwatch2 code repository! The new version is incremented to v1.7 and, continuing the naming tradition, I’m also calling it “Feature Pack 6”, as it’s mostly about new features mixed with some bugfixes, although it’s been surprisingly error-free for the last year. Note that listed below are only the most significant changes since the last major version, v1.6, so please go through the GitHub changelog in detail if you’re upgrading from a previous version.

Highlights of pgwatch2 v1.7.0 – log parsing, recommendations engine, real-time stats tracking and new dashboards

The biggest feature for me is the log parsing support. This means that when the pgwatch2 agent is deployed locally on the DB server (also called “push” mode), it can “tail” the logs and send event counts to the metric store. And when is that useful? Well, mostly for alerting and health-check purposes. Note that we’re not storing error message contents, to avoid security and performance problems, but only the event counts, grouped by severity.

Other notable features are the “recommendations engine” and some infrastructure changes to treat metrics with “real-time” in their name a bit differently, so that they can be executed very often (even every second), thus providing a near real-time glimpse into database activities. The main practical usage is tracking live (currently executing) server processes conveniently where all other metrics are viewed – in Grafana. Also included is locking info, utilizing “pg_stat_activity” and “pg_locks” data. But not only that – live “explain plan” tracking for longer-lasting queries, for example, is now possible as well.

Call for feedback

And as always, please do let us know on GitHub if you’re still missing something in the tool or are experiencing any operati

[...]
On 20th of January 2020, Amit Kapila committed patch: Allow vacuum command to process indexes in parallel.   This feature allows the vacuum to leverage multiple CPUs in order to process indexes. This enables us to perform index vacuuming and index cleanup with background workers. This adds a PARALLEL option to VACUUM command where the … Continue reading "Waiting for PostgreSQL 13 – Allow vacuum command to process indexes in parallel."
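The commit adds a PARALLEL option to the VACUUM command; a minimal illustration of the new syntax (table name illustrative, not from the commit):

-- PostgreSQL 13: process indexes with up to 4 background workers
VACUUM (PARALLEL 4) my_large_table;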

One of the great things about PostgreSQL is its reliability: it is very stable and typically “just works.” However, there are certain things that can happen in the environment that PostgreSQL is deployed in that can affect its uptime, such as:

  • The database storage disk fails or some other hardware failure occurs
  • The network on which the database resides becomes unreachable
  • The host operating system becomes unstable and crashes
  • A key database file becomes corrupted
  • A data center is lost

There may also be downtime events that are due to normal operations, such as performing a minor upgrade, security patching of the operating system, a hardware upgrade, or other maintenance.

Fortunately, the Crunchy PostgreSQL Operator is prepared for this.

Crunchy Data recently released version 4.2 of the open source PostgreSQL Operator for Kubernetes. Among the various enhancements included within this release is the introduction of distributed consensus based high-availability (HA) for PostgreSQL clusters by using the Patroni high-availability framework.

What does this mean for running high-availability PostgreSQL clusters in Kubernetes, how does it work, and how do you create a high-availability PostgreSQL cluster by example? Read on to find out!

The Crunchy PostgreSQL Operator High-Availability Fundamentals

Posted by Andrew Dunstan in 2ndQuadrant on 2020-01-20 at 03:14
Recently there were some complaints about the behaviour of the jsonb_set function. Specifically, the complaint was that if the value argument of the function is null, the result is null. This happens because the function is declared STRICT, like many PostgreSQL functions. STRICT is in fact another way of spelling RETURNS NULL ON NULL INPUT. […]
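A quick illustration of the behaviour being complained about (standard STRICT semantics; the example values are mine):

-- any NULL argument makes a STRICT function return NULL
SELECT jsonb_set('{"a": 1, "b": 2}'::jsonb, '{b}', NULL);
-- returns NULL: the whole document is lost,
-- rather than "b" being set to a JSON null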

Python 2 has officially completed its life as of Jan 1st, 2020 and the Python 2.x branch will not be maintained anymore. You might be seeing repeated notices of deprecation in log files and terminals like:

DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date.

Python 2 had a wonderful life, coming from a humble background and becoming one of the most popular languages and the most popular scripting language. It’s no wonder Patroni-like tools are developed in Python.

Thanks to the growing popularity of Patroni, there are a large number of Patroni cluster setups across different companies. Now “How to migrate from Python 2 to Python 3 with the least possible downtime” becomes a pressing question for Operations teams and DBAs. Additionally, chances are that the cluster will be running an older version of Patroni. If so, the upgrade plan can take on both of these challenges at the same time:

  1. Upgrade Patroni Software to Latest release
  2. Switch to Python 3 from Python 2

This blog post demonstrates how easy it is, and Patroni’s High Availability features will be leveraged when we upgrade individual nodes. There are a few key points to note (a sketch of the per-node steps follows the list below).

  • We should not be disturbing the default Python 2 installation (its binaries and modules) because there could be other programs using them.
  • Python 2 and Python 3 installations will go to different directories so they can coexist in almost all Linux distros.
  • We should avoid multiple Patroni installations (different versions) in a node because they will cause confusion in the long run. So we must remove existing ones.
  • Patroni in each node of the cluster can be using different versions of Python because Patroni is not interfering with PostgreSQL replication.
  • Since Python precompiles scripts it loads to memory for execution, we can replace the Python script while the corresponding program is running in a loop.
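With those points in mind, here is a per-node sketch of the switch, assuming pip-based installs and an etcd-backed cluster (the package names and the [etcd] extra are illustrative of a typical setup, not steps from the post):

$ sudo pip uninstall -y patroni        # remove the old Python 2 installation
$ sudo apt install -y python3 python3-pip
$ sudo pip3 install patroni[etcd]      # latest Patroni, now under Python 3
$ sudo systemctl restart patroni       # one node at a time; HA covers the restart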

Verification of the Vers

[...]
Posted by Andreas Scherbaum on 2020-01-16 at 23:34

Pavlo recently pointed out that the pg_sleep() function in PostgreSQL has two alternatives: pg_sleep_for() and pg_sleep_until().

What do they do?

Continue reading "SELECT pg_sleep_until('#800Monies');"
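The post is truncated at the teaser; for reference, the PostgreSQL documentation shows the three variants being used like this:

SELECT pg_sleep(1.5);                     -- seconds
SELECT pg_sleep_for('5 minutes');         -- an interval
SELECT pg_sleep_until('tomorrow 03:00');  -- an absolute timestamp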

As I get ready for the PgDay San Francisco event that is happening next Tue 21 January—a one-day, single-track Postgres community event at the awesome Swedish American Hall in SF—I’m reflecting a bit on how important the speakers are to developer events. Let’s face it, without speakers, there would be no conference.

And because I was on the PgDaySF talk selection committee, I’ve had some good conversations these last few months about CFPs, conference talks, how talk selection committees work, and how you can improve your chances at getting your proposals accepted. So I thought it would be useful to walk through the tips I’ve accumulated on how to get your conference talk accepted—at a Postgres conference, or at any developer conference.

These tips are premised on the notion that a good conference talk requires these 4 things:

  1. interestingness: a topic people will care about—and learn from
  2. knowledgeable speaker who knows their subject & can communicate effectively with an audience—so people can follow, understand, and learn
  3. a hook: a compelling title & abstract that will hook people and entice them to attend
  4. fits holistically into the rest of the lineup: a talk that complements the rest of the talks at the event, that adds something unique, and doesn’t overlap the other talks in a significant way

So here we go, let’s walk through 16 tips on how to get your conference talk accepted.

1/ Be sure to fill in your bio

The program committee needs it. To create an effective program for an event, the talk selection committee needs to understand more than just your specific talk proposal. It helps them to get a sense of the knowledge and skills of the speakers. Note I said “skills” and not “experience”. This is important, as many conferences welcome first-time speakers with open arms. So it’s often OK if you don’t have a lot of experience giving conference talks. But the program committee still needs to know more about the person who will go up on stage.

2/

[...]
It's every developer's nightmare: SQL queries that get large and unwieldy. This can happen fairly quickly with the addition of multiple joins, a subquery and some complicated filtering logic. I have personally seen queries grow to nearly one hundred lines long in both the financial services and health industries. Luckily Postgres provides two ways to encapsulate large queries: Views and Materialized Views. In this article, we will cover in detail how to utilize both views and materialized views…
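As a taste of the two approaches (the schema and names below are illustrative, not from the article):

-- a view: a named query, re-executed on every access
CREATE VIEW recent_orders AS
SELECT o.id, o.total, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at > now() - interval '30 days';

-- a materialized view: the result is stored and must be refreshed
CREATE MATERIALIZED VIEW recent_orders_mat AS
SELECT * FROM recent_orders;

REFRESH MATERIALIZED VIEW recent_orders_mat;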
Posted by Andreas Scherbaum on 2020-01-15 at 23:35

The PostgreSQL Project has been present with a booth at FOSDEM ever since 2007. Since 2008 we have organized a Devroom, and since 2013 we have had our own PGDay on the Friday before FOSDEM. This year marks the 8th FOSDEM PGDay.

This blog post presents useful information about the PGDay, the booth and Devroom at FOSDEM.

Continue reading "PostgreSQL @ FOSDEM 2020"

FOSDEM PGDay is looking for volunteers! We rely on you to help us host and run a successful FOSDEM booth and devroom.

The areas covered by volunteers include, but are not limited to:

  • room host (Devroom, Sunday)
  • video host (Devroom, Sunday)
  • door bouncer (Devroom, Sunday)
  • booth volunteers (Saturday, Sunday)

Room volunteers have the advantage that they have a guaranteed place in the devroom during their Sunday shift.

If you are interested in volunteering, please contact us at contact@fosdempgday.org. Please include the areas where you want to help, your PostgreSQL Community account, your previous experience as a volunteer at a conference, your Telegram address, and when you plan to arrive in and leave Brussels. Please note that your submission does not automatically qualify you as a volunteer; you will hear back from us and receive an invitation.

Please also note that we switch from Hangouts to Telegram this year. You will be sent invites for the usual channels.

Quite often when visiting customers or doing trainings, I see that people are quite happy seeing and learning about some little time-saving PostgreSQL tricks, commands or SQL snippets on the screen, and they usually ask to slow down and explain, or just say “wow, didn’t know you could do that”. As these tips and tricks are not too much material on their own, I thought I’d just make note of such titbits and write a post on them later – and now it seems that this “later” has arrived 🙂 So below is a list, in no particular order, of some Postgres features and tricks that I’ve gathered over the last year and that might be news for more casual Postgres users. Hope you’ll find something useful, and a happy new Postgres year!

Dynamic SQL loops with psql’s “gexec”

This could easily be the most useful one, especially for DBAs doing maintenance operations. For a couple of years now, PostgreSQL’s native query tool has been able to do a kind of “dynamic programming” – meaning you generate some SQLs based on a query’s output (i.e. the result set is dynamic) and then you execute all of those generated SQLs in sequence. Typical uses might be: “vacuuming” or gathering statistics for all the biggest tables that haven’t gotten any automatic “care” recently, getting row counts for all tables, generating thousands of tables for test purposes, rebuilding the most bloated indexes, etc. – in short, stuff that previously required leaving the psql environment and writing a small script.

-- counting rows from all user tables dynamically
select format('select count(*) from %s.%s',
              quote_ident(table_schema), quote_ident(table_name)) as sql
from information_schema.tables
where table_type ~ 'BASE'
  and not table_schema LIKE ANY(array[E'pg\\_%', 'information_schema'])
order by table_schema, table_name \gexec

Note that \gexec is a psql command, not SQL!

Session variables

Officially, session-level user variables are not really advertised in the documentation and are not as convenient as in some other database systems (this might cha

[...]
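The excerpt is cut off above; for reference, the built-in mechanism usually meant by session variables is custom configuration parameters, a sketch of which looks like this:

-- set a session-scoped "variable"
-- (third argument false = session scope, true = transaction scope)
SELECT set_config('myapp.current_user_id', '42', false);

-- read it back later in the same session
SELECT current_setting('myapp.current_user_id');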

For connection pooling in PostgreSQL, one of the best and most popular tools out there is PgBouncer. However, monitoring PgBouncer can be challenging due to its use of SHOW commands, which are only available via a special database connection as opposed to making its statistics available via a standard table or view.

In order to more easily monitor PgBouncer, the team at Crunchy Data developed an open source PgBouncer Foreign Data Wrapper (pgbouncer_fdw). This blog post describes why monitoring PgBouncer is important, and how you can do this easily using pgMonitor and pgbouncer_fdw.
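For a feel of the difference, compare the two access paths below; the view name follows the pgbouncer_fdw README, while the port and user are illustrative:

-- without the FDW: statistics only via PgBouncer's special admin console
--   psql -p 6432 -U pgbouncer pgbouncer -c 'SHOW POOLS;'

-- with pgbouncer_fdw set up: ordinary SQL from inside PostgreSQL
SELECT * FROM pgbouncer_pools;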

Waiting for PostgreSQL 13 – ALTER TABLE … ALTER COLUMN … DROP EXPRESSION On 14th of January 2020, Peter Eisentraut committed patch: ALTER TABLE ... ALTER COLUMN ... DROP EXPRESSION   Add an ALTER TABLE subcommand for dropping the generated property from a column, per SQL standard.   Discussion: https://www.postgresql.org/message-id/flat/-946e-0453-d841-%402ndquadrant.com Since PostgreSQL 12 we … Continue reading "Waiting for PostgreSQL 13 – ALTER TABLE … ALTER COLUMN … DROP EXPRESSION"
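A minimal illustration of the new subcommand (the table and expression are mine, not from the commit):

CREATE TABLE t (
    a int,
    b int GENERATED ALWAYS AS (a * 2) STORED
);

-- PostgreSQL 13: turn b into a plain column; existing values are kept
ALTER TABLE t ALTER COLUMN b DROP EXPRESSION;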

The 6th edition of the popular PostgreSQL conference pgDay Paris, a PostgreSQL.Org Recognized Community Conference, will be held on March 26, 2020 in the French capital. All of the talks will be in English.

Registration is open, and the EARLYBIRD discount is going fast so make sure you grab that while you can! https://2020.pgday.paris/registration/

We received over 100 submissions from 60 different speakers—double the interest from last year—and for only 8 slots! This allowed us to prepare an exciting schedule for you, with speakers from around the world, including the United States, Sweden, the Netherlands, the Czech Republic, and of course France; speaking on a wide range of topics, sure to please everyone. https://2020.pgday.paris/schedule/

Finally, we will have a social event after the conference, made possible by our sponsors! See them (or even join them!) here: https://2020.pgday.paris/sponsors/

See you all on March 26th!

2020 January 16 Meeting 6pm-8pm

Location:

PSU Business Accelerator
2828 SW Corbett Ave · Portland, OR
Parking is open after 5pm.

Speaker: Alex Theodore

As a CTO I saw everything that went into and came out of the web development process, from the “client needs” to the source-controlled code. But being skilled in database development showed me time and time again that databases (and their architecture) were often sidelined, treated as afterthoughts, and oftentimes served as little more than flexible storage spaces. Yet their contents were the very essence of value for the process/client/company being served. This strange inversion of priorities led me to develop a way of looking at databases as “outside” the application, and ultimately to interacting with them as APIs. In my talk I’ll be presenting this concept and showing, with simple rules and examples, how it can be done.

Bio:

I was the product and technology founder of Fracture.me and CTO up until 2018 where I developed the manufacturing processes and ERP software that powered our production process. That got me deeply involved in database work, especially with architecture and process design and optimization. Now I work part time as a database consultant and part time building modern, minimalistic furniture for my new (small) company MillFinish.com.


1. Overview

PostgreSQL is one of the most popular free open-source relational database management systems in the world. Besides complying with the SQL standard, PostgreSQL also provides great extensibility, which allows users to define their own extensions. With such a great feature, PostgreSQL is not only a database but also an application development platform. In this tutorial, we will go through the entire process of creating an extension, running tests against it, and debugging it using Eclipse.

2. Create an extension

To create an extension in PostgreSQL, you need at least two files: a control file and a SQL script file. The control file uses the naming convention extension_name.control. Let’s create an extension called get_sum; the control file should then contain basic information like below.

$ cat get_sum.control 
# get_sum postgresql extension
comment = 'simple sum of two integers for postgres extension using c'
default_version = '0.0.1'
module_pathname = '$libdir/get_sum'
relocatable = false
  • comment: a description of this extension
  • default_version: the version of this extension, which is also part of the SQL script file name
  • module_pathname: the shared library path for this extension

The SQL script file must be named extension_name--version.sql.

$ cat get_sum--0.0.1.sql 
--complain if script is sourced in psql, rather than via CREATE EXTENSION
\echo Use "CREATE EXTENSION get_sum" to load this file. \quit

CREATE OR REPLACE FUNCTION get_sum(int, int) RETURNS int
AS '$libdir/get_sum'
LANGUAGE C IMMUTABLE STRICT;

The second line prevents this SQL script from being loaded directly via psql, and the rest declares that the get_sum function is implemented in a shared library written in C.

Typically, an extension not only defines new objects, but also adds new logic to deal with those new objects. To boost performance, you should write the logic in C, built with a Makefile. PostgreSQL provides a build infrastructure for extensions called PGXS, which

[...]
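The excerpt stops at PGXS; for completeness, here is a sketch of what the C body and the PGXS Makefile for this get_sum example could look like (standard version-1 calling convention; the details may differ from the original tutorial):

/* get_sum.c -- the C implementation behind the SQL function */
#include "postgres.h"
#include "fmgr.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(get_sum);

Datum
get_sum(PG_FUNCTION_ARGS)
{
    int32 a = PG_GETARG_INT32(0);
    int32 b = PG_GETARG_INT32(1);

    PG_RETURN_INT32(a + b);
}

# Makefile -- builds get_sum.so via PGXS
EXTENSION = get_sum
DATA = get_sum--0.0.1.sql
MODULES = get_sum

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

After make && sudo make install, the extension can be loaded with CREATE EXTENSION get_sum; and tried out with SELECT get_sum(1, 2);.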

1. Overview

This tutorial provides detailed instructions to help a newbie set up the build and debug environment with the latest Eclipse IDE for C/C++ Developers for the current Postgres 12.1 release on LTS Ubuntu 18.04.

Below are the key requirements.

  • Linux: ubuntu-18.04.3-desktop-amd64
  • Postgres: REL_12_STABLE branch
  • Eclipse: eclipse-cpp-2019-12-R-linux-gtk-x86_64.tar.gz

2. Install Ubuntu 18.04 Desktop

Go to the Ubuntu official website to download the latest LTS Ubuntu 18.04.3 Desktop (64-bit) and Install it on a virtual machine such as VirtualBox. The option “Minimal Installation” with web browser and basic utilities is good enough for this tutorial.

3. Install git and checkout PostgreSQL source code

PostgreSQL’s source code is available on GitHub; in order to check it out, you need to install git using the command below.

$ sudo apt install git -y

PostgreSQL 12 was released in October 2019 and later updated to 12.1 in November. This tutorial will use the latest PostgreSQL 12 stable branch to explain how to build and debug the source code using the latest Eclipse IDE. Run the commands below to check out version 12.

$ mkdir sandbox
$ cd sandbox/
$ git clone https://github.com/postgres/postgres
$ cd postgres/
$ git checkout REL_12_STABLE
$ git branch
* REL_12_STABLE

Now, we are on PostgreSQL 12 stable release branch.

4. Install PostgreSQL build dependency

In order to build PostgreSQL source code, we need to install the basic build dependency with below command.

$ sudo apt install -y pkg-config build-essential libreadline-dev bison flex

With the command above, the basic libraries and utilities for building C and C++ code will be installed, such as dpkg-dev, gcc, g++, make and libc6-dev. Moreover, the libraries and tools required by PostgreSQL, such as libreadline, zlib, bison and flex, will also be installed.

5. Configure PostgreSQL and generate Makefiles

Before importing PostgreSQL source code into Eclipse

[...]
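The excerpt cuts off before the configure step; for a debug-friendly build it typically looks something like this (the install prefix is illustrative):

$ cd ~/sandbox/postgres
$ ./configure --prefix=$HOME/pgapp --enable-debug CFLAGS="-g -O0"
$ make -j4

--enable-debug and -O0 keep the symbols intact and stop the optimizer from confusing the Eclipse debugger.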

This year's FOSDEM PGDay once again precedes the main FOSDEM event. It will take place on January 31st, 2020, in the Hilton Brussels Grand Place Hotel.

Registration for the PGDay is open, and registration is required to attend the Friday conference. The number of seats is limited. Visit the Registration page for your ticket!

The PostgreSQL Project also participates in FOSDEM; we will be present with a booth on both days, and we have a Devroom on Sunday. The schedule for Friday and Sunday can be found here.

Participation in the Devroom on Sunday is free, and no registration is required. However, the available seats usually fill up quickly, so please plan ahead for which talks you want to see.

See you in Brussels!

The latest Barman 2.10 release introduces support for the system identifier of a PostgreSQL instance. In this article, I will answer a few questions explaining what a system identifier is and why it is a good thing that Barman uses it. What is the PostgreSQL system identifier? PostgreSQL gives a unique system identifier to every […]
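For reference, the identifier the article is about can be read from a running instance (a quick illustration, not from the article itself):

postgres=# SELECT system_identifier FROM pg_control_system();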

PSQL_EDITOR question

This was supposed to be just a short TIL entry about the PSQL_EDITOR variable. While trying to figure out all the possible pitfalls, I decided to write a full blog post. You may skip to the summary, though. All tests were done on Windows 10 with PostgreSQL 12 installed.

As you know, PostgreSQL ships with an interactive console tool named psql, which allows you to use a lot of enhanced features, like watching, for example. There is a lot of information about tweaking psql in POSIX environments. Still, there is a lack of such tips for Windows users. If you, my dear reader, are a Linux user, I’m sure you will also find this information useful anyway. Who knows what the next purchase of Microsoft will be. After LinkedIn, GitHub, and Citus, you cannot be sure anymore! 😉

I’ve chosen Sublime as a target because it’s not a simple console or single window application, but rather an advanced text processor with enhanced file handling, e.g. it will restore the unsaved files on startup.

According to the manual, PSQL_EDITOR, EDITOR and VISUAL are environment variables which are used inside psql to call an editor for input.

Let’s try to set the environment variable then.


$ SET PSQL_EDITOR=C:\Program Files\Sublime Text 3\subl.exe

$ psql

postgres=# \e

postgres=# \e
could not open temporary file "C:\Temp\psql.edit.3288.sql": File exists

As you can see, nothing was executed after the first \e command, even though I’ve edited and saved the content of the query.
What’s wrong here? First of all, I’ve used subl.exe instead of sublime.exe; subl is a special command-line tool:

Sublime Text includes a command line tool, subl, to work with files on the command line. This can be used to open files and projects in Sublime Text, and also works as an EDITOR for unix tools, such as git and subversion.

psql runs the editor and then waits for the editor process to finish. But subl is just a launcher, which in fact starts sublime.exe and then dies. So psql gets the signal that subl has finished and returns, but

[...]
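The excerpt ends here, but the general remedy for launchers that return immediately is a blocking flag, and subl has one (-w / --wait). A sketch, untested here; if cmd quoting gets in the way, a small wrapper batch file is the safer bet:

$ SET PSQL_EDITOR="C:\Program Files\Sublime Text 3\subl.exe" --wait

REM safer: a wrapper, e.g. C:\bin\subl-wait.bat containing:
REM   @"C:\Program Files\Sublime Text 3\subl.exe" --wait %*
REM then: SET PSQL_EDITOR=C:\bin\subl-wait.bat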
For a very long time, plans with parallel execution showed bogus values. Not any more. With the new logic, the same plan now shows correct times and corrected values in the loops column. Hope you like it 🙂
Posted by Julien Rouhaud on 2020-01-06 at 12:23

Coming up with good index suggestions can be a complex task. It requires knowledge of both application queries and database specificities. Over the years, multiple projects have tried to solve this problem, one of them being PoWA version 3, with the help of the pg_qualstats extension. It can give pretty good index suggestions, but it requires installing and configuring PoWA, while some users wanted only the global index advisor. For such cases, and for simplicity, the algorithm used in PoWA is now available in pg_qualstats version 2 without requiring any additional component.

What is pg_qualstats

A simple way to explain pg_qualstats is to say that it’s like pg_stat_statements, but working at the predicate level.

The extension will save useful statistics for WHERE and JOIN clauses: which table and column a predicate refers to, the number of times the predicate has been used, the number of executions of the underlying operator, whether it’s a predicate from an index scan or not, the selectivity, the constant values used, and much more.

You can deduce many things from such information. For instance, if you examine the predicates that contain references to different tables, you can find which tables are joined together and how selective those join conditions are.

Global suggestion?

As I mentioned, the global index advisor added in pg_qualstats 2 uses the same approach as the one in PoWA, so the explanation here describes both tools. The only difference is that with PoWA you’ll likely get a better suggestion, as more predicates will be available, and you can also choose for which time interval you want to detect missing indexes.

The important thing here is that the suggestion is performed globally, considering all interesting predicates at the same time. This approach is different from all the other approaches I’ve seen, which only consider a single query at a time. I believe that a global approach is better, as it’s possible to reduce the total number of indexes, maximizing multi-column indexes usefulne

[...]
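The excerpt ends here; getting the extension running follows the usual preload pattern, sketched below (the advisor function name is per the pg_qualstats 2 documentation; treat the details as an assumption):

-- postgresql.conf: shared_preload_libraries = 'pg_qualstats'  (restart required)
CREATE EXTENSION pg_qualstats;

-- after some workload has run, ask for global index suggestions
SELECT pg_qualstats_index_advisor();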
This is a story about how I found myself trying programming languages. I’ve been running an OLTP type database test (DBT-2, if you’ve heard of it), and noticed the post-processing reporting scripts for analyzing the database test results were running longer than I thought they would. I was running tests on 16 systems at the […]

“out of shared memory”: Some of you might have seen that error message in PostgreSQL already. But what does it really mean, and how can you prevent it? The problem is actually not as obscure as it might seem at first glance. max_locks_per_transaction is the critical configuration parameter you need to use to avoid trouble.

“out of shared memory”: When it happens

Most of the shared memory used by PostgreSQL is of a fixed size. This is true for the I/O cache (shared buffers) and for many other components as well. One of those components has to do with locking. If you touch a table inside a transaction, PostgreSQL has to track your activity to ensure that a concurrent transaction cannot drop the table you are about to touch. Tracking activity is important because you want to make sure that a DROP TABLE (or some other DDL) has to wait until all reading transactions have terminated. The trouble is, you have to store information about tracked activity somewhere – and this point is exactly what you have to understand.

Let us run a simple script:

BEGIN;

SELECT 'CREATE TABLE a' || id || ' (id int);' 
       FROM generate_series(1, 20000) AS id;

\gexec

What this script does is to start a transaction and to generate 20,000 CREATE TABLE statements. It simply generates SQL which is then automatically executed (\gexec treats the result of the previous SQL statement as input).

Let us see what the SELECT statement produced …

BEGIN
          ?column?          
----------------------------
 CREATE TABLE a1 (id int);
 CREATE TABLE a2 (id int);
 CREATE TABLE a3 (id int);
 CREATE TABLE a4 (id int);
 CREATE TABLE a5 (id int);
...

And now let us see what PostgreSQL does:

...
CREATE TABLE
CREATE TABLE
ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, commands ignored until end of transaction block
ERROR:  current transaction is aborted, 
[...]
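The excerpt ends with the hint, which also points at the fix; raising the limit is a one-liner plus a restart (the value is illustrative; the default is 64):

ALTER SYSTEM SET max_locks_per_transaction = 256;
-- this parameter requires a server restart to take effect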

DigitalOcean is a cloud service provider, more of an IaaS (Infrastructure-as-a-Service) provider, which is most suitable for small to medium-scale businesses. You can get to know more about DigitalOcean here. What it does is a bit different from other cloud vendors like AWS or Azure, and it is not heavily global yet; take a look at this video which compares DigitalOcean with AWS.

They provide a geographically distributed computing platform in the form of virtual machines, wherein businesses can deploy their applications on cloud infrastructure in an easy, fast and flexible manner. Their core focus is to provide cloud environments which are highly flexible, easy to set up, and can scale for various types of workloads.

What attracted me to DigitalOcean is the “droplets” service. Droplets are Linux-based VMs which can be created standalone or as part of a larger cloud infrastructure, with a chosen Linux-flavoured operating system like CentOS, Ubuntu, etc.

PostgreSQL on DigitalOcean

With DigitalOcean, building PostgreSQL environments can be done in two ways: one way is to build manually from scratch using droplets (Linux-based VMs only), and the other is to use managed services.

DigitalOcean started managed services for PostgreSQL with the intention of speeding up the provisioning of database servers in the form of VMs on a large cloud infrastructure; otherwise, the only way to build PostgreSQL environments is manually, using droplets. The supported capabilities of the managed services are high availability, automatic failover, logging, and monitoring. Alerting capability does not exist yet.

The managed services are more or less similar to AWS RDS. The PostgreSQL instances can only be accessed using the UI; there is no access to the host running the database instance. Managing, monitoring, parameter configuration – everything must be done from the UI.

PostgreSQL Compatibility with DigitalOcean

You can build PostgreSQL environments on Di

[...]