PostgreSQL
The world's most advanced open source database
Posted by Jignesh Shah in Amazon RDS on 2019-09-20 at 01:55
My slides from my session "PostgreSQL Extensions - A Deeper Look" at PostgresOpen 2019 and PostgresConf SV 2019

This blog represents my own viewpoints and not those of my employer, Amazon Web Services.

Oftentimes, people want to know about “That One Place” to get all their learning and training resources for PostgreSQL. When I get such a question from a colleague, my typical response is to tell them to look it up online. But I know that as soon as they hit the “.com” highway, they will be confronted with a barrage of resources about PostgreSQL: blogs, articles, whitepapers, videos, webinars, cookbooks for dummies, cheat sheets, and more.

In this blog, I am going to take you on a journey through some of the important avenues for quickly obtaining most of the knowledge you would need about PostgreSQL.

Here we go...

Read the PostgreSQL Manual

The first stop is the online PostgreSQL manuals. The official documentation (or “docs”, as it is referred to for short) of any product is the best place to find the largest wealth of information. For most people nowadays, manuals are typically the last place to look for help. They should, however, always be the first stop on the list, for the reasons listed below:

  • Official docs explain the internals of the various components of a product and how they relate to each other
  • When a new concept is introduced, they link to the other sections of the manuals that discuss it
  • There is sample code to execute, along with its expected output and an explanation
  • There is a logical flow from one idea to another
  • There are “Tip” and “Quick Setup” sections wherever needed that give bonus information for newbies
  • Most other online resources lead you to the official documentation in one way or another
  • The manuals are divided into appropriate sections as per the need: developer oriented, administration related, programming focused, utilities, command reference, internals, appendices, etc.

One excellent feature of the manuals, and the one I like most, is the “Supported Versions” subtitle at the top of each page, which

[...]

Postgres Execution Plans — Field Glossary

I’ve talked in the past about how useful Postgres execution plans can be. They contain so much useful information that here at pgMustard we’ve built a whole tool to visualise and interpret them.

There are lots of guides out there to the basics of execution plans, but many are light on the details — how to interpret particular values, what they really mean, and where the pitfalls are.

We’ve spent a lot of time over the last 18 months learning, clarifying, and downright misinterpreting how each of these fields works — and there’s still further for us to go on that.

But we have come a long way, and I’d like to share the guide that I wish had existed when we started out — a glossary of the most common fields you’ll see on the operations in a query plan, and a detailed description of what each one means.

If you work with query plans often, hopefully you’ll still learn a thing or two by reading this guide start-to-finish, but if you do less performance analysis, you may want to use it as a reference, looking up fields as and when you need them.

I’ve broken down this glossary into sections, based on which flag to EXPLAIN causes the field to be shown, to make it easier to find your way around:

  • Query Structure Fields — always present.
  • Estimate Fields — present when the COSTS flag is set.
  • Actual Value Fields — present when the ANALYZE flag is set.
  • Buffers Fields — present when the BUFFERS flag is set.
  • Verbose Fields — present when the VERBOSE flag is set.
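As an example, a plan that includes every one of these field groups can be generated like this (a minimal sketch — the table and column are illustrative, and remember that ANALYZE actually executes the statement):

EXPLAIN (ANALYZE, VERBOSE, COSTS, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42;

Since COSTS defaults to on, the estimate fields appear even in a bare EXPLAIN; the other flags must be requested explicitly.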

Query Structure Fields

These fields represent what the plan will actually do: how the database will process the data and return the results for your query. When applicable, they’ll be present whatever flags you use to generate the query plan.

Node Type

The operation the node is performing. The best guides I’ve read on what different operation types actually do are Depesz’s series on different operations and the annotations to the code itself — bot

[...]

How hard can it be to grab a copy of PostgreSQL 12 (still in beta) and install it on your computer for testing, without having to touch your existing database?

PostgreSQL 12 beta 4 up and running in less than six minutes

I have made a very short and, to some extent, boring video to demonstrate how pgenv can simplify the installation of PostgreSQL 12 beta 4 (as well as other versions, of course).

The video shows how automated installing the beta version on a FreeBSD machine can be. For the very impatient, the commands are essentially:

% pgenv build 12beta4
% pgenv use 12beta4
% psql -h localhost -U postgres template1

but the last command is, of course, the proof that all is up and running.

As you will see, most of the time is spent doing the actual compilation of the software. The value added by pgenv is that you don’t have to deal with download links and commands to initialize your database. And once you are done, you can simply nuke the pgsql-12beta4 directory, which will remove both binaries and data.

Of course, pgenv can do a lot more than just downloading and compiling PostgreSQL, but the above demonstrates how it simplifies even the boring setup tasks.

pg_cgroups puts PostgreSQL in a cage
© Laurenz Albe 2018

In another blog post, I described how Linux control groups work and how they are useful for PostgreSQL.

Here I will present my little PostgreSQL plugin pg_cgroups, which makes this simple for the DBA to handle.

What is pg_cgroups good for?

If you have a machine dedicated to a single PostgreSQL cluster, you normally do not want to limit PostgreSQL’s resource usage.

But sometimes you have more than one cluster running on a single machine, or you have an application server running on the same machine.
In that case, you may want to prevent one PostgreSQL cluster from using up so many resources that it starves the other programs on the same machine.

To some extent, the CPU and I/O schedulers can take care of that, but it can be nice to have some control. For example, you may want a small, unimportant database to not use up more than 10 percent of your CPU time and I/O bandwidth. And while you can limit PostgreSQL’s memory usage through shared_buffers and work_mem, it might be nice to keep one cluster from blowing another cluster’s data out of the file system cache.

How does pg_cgroups work?

I won’t describe the complete setup here (see the project page for that), but here is a short description of how it works:

  • Edit /etc/cgconfig.conf so that an empty control group /postgres is created at boot time. This control group can be administrated by the postgres operating system user.
  • Build and install pg_cgroups. This will require the PostgreSQL headers to be installed.
  • Add pg_cgroups to shared_preload_libraries in postgresql.conf and restart PostgreSQL.
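For reference, the cgconfig.conf entry from the first step could look somewhat like this (a sketch in standard libcgroup syntax; see the project page for the exact recommended contents):

group postgres {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }
        admin {
            uid = postgres;
            gid = postgres;
        }
    }
    cpu { }
    memory { }
    blkio { }
}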

pg_cgroups creates a new control group under /postgres and moves the postmaster to this group. All PostgreSQL processes will automatically inherit the control group.

Now you can set and adjust resource consumption limits on the fly simply by changing parameters in postgresql.conf! There is support for limits on memory consumption, CPU time and I/O bandwidth. One particularly nice fe

[...]
Posted by pgCMH - Columbus, OH on 2019-09-18 at 04:00

The October meeting will be held at 18:00 EST on Tues, the 22nd. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea of the amount of food needed.

What

Our very own Doug will be presenting this month. He’s going to tell us all about the new and exciting PostgreSQL 12 release!

Where

CoverMyMeds has graciously agreed to validate your parking if you use their garage, so please park there:

You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive.

Park in any space that is not marked ‘24 hour reserved’.

Once parked, take the elevator/stairs to the 3rd floor to reach the Miranova lobby. Once in the lobby, the elevator bank is in the back (West side) of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. (If the elevator won’t let you pick the 11th floor, contact Doug or CJ (info below)). Once you exit the elevator, look to your left and right; one side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space:

Community space as seen from the stage

The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079 or CJ at +1.740.407.7043

On 16th of September 2019, Alexander Korotkov committed two patches. First patch: Support for SSSSS datetime format pattern. SQL Standard 2016 defines the SSSSS format pattern for seconds past midnight in the jsonpath .datetime() method and the CAST (... FORMAT ...) SQL clause. In our datetime parsing engine we currently support it under the SSSS name. … Continue reading "Waiting for PostgreSQL 13 – Support for FF1-FF6 and SSSS datetime format patterns"
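For context, the existing spelling of this pattern already works in to_char today (a quick illustration):

SELECT to_char(now(), 'SSSS') AS seconds_past_midnight;

The committed patch adds the standard SSSSS spelling alongside it for PostgreSQL 13.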
Table partitioning has been evolving since the feature was added to PostgreSQL in version 10.  Version 11 saw some vast improvements, as I mentioned in a previous blog post. During the PostgreSQL 12 development cycle, there was a big focus on scaling partitioning to make it not only perform better, but perform better with a […]
Posted by Dimitri Fontaine on 2019-09-16 at 22:15

Here we are, another SQL query to write. You wish you knew how to write that mechanically, like you would a loop in your favorite programming language. Or at least have a pretty clear idea of a skeleton to tweak until it gives the result set you expect. So instead of working on your SQL query, you google How to write a SQL query? or maybe even How to learn SQL? Right. I feel you, I’ve been there too, even if quite some time ago…

So here is my article, where I teach you how to learn SQL.

I want to share with you how I did it, and how I continue to do it. There’s no magic secret sauce to it though, it’s all basic work. Again, we have to learn the main concepts and how they play together, then practice simple steps, and then build from there.

Posted by Luca Ferrari on 2019-09-16 at 00:00

There is a new release of PL/Proxy out there!

New Release of PL/Proxy

There is a new exciting release of PL/Proxy: version 2.9 has been released a few hours ago!

This is an important release because it adds support for the upcoming PostgreSQL 12. The main problem with PostgreSQL 12 has been that Oid is now a regular column, meaning that HeapTupleGetOid is no longer a valid macro. I first proposed a patch based on the C preprocessor to handle older PostgreSQL versions.


The solution implemented by Marko Kreen is of course much more elegant and is based on defining helper functions that are pre-processed depending on the PostgreSQL version.

Enjoy proxying!

Postgres is the leading feature-full independent open-source relational database, steadily increasing in popularity over the past 5 years. TimescaleDB is a clever extension to Postgres which implements time-series related features, including, under the hood, automatic partitioning, and more.

Because he knows how I like to investigate Postgres performance (among other things), Simon Riggs (2ndQuadrant) prompted me to look at the performance of loading a lot of data into Postgres and TimescaleDB, so as to understand the degraded performance reported in their TimescaleDB vs Postgres comparison. Simon provided support, including provisioning 2 AWS VMs for a few days each.

Summary

The short summary for the result-oriented enthusiast is that for the virtual hardware (AWS r5.2xl and c5.xl) and software (Pg 11.[23] and 12dev, TsDB 1.2.2 and 1.3.0) investigated, the performance of loading up to 4 billion rows into standard and partitioned tables is great, with Postgres leading as it does not have the overhead of managing dynamic partitions and has a smaller storage footprint to manage. A typical loading speed figure on the c5.xl VM with 5 data values per row is over 320 Krows/s for Postgres and 225 Krows/s for TimescaleDB. We are talking about 100 GB ingested per hour.

The longer summary for the performance-testing enthusiast is that such an investigation is always much trickier than it looks. Although you are always measuring something, what that something really is is never obvious, because it depends on what actually limits the performance: the CPU spent on Postgres processes, the disk IO bandwidth or latency… or even the process of generating fake data. Moreover, performance on a VM, with the underlying hardware shared between users, tends to vary, so it is hard to get definite and stable measures; significant variation (about 16%) from one run to the next is the norm.

Test Scenario

I basically reused the TimescaleDB scenario where many devices frequently send timestamped data points wh

[...]
Posted by Pavel Stehule on 2019-09-12 at 04:58
There is just one new feature: sort is supported on all columns, not only numeric columns.
Posted by Laurenz Albe in Cybertec on 2019-09-11 at 07:00

Edgar Allan Poe on view dependencies

We all know that in PostgreSQL we cannot drop an object if there are view dependencies on it:

CREATE TABLE t (id integer PRIMARY KEY);

CREATE VIEW v AS SELECT * FROM t;

DROP TABLE t;
ERROR:  cannot drop table t because other objects depend on it
DETAIL:  view v depends on table t
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

ALTER TABLE t DROP id;
ERROR:  cannot drop column id of table t because other objects depend on it
DETAIL:  view v depends on column id of table t
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

Some people like it because it keeps the database consistent; some people hate it because it makes schema modifications more difficult. But that’s the way it is.

In this article I want to explore the mechanics behind view dependencies and show you how to track what views depend on a certain PostgreSQL object.

Why would I need that?

Imagine you want to modify a table, e.g. change a column’s data type from integer to bigint because you realize you will need to store bigger numbers.
However, you cannot do that if there are views that use the column. You first have to drop those views, then change the column and then run all the CREATE VIEW statements to create the views again.

As the example shows, editing tables can be quite a challenge if there is a deep hierarchy of views, because you have to create the views in the correct order. You cannot create a view unless all the objects it requires are present.
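To give a first taste of that tracking, a query along the following lines (a sketch against the pg_depend and pg_rewrite system catalogs) lists the views that depend directly on table t:

SELECT DISTINCT v.oid::regclass AS view
FROM pg_depend AS d
   JOIN pg_rewrite AS r ON r.oid = d.objid
   JOIN pg_class AS v ON v.oid = r.ev_class
WHERE d.classid = 'pg_rewrite'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.refobjid = 't'::regclass
  AND v.relkind = 'v';

View dependencies are recorded on the view’s rewrite rule rather than on the view itself, which is why pg_rewrite appears in the join.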

Best practices with views

Before I show you how to untangle the mess, I’d like to tell you what mistakes to avoid when you are using views in your database design (excuse my switching to teacher mode; I guess holding courses has that effect on you).

Views are good for two things:

  • They allow you to have a recurring SQL query or expression in one place for easy reuse.
  • They can be used as an interface to abstract from the actual table definitions, so that you can reorganize the tables without having to modify the
[...]

Where are you? Go ahead and figure out your answer, I'll wait.

No matter what your answer, whether you said "sitting in my office chair" or "500 meters south-west of city hall" or "48.43° north by 123.36° west", you expressed your location relative to something else, whether that thing was your office layout, your city, or Greenwich.

A geospatial database like PostGIS has to be able to convert between these different reference frames, known as "coordinate reference systems". The math for these conversions, and the definition of the standards needed to align them all, is called "geodesy" — a field with sufficient depth and detail that you can make a career out of it.

Fortunately for users of PostGIS, most of the complexity of geodesy is hidden under the covers, and all you need to know is that different common coordinate reference systems are described in the spatial_ref_sys table, so making conversions between reference systems involves knowing just two numbers: the srid (spatial reference id) of your source reference system, and the srid of your target system.
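For example (the point is illustrative), converting the location above from WGS 84 longitude/latitude (srid 4326) to Web Mercator (srid 3857) is a single function call:

SELECT ST_AsText(
         ST_Transform(
           ST_SetSRID(ST_MakePoint(-123.36, 48.43), 4326),  -- source: WGS 84
           3857));                                          -- target: Web Mercator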

PostgreSQL 12 is in beta right now, and we need everyone to test it!

We have been building repos even from daily builds. Below are the steps to install PostgreSQL 12 on RHEL 8: Continue reading "Installing PostgreSQL 12 beta/RC on RHEL/CentOS/Fedora"
Posted by Pavel Stehule on 2019-09-09 at 06:11
I released a new version of pspg, 2.0.1. I started work on pspg three years ago, and now it has almost all the features that are possible with this design - the data are stored in their original form (created by some SQL client). An alternative to pspg could be a sophisticated CSV viewer, because psql can produce content in CSV format. Maybe (in the far future) pspg 3.0 will be based on internal CSV storage with formatting on the pspg side. But that is not a plan for the next few years (or somebody else can do it). So last month I wrote vertical cursor support (necessary for the next step), and last week I finished the "sort by column" feature. The sort command makes sense only on numeric columns (it works with numbers only - I really did not want to supply an ORDER BY clause in SQL). Within this limit it has a maybe-interesting feature - some size units (kB, MB, GB, TB) are supported.

Please, use it freely. And if you like this software, send me a postcard from your country - pspg is postcardware.
In my last entry, we saw how to set up PostgreSQL 12 beta 3 in Amazon RDS. In that entry I purposely left out how to change database parameters, as I realized that it deserves an entry (or more) by itself.

Using the AWS CLI, you can create a new database parameter group as follows:

$ aws rds create-db-parameter-group  --db-parameter-group-name jkpg12pg    \
--db-parameter-group-family postgres12 --description "My PostgreSQL 12 Parameter Group" \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com


We have just created a group; we have not applied the parameters to any database yet. Before we apply it, let's see what default values were created. You can run the following command to see the values that are set by default:

$ aws rds describe-db-parameters --db-parameter-group-name jkpg12pg  \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com \
--query 'Parameters[].[ParameterName,ParameterValue]' --output text

You will see output containing a list of parameters and their values. Let's look at some of the values to see how to interpret them.


application_name None
..
autovacuum_max_workers GREATEST({DBInstanceClassMemory/64371566592},3)
autovacuum_vacuum_cost_limit GREATEST({log(DBInstanceClassMemory/21474836480)*600},200)
..
effective_cache_size {DBInstanceClassMemory/16384}
..
jit None
..
maintenance_work_mem GREATEST({DBInstanceClassMemory*1024/63963136},65536)
max_connections LEAST({DBInstanceClassMemory/9531392},5000)
..
shared_buffers {DBInstanceClassMemory/32768}
shared_preload_libraries pg_stat_statements
..
work_mem None
xmlbinary None
xmloption None
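To make these formulas concrete, consider an illustrative instance class with 64 GiB of memory, i.e. DBInstanceClassMemory = 68719476736 bytes. Then shared_buffers = 68719476736 / 32768 = 2097152; since shared_buffers is measured in 8 kB pages, that is 2097152 × 8 kB = 16 GiB, or 25% of the instance memory.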


When you see None, it is basically equivalent to null, which means that the parameter is not set in postgresql.conf and PostgreSQL uses the default value of that engine version. In the above example, you will notice that jit is set to None, which means it takes the default on value of PostgreSQL 12, enabling jit in the instance.
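To override one of these defaults, you modify the parameter group; for example (a sketch — the parameter and value chosen here are illustrative):

$ aws rds modify-db-parameter-group --db-parameter-group-name jkpg12pg \
--parameters "ParameterName=jit,ParameterValue=0,ApplyMethod=immediate" \
--region us-east-2 --endpoint https://rds-preview.us-east-2.amazonaws.com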

If you change a parameter set to a specific value based
[...]
Posted by Regina Obe in PostGIS on 2019-09-08 at 03:28

We are pleased to provide binaries for file_textarray_fdw and odbc_fdw for PostgreSQL 11 Windows 64-bit.

To use these, copy the files into the same-named folders of your PostgreSQL 11 Windows 64-bit install, and then run CREATE EXTENSION as usual in the databases of your choice. More details are in the packaged README.txt.

Continue reading "PostgreSQL 11 64-bit Windows FDWs"
Posted by Dimitri Fontaine on 2019-09-07 at 20:55
Ok, let’s face it, I like SQL. A lot. I think it’s a fine DSL given how powerful it is, and I respect its initial goal to attract non-developers and try to build English sentences rather than code. Also, I understand that manually hydrating your collection of objects in your backend development language is not the best use of your time. And that building SQL as strings makes your code ugly.
Posted by Dimitri Fontaine on 2019-09-07 at 20:00
In our previous articles we had a look at What is an SQL relation? and What is a SQL JOIN?. Today, I want to show you what an aggregate is in SQL. You might have heard about Map/Reduce when Google made it famous, or maybe some years before, or maybe some years later. The general processing functions map and reduce were invented a very long time ago. The novelty from the advertising giant was in using them in a heavily distributed programming context.
Posted by Dimitri Fontaine on 2019-09-07 at 19:30
It took me quite some time before I could reason efficiently about SQL JOINs. And I must admit, the set theory diagrams never helped me really understand JOINs in SQL. So today, I want to help you understand JOINs in a different way, hoping to make the concept click at once for you too! As we saw in the previous article What is an SQL relation?, in SQL a relation is a collection of objects, all sharing the same definition.
Posted by Dimitri Fontaine on 2019-09-07 at 19:00
If you’re like me, understanding SQL is taking you a long time. And some effort. It took me years to get a good mental model of how SQL queries are implemented, and then, up from the lower levels, to build a mental model of how to reason in SQL. Nowadays, in most cases, I can think in terms of SQL and write queries that will give me the result set I need.
Posted by Kaarel Moppel in Cybertec on 2019-09-07 at 09:00

Soon it’s that time of the year again – basically a 2nd Christmas for followers of the “blue elephant cult” if you will :). I’m, of course, referring to the upcoming release of the next PostgreSQL major version, v12. So I thought it’s about time to go over some basics on upgrading to newer major versions! Database upgrades (not only Postgres) are quite a rare event for most people (at least for those running only a couple of DB-s). Since upgrades are so rare, it’s quite easy to forget how easy upgrading actually is. Yes, it is easy – so easy that I can barely wrap my head around why a lot of people still run some very old versions. Hopefully, this piece will help to convince you to upgrade faster in the future :). For the TLDR; (in table version) please scroll to the bottom of the article.

Why should I upgrade in the first place?

Some main points that come to my mind:

  • More security
    Needless to say, I think security is massively important nowadays! Due to various government regulations and the threat of being sued over data leaks, mistakes often carry a hefty price tag. Both new and modern authentication methods and SSL/TLS version support are regularly added to new PostgreSQL versions.
  • More performance without changing anything on the application level!
    Based on my gut feeling and on some previous testing, typically around ~5-15% total runtime improvement can be observed following an upgrade (if not IO-bound). So not too much…but on some releases (9.6 parallel query, 11 JIT for example) a possibly game-changing 30-40% could be observed. As a rule, the upgrades only resulted in upsides – over the last 8 years, I’ve only seen a single case where the new optimizer chose a different plan so that things got slower for a particular query, and it needed to be re-written.
  • New features for developers
  • Every year, new SQL standard implementations, non-standard feature additions, functions and constructs for developers are introduced, in order to reduce the amounts of code writt
[...]
Any long-time user of Postgres is likely familiar with VACUUM, the process that ensures old data tuples are identified and reused to prevent unchecked database bloat. This critical element of maintenance has slowly, but surely, undergone incremental enhancements with each subsequent release. Postgres 12 is no different in this regard. In fact, there are two […]

The world’s most valuable resource is no longer oil or gold, but data. And at the heart of data lies the database, which is expected to store, process and retrieve the desired data as quickly as possible. But having a single database server mostly doesn’t serve the purpose. A single server has its drawbacks, with a huge risk of data loss and downtime.

So most data platforms use multiple replicated database servers to ensure high availability and fault tolerance of their databases. In such an environment, how can we maximize performance against the resources being used?

One of the questions I get asked very often by Pgpool-II users is: what is the performance benefit of using the load balancing of Pgpool-II? Once in a while we get complaints from users that they get better performance when they connect directly to PostgreSQL than through Pgpool-II, even when load balancing is working fine. How true is this complaint?

In this blog, I’ll benchmark the performance of the Pgpool-II load balancing feature against a direct connection to PostgreSQL.

Before we start measuring the actual performance, let’s start with what the Pgpool-II load balancer is and why we should use it.

Replicated PostgreSQL servers.

In almost every data setup, we need more than one copy of the database server to ensure minimum or zero downtime and to safeguard against data loss. For that, we use replicated database servers.

There are many ways to create a replicated PostgreSQL cluster, which is a topic for some other blog post, but most of the replication solutions that exist for PostgreSQL support one-way replication. That means one master PostgreSQL server feeding data to one or more standby servers. In this setup, only the master server is capable of handling write queries, while the standby servers sit idle, waiting for the moment when the master goes down and one of them gets promoted to become the new master.

This setup is good enough to handle server failures and to provide high availability, but it is not

[...]

The PostgreSQL Operator provides users with a few different methods to perform PostgreSQL cluster operations, via:

  • a REST API
  • pgo, PostgreSQL Operator Command Line Interface (CLI)
  • directly interfacing with Kubernetes, including various APIs and custom resource definitions (CRDs)

While the REST API and pgo provide many conveniences, there are use cases that require the third method, i.e. interacting directly with Kubernetes to create the desired PostgreSQL cluster setup.

To demonstrate this, let's look at how we can manipulate the CRD responsible for managing PostgreSQL clusters to create a new cluster managed by the PostgreSQL Operator.

PostgreSQL connection strings embedded in your application can take two different forms: the key-value notation or the postgresql:// URI scheme. When it comes to using psql though, another form of connection string is introduced, with command line options -h -p -U and environment variable support.

In this short article you will learn that you can use any of the three different forms in psql, and thus easily copy & paste your application connection string right at the console to test it!
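For instance (the host, port, database, and user here are illustrative), these three invocations all open the same connection:

psql "host=localhost port=5432 dbname=appdb user=app"
psql postgresql://app@localhost:5432/appdb
psql -h localhost -p 5432 -U app appdb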

In my previous post about pgBackRest, we saw how to install and set up pgBackRest and make a backup of a PostgreSQL database with it. It was a very basic single-server setup, only intended to get the hang of the tool. Such setups are not used in a production environment, as it is not recommended (or rather, does not serve the purpose) to perform the backup on the same server where the database is running.

So: let’s get familiar with how remote backup servers are set up with pgBackRest, and how a full and incremental backup is performed from the backup server and restored on the database server.

We need two servers. Let’s call ours:

  1. pgbackup
  2. db1

Installing pgBackRest:

We need to install pgBackRest on both the database server and the backup server. Make sure you install the same version on both.

For the database server, please follow the installation steps from my previous post. The steps are slightly different for the backup server, since it is better practice to create a separate user to own the pgBackRest repository.
Create a pgbackrest user on the backup server:

sudo adduser --disabled-password --gecos "" pgbackrest

Install the required Perl package, and install pgBackRest from a package or manually on pgbackup, as below:

sudo apt-get install libdbd-pg-perl
sudo scp BUILD_HOST:/root/pgbackrest-release-2.14/src/pgbackrest /usr/bin/
sudo chmod 755 /usr/bin/pgbackrest
Create the pgBackRest configuration files, directories and repository on pgbackup:

sudo mkdir -p -m 770 /var/log/pgbackrest
sudo chown pgbackrest:pgbackrest /var/log/pgbackrest
sudo mkdir -p /etc/pgbackrest
sudo mkdir -p /etc/pgbackrest/conf.d
sudo touch /etc/pgbackrest/pgbackrest.conf
sudo chmod 640 /etc/pgbackrest/pgbackrest.conf
sudo chown pgbackrest:pgbackrest /etc/pgbackrest/pgbackrest.conf

sudo mkdir -p /var/lib/pgbackrest
sudo chmod 750 /var/lib/pgbackrest
sudo chown pgbackrest:pgbackrest /var/lib/pgbackrest
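At this point /etc/pgbackrest/pgbackrest.conf on pgbackup can be filled in. A minimal sketch (the stanza name and paths are illustrative; see the pgBackRest documentation for your setup) might look like:

[global]
repo1-path=/var/lib/pgbackrest

[demo]
pg1-host=db1
pg1-path=/var/lib/postgresql/11/main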


Now we are ready to proceed with enabling c

[...]
Posted by Pavel Stehule on 2019-09-03 at 13:48
The master branch of pspg supports sorting by the column selected by the vertical cursor.

Why did I write this feature? Try listing tables in psql with the \dt+ command. The result is sorted by schema and by name. Sometimes it can be interesting to see the result ordered by table size. Now it is easy. Press Alt-v to show the vertical cursor, then move right to the "Size" column and press d for a descending sort. You get a result like the attached screenshot:

The sort works only on numeric columns (but the size units used by psql are supported).

PostgreSQL 12, the latest version of the "world's most advanced open source relational database," is being released in the next few weeks, barring any setbacks. This follows the project's cadence of providing a raft of new database features once a year, which is, quite frankly, amazing, and one of the reasons why I wanted to be involved in the PostgreSQL community.

In my opinion, and this is a departure from previous years, PostgreSQL 12 does not contain one or two single features that everyone can point to and say "this is the 'FEATURE' release" (partitioning and query parallelism are recent examples that spring to mind). I've half-joked that the theme of this release should be "PostgreSQL 12: Now More Stable" -- which of course is not a bad thing when you are managing mission critical data for your business.

And yet, I believe this release is a lot more than that: many of the features and enhancements in PostgreSQL 12 will just make your applications run better without doing any work other than upgrading!

(...and maybe rebuild your indexes, which, thanks to this release, is not as painful as it used to be)!

It can be quite nice to upgrade PostgreSQL and see noticeable improvements without having to do anything other than the upgrade itself. A few years back when I was analyzing an upgrade of PostgreSQL 9.4 to PostgreSQL 10, I measured that my underlying application was performing much more quickly: it took advantage of the query parallelism improvements introduced in PostgreSQL 10. Getting these improvements took almost no effort on my part (in this case, I set the max_parallel_workers config parameter).

Having applications work better by simply upgrading is a delightful experience for users, and it's important that we keep our existing users happy as more and more people adopt PostgreSQL.

So, how can PostgreSQL 12 make your applications better just by upgrading? Read on!