Posted by Bruce Momjian in EnterpriseDB on 2017-10-18 at 02:00

I had the pleasure of speaking at Alibaba's Computing Conference last week in Hangzhou, China. I gave two presentations. The first covered Postgres 10 features (images, video). The second was a generic talk about the future of open source databases (images, video starts at 153 minutes).

This week I am attending a community Postgres conference in Shenzhen, China. Postgres users here have a strong desire to increase Postgres adoption in China in the coming years.

Posted by Craig Kerstiens in CitusData on 2017-10-17 at 19:35

At Citus we spend a lot of time working with customers on data modeling, optimizing queries, and adding indexes to make things snappy. My goal is to be as available to our customers as needed in order to make you successful. Part of that is keeping your Citus cluster well tuned and performant, which we take care of for you. Another part is helping you with everything you need to know about Postgres and Citus. After all, a healthy, performant database means a fast app, and who wouldn’t want that? Today we’re going to condense some of the information we’ve shared directly with customers about Postgres indexes.

Postgres has a number of index types, and each new release seems to bring another one. Each of these indexes can be useful, but which one to use depends on (1) the data type, then sometimes (2) the underlying data within the table, and (3) the types of lookups performed. In what follows we’ll take a quick survey of the index types available to you in Postgres and when you should leverage each. Before we dig in, here’s a quick glimpse of the indexes we’ll walk you through:

  • B-Tree
  • Generalized Inverted Index (GIN)
  • Generalized Search Tree (GiST)
  • Space partitioned GiST (SP-GiST)
  • Block Range Indexes (BRIN)
  • Hash

Now onto the indexing

In Postgres, a B-Tree index is what you most commonly want

If you have a degree in Computer Science, then a B-tree index was likely the first one you learned about. A B-tree index creates a tree that keeps itself balanced and even. When looking something up based on that index, Postgres traverses down the tree to find the key the tree is split on and then returns you the data you’re looking for. Using an index is much faster than a sequential scan because it may only have to read a few pages, as opposed to sequentially scanning thousands of them (when you’re returning only a few records).

If you run a standard CREATE INDEX it creates a B-tree for you. B-tree indexes are valuable on the most common data types such as text, num
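As a quick sketch of how these index types are created (the table and column names here are hypothetical, chosen only to illustrate the USING clause):

```sql
-- Hypothetical table, for illustration only
CREATE TABLE items (
    id         bigserial PRIMARY KEY,
    name       text,
    attrs      jsonb,
    created_at timestamptz
);

-- A plain CREATE INDEX builds a B-tree
CREATE INDEX items_created_at_idx ON items (created_at);

-- Other index types are chosen with the USING clause
CREATE INDEX items_attrs_idx    ON items USING gin (attrs);       -- jsonb containment lookups
CREATE INDEX items_created_brin ON items USING brin (created_at); -- large, naturally ordered tables
CREATE INDEX items_name_hash    ON items USING hash (name);       -- equality-only lookups
```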

Posted by William Ivanski in 2ndQuadrant on 2017-10-17 at 10:44

1. Introduction

Postgres-XL (or just XL, for short) is an open source project from 2ndQuadrant. It is a massively parallel database built on top of PostgreSQL, and it is designed to be horizontally scalable and flexible enough to handle various workloads.

Here we will show how to build a test environment to play with XL and how to configure it using the OmniDB 2.2 web interface.

2. Building test environment

In this experiment, we will build a cluster with 1 GTM, 1 coordinator and 2 data nodes. It would be simpler to put them all in the same virtual machine, but splitting them across multiple virtual machines is a more realistic scenario. So we will build 3 virtual machines:

Machine IP Role
xl_gtmcoord GTM and coordinator
xl_datanode1 data node
xl_datanode2 data node

For this test environment to work, you need to put all three machines in the same internal network. If necessary, you can edit the IP addresses in the files Vagrantfile and bootstrap.sh located inside the folder of each machine.

2.1. Pull OmniDB repo

The first thing you need to do is clone the OmniDB repo from GitHub and make sure you are on the development branch. Run the following:

git clone
cd OmniDB
git checkout dev

2.2. Creating virtual machines

On your host machine, you need to have installed:

  • VirtualBox
  • Vagrant
  • Vagrant plugin vbguest

Please refer to VirtualBox and Vagrant websites for more information.

If necessary, edit the file Vagrantfile for all machines now, adjusting the private network line, which looks like this:

config.vm.network "private_network", ip: '', :name => 'vboxnet0', :adapter => 2

You will also need to replace the IP addresses in the file for all machines.

On your terminal (assuming you are on the root directory of OmniDB repo), to create the first virtual machine with XL, you need to do:

cd OmniDB/OmniDB_app/tests/vagrant/multinode/xl_gtmcoord
vagrant up
Please note that this may take a while to finish. It will perform the following tasks automat
Posted by Douglas Hunley in OpenSCG on 2017-10-16 at 13:09

With the release of PostgreSQL 10, I’ve updated my pg script. You might recall from previous posts that this script is for Homebrew users who have tapped Peter’s brew recipes. It allows for installing and switching between multiple versions of PostgreSQL seamlessly. While I was in there adding v10 support, I tweaked and tuned the code a bit and tidied up the output significantly. I’m pretty pleased with the new version, actually.

As always, it’s been added as a gist:


There’s a lot of information out there on how to configure PostgreSQL, on the importance of backups and testing them, etc.

But what about the server you run PostgreSQL on? We tend to pay a lot less attention to tools that you won’t need unless something breaks. But it’s worth taking some time to do so now, because that’s time you won’t have when your server is down and you’re in a rush.

Debuginfo and gdb

Seriously consider installing debug-info packages for the major services you run on a server, and for the libraries that service uses. It wastes some disk space, but it saves you a lot of time if you end up needing that information in a rush.

Debug info lets tools like the GNU Debugger gdb show variables, program arguments, etc when it is connected to a running program, or to a crash dump (core file). It lets people doing advanced support and bug fixing help you a lot more effectively. I have been able to fix problems in running PostgreSQL instances in gdb without having to shut the server down due to the availability of gdb and debuginfo.

Ideally, you wouldn’t need to install debuginfo before you intend to use it. You could just fetch it on demand. But debuginfo needs to exactly match the packages installed, and often repositories only carry a limited history of old versions, so it might not be there when you need it. It’s also fairly common for servers to be Internet-isolated, so you can’t just install packages on demand. And it’s possible the person doing the debugging won’t be the sysadmin with root access, so they can’t install them if they want to.

So please install it along with the PostgreSQL packages. For example, on Debian/Ubuntu:

sudo aptitude install postgresql-9.6-dbg

or for RHEL/Fedora:

sudo dnf install postgresql96-debuginfo

Better, use the debuginfo-install program for RHEL/Fedora:

sudo debuginfo-install postgresql

strace is nice to have too.

Performance: sysstat, iotop, perf

You should always install sysstat and make sure that sar is collecting information. I recommend also lo


1. Introduction

Setting up extra extensions / features on a proprietary platform like Windows is a challenge because we don’t have a default build environment. Setting up a complete build environment for installing a language extension is an even bigger challenge. Thanks to the vibrant community, much of the heavy lifting is already done. I would like to acknowledge the contribution of Dave Cramer, who built the Windows extension binaries, and Shawn from the BigSQL user forum for leading me to this learning. Special thanks to Joe Conway and the PL/R community for maintaining the language extension.

2. Getting PL/R extension binaries

A repository of Windows binaries for PL/R is available here

As the name indicates, the PostgreSQL 9.6 extension is built against R version 3.4.1 and the extension for Postgres 10 is built against R version 3.4.2. Download the corresponding zip file and unpack it into a directory. For this demonstration I am using the Postgres 9.6 extension. I downloaded the file and unpacked it into a directory plr-

3. Installation of R

As the name indicates, this needs R version 3.4.1. A Windows installer for it is available here. One important aspect I would suggest taking care of is to avoid directory names with spaces while installing R or Postgres. In the installer screen, I installed into a directory C:\R\R-3.4.1 instead of “Program Files” to avoid spaces and the potential problems they can cause.

We can leave the rest of the defaults in the installer.

4. Install Postgres if not already done.

For this demonstration, I am using portable PostgreSQL binaries provided by the BigSQL project. It’s just a matter of opening the Windows command line in administrator mode and running a single-line PowerShell script.

5. Installing PL/R Extension

We have to do this manually, by copying the files to the corresponding directories. In order to detect the location of the files, a few environment variables are also required. I have done the following from the command prompt (from the directory where the PL/R extension was unpacked):


amcheck, a tool for index corruption detection, now has packages available from the community Debian/Ubuntu apt repository, as well as packages from the community Redhat/CentOS/SLES yum repository.

This means that installations built on those community resources can easily install amcheck, even on PostgreSQL versions before PostgreSQL 10, the release that contrib/amcheck first appeared in.

Full details on installing these packages are available from the README:

It's also possible to install the packages on PostgreSQL 10, because the extension these packages install is actually named "amcheck_next" (not "amcheck"). Currently, it isn't really useful to install "amcheck_next" on PostgreSQL 10, because its functionality is identical to contrib/amcheck. That's expected to change soon, though. I will add a new enhancement to amcheck_next in the coming weeks, allowing verification functions to perform "heap matches index" verification on top of what is already possible.

Many thanks to Christoph Berg and Devrim Gündüz for their help with the packaging.

Now that we are in the midst of moving our databases and client databases to PostgreSQL 10, we have started to build our favorite extensions. One popular extension that several people have asked about for PostgreSQL 10 is plv8.

We now have version 1.4.10 for PostgreSQL 10 on Windows, both 32-bit and 64-bit. The 64-bit versions will work for the EDB Windows distribution of PostgreSQL as well as the BigSQL distribution.

Continue reading "PLV8 binaries for PostgreSQL 10 windows both 32-bit and 64-bit"


PostGIS releases a new minor version every one or two years. Each minor version of PostGIS has a different libname suffix. In PostGIS 2.1 you'll find files in your PostgreSQL lib folder called postgis-2.1.*, rtpostgis-2.1.*, postgis-topology-2.1.*, address-standardizer-2.1.* etc., and in PostGIS 2.2 you'll find similar files but with 2.2 in the name. I believe PostGIS and pgRouting are the only extensions that stamp the lib with a version number. Most other extensions are not versioned this way: hstore, for example, is always called hstore.dll even if the version changed from 9.6 to 10. On the bright side this allows people to have two versions of PostGIS installed in a PostgreSQL cluster, though a database can use at most one version. So you can have an experimental database running a very new or unreleased version of PostGIS and a production database running a more battle-tested version.
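For example, you can check which PostGIS version a given database is bound to, and move it forward once the new versioned library is installed, with something like the following (a sketch; the target version string is only an example):

```sql
-- Check which PostGIS version this database is running
SELECT postgis_full_version();

-- Or consult the extension catalog
SELECT extname, extversion FROM pg_extension WHERE extname = 'postgis';

-- After installing the new versioned library files, upgrade within the database
ALTER EXTENSION postgis UPDATE TO '2.4.0';
```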

On the sad side this causes a lot of PostGIS users frustration when trying to use pg_upgrade from an older version of PostGIS/PostgreSQL to a newer version of PostGIS/PostgreSQL, as their pg_upgrade often bails with a message in the loaded_libraries.txt log file to the effect of:

could not load library "$libdir/postgis-2.2": ERROR:  could not access file "$libdir/postgis-2.2": No such file or directory
could not load library "$libdir/postgis-2.3": ERROR:  could not access file "$libdir/postgis-2.3": No such file or directory

This is also a hassle because we generally don't support a newer version of PostgreSQL on older PostGIS installs, since PostgreSQL major version changes tend to break our code often, and backporting those changes is both time-consuming and dangerous. For example, the DatumGetJsonb change and this PostgreSQL 11 crasher whose cause we haven't isolated yet. Several changes like this have already made the recently released PostGIS 2.4.0 incompatible with PostgreSQL 11 head development.

Continue reading "Using pg_upgrade to upgrade PostGIS without installi[...]
Primer: PsqlForks aims to support all DB engines that (even partially) speak Postgres (psqlforks = psql for Postgres forks). Given that PsqlForks has been in development for a few weeks, it's time to stabilize a bit, and towards that we finally have Menon, PsqlForks' first alpha release. Being an alpha, by definition it isn't ready for production, but it feels stable enough ... feel free to test
Posted by Kaarel Moppel in Cybertec on 2017-10-13 at 13:30

Last week a new PostgreSQL major version, number 10, was released! The announcement, release notes and “What’s new” overview can be found here, here and here – all highly recommended reading, so check them out. As usual there have already been quite a few blog posts covering all the new stuff, but I […]

The post Best of PostgreSQL 10 for the DBA appeared first on Cybertec - The PostgreSQL Database Company.

PostgreSQL 10 is now available for everyone to use, and as hinted by David Fetter, I had to review my previous article on JSON and SQL to adapt to the Set Returning Functions changes.

A Set Returning Function is a PostgreSQL Stored Procedure that can be used as a relation: from a single call it returns an entire result set, much like a subquery or a table.

It used to be possible to use an SRF in the SELECT clause, with dubious (but useful at times) semantics, and also in scalar contexts. The semantics have now been fixed and are much clearer, and the uses in scalar contexts are forbidden — they were a hack and never made sense anyway.
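To illustrate the difference, here is a minimal sketch using generate_series (PostgreSQL 10 evaluates an SRF in the SELECT list as if it had been written in FROM with LATERAL):

```sql
-- The recommended form: an SRF used as a relation in FROM
SELECT n FROM generate_series(1, 3) AS t(n);

-- An SRF in the SELECT list; in PostgreSQL 10 this behaves
-- as if the function call had been moved to FROM:
SELECT x, generate_series(1, 2) FROM (VALUES (1), (2)) AS v(x);

-- i.e. it is now equivalent to:
SELECT x, g
FROM (VALUES (1), (2)) AS v(x),
     LATERAL generate_series(1, 2) AS g;
```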

Posted by gabrielle roth on 2017-10-12 at 17:00

When: 6-8pm Thursday October 19, 2017
Where: iovation
Who: Grant Holly
What: Golang + Pg

This month, Grant will be talking about using Golang with Postgres 9.6 and showing us some of the client libraries.

Grant Holly is a Service Reliability Engineer at iovation, where he works with a lot of different database systems.

If you have a job posting or event you would like me to announce at the meeting, please send it along. The deadline for inclusion is 5pm the day before the meeting.

Our meeting will be held at iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry!

iovation provides us a light dinner (usually sandwiches or pizza).

Elevators open at 5:45 and building security closes access to the floor at 6:30.

See you there!

Posted by Joshua Drake in CommandPrompt on 2017-10-12 at 16:51

Devrim Gunduz

One of the fantastic characteristics of Postgres leaders is their willingness to serve the community. Yesterday I found out that one of our former team members, Devrim Gunduz, has created a new London PostgreSQL user group, and they are having their inaugural meeting in November. At the time of this writing there were 123 members in the group. That shows a great demand for Postgres content, considering they haven't even had their first meetup yet! I spoke with Devrim yesterday and he is on a mission to provide the London community with new content each month. It is a bold goal, as running meetups that frequently can be daunting, but we believe there is enough support in the area to warrant it. Devrim has even asked me to cross the pond to present, which I plan on doing in 2018. It will be great to see a leader of Postgres provide consistent growth opportunities for the community in London.

Rock on and @amplifypostgres!

Posted by Craig Kerstiens in CitusData on 2017-10-12 at 15:00

Dear Postgres,

I’ve always felt an affinity for you in my 9 years of working with you. I know others have known you longer, but that doesn’t mean they love you more. Years ago when others complained about your rigidness or that you weren’t as accommodating as others I found solace in your steadfast values:

  1. Don’t lose data
  2. Adhere to standards
  3. Move forward with a balancing act between new fads of the day while still continuously improving

You’ve been there and seen it all. Years ago you were being disrupted by XML databases. As companies made heavy investments in what such a document database could do for their organizations, you proceeded to “simply” add a datatype that accomplished the same and brought your years of progress along with it.

In the early years you had the standard B-tree index format that most database engines leveraged. Then quietly but confidently you started adding more. Then came K-nearest neighbor, generalized inverted indexes (GIN), and generalized search trees (GiST), only to be followed by space-partitioned GiST and block range indexes (BRIN). Now the only question is which do I use?

All the while there was this other camp using you for something that felt cool but outside my world: GIS. GIS, geographical information systems, I thought was something only civil engineers used. Then GPS came along, then the iPhone and location-based devices, and suddenly I wanted to find the nearest path to my Peet’s, or manage geographical regions for my grocery delivery service. PostGIS had been there all along building up this powerful feature set; sadly, to this day I still mostly marvel from the sidelines at this whole other feature set I long to take advantage of… one day… one day.

A little over 5 years ago I fell in love with your fast-improving analytical capabilities. No, you weren’t an MPP system yet, but here came window functions and CTEs, then I almost understood recursive CTEs (still working on that one). I can iterate over data in a recursive fashion without PL/pgSQL? Yes please! I

Posted by Craig Kerstiens in CitusData on 2017-10-11 at 15:10

Postgres indexes make your application fast. And while one option is to analyze each of your relational database queries with pg_stat_statements to see where you should add indexes… an alternative fix (and a quick one at that) could be to add indexes to each and every database table—and every column—within your database. To make this easy for you, here’s a query you can run that will create the CREATE INDEX commands for every table and column in your Postgres database.

Disclaimer: Adding an index to every column of every table is not a best practice for production databases. And it’s certainly not something we recommend at Citus Data, where we take scale out Postgres and take database performance very seriously. But indexing all the things is a fun “what if” thought exercise that is well worth thinking about, to understand the value of indexes in Postgres. What if you indexed all the things?

With that disclaimer out of the way, onto my advice of how to index all the things as a way to understand your database’s performance.

What if you really did index all the things in your Postgres database?

Index all the things

Yes, we cheated and didn’t do every combination of every column; we thought this might be sufficient

SELECT 'CREATE INDEX ' || table_name || '_' || column_name || ' ON ' || table_name || ' ("' || column_name || '");' 
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name != 'pg_stat_statements'
  AND table_name != 'pg_buffercache';

Based on our Citus Data multi-tenant tutorial you’d then get something like this as the result:

 CREATE INDEX companies_id ON companies ("id");
 CREATE INDEX companies_name ON companies ("name");
 CREATE INDEX companies_image_url ON companies ("image_url");
 CREATE INDEX companies_created_at ON companies ("created_at");
 CREATE INDEX companies_updated_at ON companies ("updated_at");
 CREATE INDEX campaigns_id ON campaigns ("i
Posted by Simon Riggs in 2ndQuadrant on 2017-10-11 at 13:05

Someone recently tweeted about the fantastic news that MySQL fixed a bug.

Now in my world, bugs get fixed quickly and well. Bugs happen and they need to be fixed. It never occurred to me that we should ever tweet or blog about the fixing of a bug. I guess I assume it’s just quality: bugs get fixed, no drama – people depend upon us to do just that so that the (literally) millions of PostgreSQL servers out there run well. That’s our job and I’m happy and proud to do that job alongside my colleagues at 2ndQuadrant and my colleagues in other PostgreSQL companies and in the wider community.

So the bug in question was “number 199”… check this out

It’s always been a big argument in the PostgreSQL community whether we need a bug tracker or not. Obviously, if you fix bugs, why track them? And if you don’t fix bugs, why track them? Hmmm, not sure whether that’s a hugely rational argument, but let’s look at the MySQL bug tracker for bug 199

Yes, that bug was filed 14.5 years ago. And now it’s fixed.


Yay? 14.5 years guys! That’s long enough for my pre-school children to have gone to school, grown up and start University. OMG, or if I was crueler, LMFAO.

Maybe it’s wrong to start a flame war on this, but I need to say, at a personal level, that this is exactly the reason why I have spent years contributing to PostgreSQL and never once contributed to MySQL.

PostgreSQL 10 just came out, if you didn’t hear. I’m certain it will contain bugs, but not that many. I’m also equally certain that we will fix those bugs and that you can rely on the quality and robustness of PostgreSQL.

If you use PostgreSQL in the cloud or from a services company, make sure you ask them what they have contributed to PostgreSQL and what their capability is to fix bugs and with what SLA. The answer to that question is what keeps open source well funded for these latest d


For this last post in our set_user series, let’s look at how we can limit who is allowed to escalate to superuser at a super granular level.


By default, the whitelist parameter is set to allow all (*) users with set_user permissions to escalate to superuser. But what if you want to limit these privileges to an elite few? As in our last post, where we added settings-blocking parameters to the postgresql.conf file, you can control who gets to be part of the “in crowd” with the whitelist parameter:

set_user.superuser_whitelist = '<user1>,<user2>,...,<userN>'

Exercise: Setting the whitelist parameter

Edit postgresql.conf

Add set_user.superuser_whitelist to the postgresql.conf using your favorite editor:

Linux / OSX:

vi $PGDATA/postgresql.conf


Windows:

start notepad %PGDATA%/postgresql.conf

Earlier you added this line to the end of the file:

shared_preload_libraries = 'set_user'

Now underneath that entry, you will add the following:

set_user.superuser_whitelist = 'dba_super' 

Save the change you made and restart PostgreSQL by navigating to the directory (bigsql or PostgreSQL) and running:

./pgc restart

Create the database and users

If you haven’t already created the testdb database from the Installation and Set-Up exercise, do that now and create the set_user extension as well:

postgres=# CREATE DATABASE testdb;

postgres=# \connect testdb;
You are now connected to database "testdb" as user "postgres".

testdb=# CREATE EXTENSION set_user;

Then create the following two roles:

testdb=# CREATE USER dba_notsuper with PASSWORD 'passnotsuper';

testdb=# CREATE USER dba_super with PASSWORD 'passsuper';

Grant execute privileges

Grant the new roles privileges to execute the set_user and set_user_u functions:

testdb=# GRANT EXECUTE ON FUNCTION set_user(text) TO dba_notsuper;
testdb=# GRANT EXECUTE ON FUNCTION set_user_u(text) TO dba_notsuper;
testdb=# GRANT EXECUTE ON FUNCTION set_user(text) TO dba_super;
testdb=# GRANT EXECUTE ON FUNCTION set_user_u(text) TO
Posted by Joshua Drake on 2017-10-10 at 15:50

The International Postgres conference series continues to grow! PGConf APAC 2018 is the latest PGConf.Org addition.

Following the success of two consecutive pan-Asia Postgres events - pgDay Asia 2016 and pgDay Asia 2017, held along with FOSSASIA - we are pleased to announce PGConf APAC 2018, to be held in Singapore from the 22nd to the 24th of March, 2018. Once again the conference will be held along with FOSSASIA - one of the largest FOSS conferences on the planet. PGConf APAC will be the PostgreSQL conference series for all PostgreSQL enthusiasts and users in the Asia Pacific region.
For more details on FOSSASIA -
You will be able to enjoy one of the largest PostgreSQL conferences in Asia and one of the largest FOSS conferences, and meet like-minded individuals, all in the same week!
pgDay Asia 2016 and pgDay Asia 2017 would not have been possible without your awesome talks, and we would like to invite speakers from all over the globe to present at PGConf APAC 2018. We are hereby opening the Call for Papers for PGConf APAC 2018. Some of the topics that could be used for submitting a talk are:
  1. Migration projects
  2. Performance troubleshooting and tuning
  3. noSQL and geo-spatial features of Postgres
  4. Unique use-case and customer stories
  5. Useful new features in PostgreSQL 10
For more examples you can refer to the papers which were presented at last year's event - 2016 and 2017.
Of course we are happy to accept paper proposals on any other interesting topics as well.
To submit you proposal for presentation please go to this link -
To be Announced
Call for Papers
2017-10-10: Proposals acceptance begins
2017-12-04: Proposals acceptance ends
2017-12-13: Authors of accepted proposals contacted
If you need any additional information please contact our team at pgconfapac(at)googlegroups(dot)com.
This conference is organized by the PostgreSQL people from Asian communities. If you have any question, feel free to contact us on pgconfapac(at)googlegroups(dot)com.
For sponsorship related queri

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and Wordpress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal who aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.


Posted by Pavel Stehule on 2017-10-07 at 04:58
I wrote an article about PostgreSQL. It is in Czech, but Google Translate should work.
Posted by Simon Riggs in 2ndQuadrant on 2017-10-06 at 16:35

We recently hosted the London PostgreSQL Meetup at a new location in Covent Garden. The room was great, and it's about 100m from Leicester Square tube, so very centrally located and easy to get to.

I’ve been the organizer of the group for more than 6 years now, in my role as the PostgreSQL Project’s UK Press representative, though a few others have also helped out, principally Bart Swedrowski and now also Sam Marshall. Anybody interested in PostgreSQL is welcome to come along, and we have many speakers from a variety of companies and types of company. Please come along and join in with the other 849 members.

At our recent meeting we discussed all the new features of PostgreSQL 10. Lots of interest in the new SCRAM authentication mechanism, as well as solid interest in logical replication and partitioning.

Come along to the next meeting to discuss your feedback on PostgreSQL 10.

Great news! PostgreSQL 10 is out. The release notes look amazing and there are a lot of wonderful new changes and additions. On my personal list, the most desirable are improvements related to parallel queries, which were introduced in 9.6. In the new Postgres, more operations can be executed in parallel, such as B-tree index scans, bitmap heap scans, merge joins and non-correlated subqueries. Of course, the new Postgres has other great features, such as table publication/subscription based on logical replication, and declarative partitioning.
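As a quick illustration, the effect of parallelism can be observed with EXPLAIN (a sketch; big_table here is a hypothetical large table and the plan shape depends on your data and settings):

```sql
-- Allow up to 4 parallel workers per Gather node for this session
SET max_parallel_workers_per_gather = 4;

-- On a sufficiently large table, the plan may show a Gather node above
-- a Parallel Seq Scan, or, new in PostgreSQL 10, a Parallel Index Scan
-- or Parallel Bitmap Heap Scan
EXPLAIN SELECT count(*) FROM big_table;
```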

Here, I would like to shed some light on another innovation related to trigger functions - transition relations. What does the official documentation say about transition relations? These are sets of rows that include all of the rows inserted, deleted, or modified by the current SQL statement. The key point here is all. With transition relations, triggers have a global overview of a statement's output, not just one row at a time. Note that transition relations can only be used in conjunction with AFTER statement triggers.
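A transition relation is exposed to a trigger through the REFERENCING clause of CREATE TRIGGER. Here is a minimal sketch (the table, function and relation names are hypothetical; PostgreSQL 10 uses EXECUTE PROCEDURE):

```sql
-- A statement-level trigger function that can see every inserted row at once
CREATE FUNCTION count_inserted() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'rows inserted: %', (SELECT count(*) FROM new_rows);
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER count_inserted_trg
    AFTER INSERT ON some_table
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE count_inserted();
```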

Below I'm going to consider a simple example where they can be used.
Here are two tables: the first is an insert-only table with historical data; the second stores average aggregates based on the historical data from the first table.

# CREATE TABLE trendbar (
        period INTEGER,
        create_date TIMESTAMP,
        open BIGINT,
        high BIGINT,
        low BIGINT,
        close BIGINT,
        volume BIGINT,
        symbol_id BIGINT
);

# CREATE TABLE trendbar_avg (
        create_date TIMESTAMP,
        avg_open NUMERIC(10,2),
        avg_high NUMERIC(10,2),
        avg_low NUMERIC(10,2),
        avg_close NUMERIC(10,2),
        sum_volume BIGINT,
        symbol_list BIGINT[]
);

Let's imagine that our application populates this table with the following query:

# INSERT INTO trendbar
        SELECT
                1 AS period,
                now() AS create_date,
                (random() * 10000)::bigint AS open,
                (random() * 10000)::bigint AS high,   -- remaining columns reconstructed
                (random() * 10000)::bigint AS low,    -- to match the table definition
                (random() * 10000)::bigint AS close,
                (random() * 10000)::bigint AS volume,
                (random() * 100)::bigint AS symbol_id;
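The excerpt ends here, but a statement trigger using a transition relation to maintain the aggregate table might look something like the following sketch. The function body and the trigger, function, and relation names are assumptions, not taken from the original post; only the table definitions above are.

```sql
-- Sketch: aggregate every batch of inserted rows in one pass,
-- using the transition relation "new_rows" (all rows of the statement).
CREATE OR REPLACE FUNCTION trendbar_aggregate() RETURNS trigger AS $$
BEGIN
    INSERT INTO trendbar_avg
    SELECT now(),
           avg(open), avg(high), avg(low), avg(close),
           sum(volume),
           array_agg(DISTINCT symbol_id)
    FROM new_rows;      -- the transition relation: every row this INSERT added
    RETURN NULL;        -- return value is ignored for AFTER statement triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trendbar_after_insert
    AFTER INSERT ON trendbar
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE trendbar_aggregate();
```

The REFERENCING NEW TABLE AS clause is the PostgreSQL 10 syntax that exposes the transition relation to the trigger function.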
Posted by Kaarel Moppel in Cybertec on 2017-10-06 at 08:45

Although Postgres is a great all-around product that can't be beaten in price-performance ratio, there's one area where things could be a bit better: the validation of replicas, or perhaps even more broadly, of whole clusters. The problem that not so many people know about is that after building a new replica there's actually […]

The post Verifying PostgreSQL replicas appeared first on Cybertec - The PostgreSQL Database Company.

Posted by Umair Shahid in 2ndQuadrant on 2017-10-05 at 10:03

The short answer … Hell Yeah!

The long answer lies in the extensive improvements and the impressive list of new features that make up this major release – which, by the way, changes the versioning scheme of PostgreSQL as well (more details on that here). This wiki page lists out, in detail, all the new features in PostgreSQL 10, but for the purpose of this blog, I will focus on some of the exciting features contributed by 2ndQuadrant.

Year over year, 2ndQuadrant, as an organization, has been a thought leader in open source PostgreSQL development. This release is no different – we were able to get lots of code committed for the release, both in terms of new features and bug fixes.

Logical Replication

This is possibly the most talked-about feature in PostgreSQL 10 and also a big reason for the version number jumping from 9.x to 10. The feature's acceptance comes after multiple years of persistent effort from Petr Jelinek, later championed by Peter Eisentraut as well.

Logical replication allows more flexibility than physical replication, including replication between different major versions of PostgreSQL and selective-table replication. As part of this feature, we also contributed the ability to create temporary replication slots that are automatically removed on session exit or error.
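In PostgreSQL 10 syntax, a minimal publish/subscribe setup looks like the following sketch; the table, publication, and slot names and the connection string are placeholders, not taken from the post.

```sql
-- On the publishing server:
CREATE PUBLICATION my_pub FOR TABLE accounts, orders;

-- On the subscribing server (connection string is a placeholder):
CREATE SUBSCRIPTION my_sub
    CONNECTION 'host=primary dbname=app user=replicator'
    PUBLICATION my_pub;

-- A temporary replication slot: the third argument marks it temporary,
-- so it is dropped automatically at session exit or on error.
SELECT pg_create_logical_replication_slot('tmp_slot', 'test_decoding', true);
```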

Multi-Column Statistics

Another very exciting feature 2ndQuadrant contributed to PostgreSQL 10 is the ability to add multi-column optimizer statistics to compute the correlation ratio and number of distinct values. This addition is yet again the result of a multi-year effort, this time by Tomas Vondra. He was later assisted by David Rowley and Álvaro Herrera, who helped with reviews and code structure rework to get the feature into final shape for the release.

The feature is helpful in estimating query memory usage and combining the statistics from individual columns. It adds the following new commands: CREATE STATISTICS, ALTER STATISTICS, and DROP STATISTICS.
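The new commands can be sketched as follows; the table and column names here are hypothetical, chosen to show the classic case of two correlated columns.

```sql
-- Hypothetical table where city and zip are strongly correlated,
-- so per-column statistics alone mislead the planner:
CREATE STATISTICS zip_stats (ndistinct, dependencies)
    ON city, zip FROM addresses;

-- The extended statistics are collected on the next ANALYZE:
ANALYZE addresses;

-- And removed like any other object:
DROP STATISTICS zip_stats;
```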

But that’s not all …

Other major enhancements contributed by 2ndQuadrant …

Posted by Bruce Momjian in EnterpriseDB on 2017-10-04 at 14:45

You might be aware that Postgres uses invisible columns to track concurrent row access and modifications. My MVCC talk covers much of this.

Someone at a conference mentioned they were using the invisible xmin column to perform UPDATEs. I was initially skeptical of this approach, but once he explained the purpose, it made sense, e.g.:

INSERT INTO mvcc_test VALUES (1, 'Sal');
SELECT * FROM mvcc_test;
 x | name
---+------
 1 | Sal
SELECT xmin, * FROM mvcc_test;
 xmin | x | name
------+---+-------
  715 | 1 | Sal
UPDATE mvcc_test SET name = 'Sally' WHERE x = 1 AND xmin = 715;
SELECT xmin, * FROM mvcc_test;
 xmin | x | name
------+---+-------
  716 | 1 | Sally
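The point of the technique: if another session modified the row after it was read, its xmin no longer matches and the UPDATE affects zero rows, which the application can detect as a lost-update conflict. A sketch, reusing the table from the example above:

```sql
-- If a concurrent transaction changed the row after our earlier SELECT,
-- xmin has moved on and this statement reports "UPDATE 0";
-- the application can then re-read the row and retry.
UPDATE mvcc_test SET name = 'Sal Jr.' WHERE x = 1 AND xmin = 715;
```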

Continue Reading »

We recently formed a new London PostgreSQL Users Group, and our first meetup is on Oct 5th.


In this first meetup, we have two talks:

- I will speak about the new and exciting features in PostgreSQL 10.

- Ivan Novick will speak about "Merging PostgreSQL into Greenplum v5 and future roadmap"

Each session will be 45 mins.


2nd floor, The Warehouse, The Bower, 211 Old Street EC1V 9NR, London



Please follow us on Twitter.
Posted by Jim Mlodgenski on 2017-10-04 at 09:55

We just finished the first PGConf South Africa and it was a fantastic success. When we first started thinking about the event, we were considering 30 people attending as a good turnout and there were nearly 60 people in attendance. To put that in perspective, the first PGDay New York event in 2012 had 55 people attend. I can honestly say that South Africa and even the whole continent of Africa was craving a PostgreSQL event.

There were some great technical talks on topics like PL/pgSQL stored procedures, PL/Python stored procedures, replication, and security, but what was truly fascinating were the use cases. PostgreSQL is the key data store behind the MeerKAT telescope, handling a massive amount of sensor data; it helps South African farmers know where to plant their crops; and it is utilized extensively throughout the medical industry in South Africa. The coolest thing was learning how PostgreSQL is used for quality control on a 3D printer. In this case, what was remarkable was not how PostgreSQL was used, but that the 3D printer was actually a high-powered laser that melts titanium powder.

Thank you to the sponsors, Quant Solutions and OfferZen, for helping to make the event possible. And a special thank you to Kobus Wolvaardt, who organized the event.

PostgreSQL 10 will provide end users with countless new features. One of those features is related to "Foreign Data Wrappers" and is generally known as "aggregate pushdown". To be honest: this is one of my favorite new features of PostgreSQL 10, and therefore it might be worth sharing this piece of information with a broader […]

The post PostgreSQL: Distributed aggregates / aggregate pushdown appeared first on Cybertec - The PostgreSQL Database Company.