
A key part of running a reliable database service is ensuring you have a good plan for disaster recovery. Disaster recovery comes into play when disks or instances fail and you need to be able to recover your data. In those cases, logical backups taken via pg_dump may be days old and not ideal to restore from. To remove the risk of data loss, many of us turn to the Postgres WAL to keep our data safe.

Years ago Daniel Farina, now a principal engineer at Citus Data, authored a continuous archiving utility to make it easy for Postgres users to prepare for and recover from disasters. The tool, WAL-E, has been used to keep millions of Postgres databases safe. Today we’re excited to introduce a new version of this tool: WAL-G. WAL-G, the successor to WAL-E, was created by a software engineering intern here at Citus Data, Katie Li, who is an undergraduate at UC Berkeley.

Introducing WAL-G from Citus, the successor to WAL-E

WAL-G is a complete rewrite that provides the same functionality as WAL-E, but delivers restores that are 4x faster during recovery. WAL-G brings:

  • Parallelization on restore for performance improvements
  • Backwards compatibility
  • Safety enhancements that check for incompletely restored backups

Let’s dig in deeper to all that’s new and improved with WAL-G.

4X Faster Disaster Recovery for your Postgres database

The goal of WAL-G was always to provide a noticeable improvement in terms of performance over WAL-E, and not just a rewrite for the sake of a rewrite. We sought to either reduce the footprint of the process that was running, or improve restore times, and if at all possible accomplish both.

We’re happy to say that WAL-G excels at both objectives—giving your database more resources and delivering faster restores from archives.
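For context, wiring a tool like WAL-G into Postgres archiving looks roughly like the sketch below; the storage location (e.g. an S3 prefix) is assumed to be configured through environment variables and is not taken from this post:

-- Sketch only: enable WAL archiving and hand completed segments to wal-g.
-- archive_mode requires a restart; archive_command only needs a reload.
ALTER SYSTEM SET archive_mode = 'on';
ALTER SYSTEM SET archive_command = 'wal-g wal-push %p';
SELECT pg_reload_conf();
-- For restores, recovery.conf points the other way, for example:
--   restore_command = 'wal-g wal-fetch %f %p'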

Performance histogram of WAL-E vs WAL-G

|       | Average throughput | Standard deviation | Median   |
|-------|--------------------|--------------------|----------|
| WAL-E | 323 Mb/s           | 236                | 307 Mb/s |
| WAL-G | 838 Mb/s           | 4.2                | 838 Mb/s |

The figure above shows the distribution of throughput over the course of a res

[...]
Posted by Umair Shahid in 2ndQuadrant on 2017-08-18 at 01:40

jOOQ is an ORM alternative that is relational-model centric rather than domain-model centric like most ORMs. For example, while Hibernate lets you write Java code and then automatically translates it to SQL, jOOQ lets you define relational objects in your database using SQL and then automatically generates the Java code to map to those relational objects.

The writers of jOOQ believe in the power of SQL and assume that you want low level control of the SQL running for your application. This means that you can write your database and your schema without having to worry about how (and if!) it can be handled in Java.

Why Use jOOQ?

While JPA provides a huge framework with a great deal of flexibility and power, it can very quickly become quite complex. jOOQ provides a simpler interface for cases where the developer doesn’t really require all the intricacies and fine tuning tools available with JPA.

Because of the way jOOQ is designed, it becomes very easy to write Java applications on top of an existing database. jOOQ helps you generate all the required classes and objects automatically, and you are all set to go (as demonstrated in the ‘Prominent Features’ section below).

As with Hibernate, database portability is a huge advantage of jOOQ. Also like Hibernate, type safety ensures you find out about errors at compile time rather than at runtime (which is one of the main irritants of JDBC). As opposed to writing SQL in JDBC, you can also enjoy the auto-complete features of your favorite IDE.

And of course, jOOQ is free to use with PostgreSQL (and all other open source databases)!

Prominent Features

Comprehensive documentation around the feature set available with jOOQ is listed on their website. To illustrate a few prominent features here, let’s use the same ‘largecities’ table that we used for HQL in the previous section.

Before starting off, please make sure you have downloaded jOOQ from http://www.jooq.org/download. Also, please ensure that you have the table ‘largecities’ available and data loaded in it.  
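The excerpt does not reproduce the table definition, so as a reference point here is a minimal sketch of what a ‘largecities’ table could look like; the column names and sample rows are assumptions for illustration only:

CREATE TABLE largecities (
    id         serial PRIMARY KEY,
    name       varchar(100) NOT NULL,
    population bigint
);

INSERT INTO largecities (name, population) VALUES
    ('Tokyo',    13617000),
    ('Shanghai', 24180000);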

Generating the C

[...]
Posted by Joshua Drake on 2017-08-16 at 16:00
We caught up with Alex Tatiyants after finding out about his Pev project. This is an awesome web-based visual EXPLAIN analyzer, similar to explain.depesz.

Tell us a little bit (one or two paragraphs) about your project or how you use Postgres: 

I created Pev (Postgres EXPLAIN Visualizer) to scratch my own itch. EXPLAIN generates a wealth of information, but isn’t easy to make sense of. I wanted to create a tool that helps me quickly diagnose problems with queries. Apparently, other people found it useful as well.
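For readers who have not fed a plan into a visualizer before, the input is typically captured along these lines; the exact options are not specified in the interview, and the table is hypothetical, but tools of this kind generally consume the JSON output format:

-- 'orders' is a hypothetical table used only for illustration.
EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON)
SELECT customer_id, sum(total)
  FROM orders
 GROUP BY customer_id;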


Pev plan


Why did you choose Postgres for your project?


Postgres is a fantastic database: performant, mature, feature rich, and of course open source. And in addition to being a first rate relational database, it has very strong document store features as well.


Have you attended a PgConf US event or do you plan to? 

I haven't had a chance to attend PgConf.

Are you interested in contributing to the community further and if so, in what fashion? 

I don’t have any concrete plans at the moment.

Any closing comments? 

Thank you for your interest.
Posted by Bruce Momjian in EnterpriseDB on 2017-08-16 at 14:45

Docker uses Linux containers (LXC) to allow application deployment in a pre-configured environment. Multiple such isolated environments can run on top of the same operating system. Docker is ideal for rapid, reproducible deployments.

How that relates to database deployments is an open question. The full power of Docker is that everything is in the container and the container can be easily destroyed and recreated. Because databases require a persistent state, a hybrid approach is necessary. Typical Docker database deployments use Docker for database installation and setup, and persistent storage for persistent state. This email thread explains the benefits of Docker for databases:

  • Isolation between environments
  • Deployment of tested and verified Docker images
  • Allows developers to use the same Docker images as production

and explains how to add persistent state:

  • Run one container on one node
  • Use bind mounts
  • Use --net=host

Continue Reading »

Posted by David Rader in OpenSCG on 2017-08-16 at 14:29

When you first read the title of this post, the words from the Cars’ song “You Might Think” may rattle through your head, because of course it's obvious what a migration is…

But, actually, there are at least three (3!) things people mean when they talk about database migrations:

As you ca

[...]

As Joe just announced, all ftp services at ftp.postgresql.org have been shut down.

That of course doesn't mean we're not serving files anymore. All the same things as before are still available through https. This change also affects any user still accessing the repositories (yum and apt) using ftp.

There are multiple reasons for doing this. One is that ftp is an old protocol and in a lot of ways a pain to deal with when it comes to firewalling (both on the client and server side).

The bigger one is the general move towards encrypted internet. We stopped serving plaintext http some time ago for postgresql.org, moving everything to https. Closing down ftp and moving that over to https as well is another step of that plan.

There are still some other plaintext services around, and our plan is to get rid of all of them replacing them with secure equivalents.

PGConf US, in partnership with Ohio Linux Fest, is pleased to announce that the call for papers for PGConf Local: Ohio is now open.

The inaugural PGConf US Local: Ohio Conference (PGConf Ohio) will be held September 29th - 30th at the Hyatt Regency Columbus, Ohio (350 North High Street, Columbus, Ohio, USA 43215).

This two day, single track conference is a perfect opportunity for users, developers, business analysts, and enthusiasts from Ohio to amplify Postgres and participate in the Postgres community.


The Call for Papers for PGConf Ohio can be found here.

Call for papers will be open until Sunday, August 24th, 2017 and speakers will be notified of acceptance/decline no later than Monday, September 1st, 2017.

Conference Schedule:
  • Friday, September 29, 2017: Trainings
Mastering Postgres Administration: Bruce Momjian
Postgres Performance and Maintenance: Joshua D. Drake 
  • Saturday, September 30, 2017: Breakout Sessions (To be announced)

Registration for the PGConf Ohio trainings is open now.

Conference speakers receive complimentary entry to the breakout sessions on September 30th. The half-day training options on September 29th are separately priced sessions. As a nonprofit event series, funding is currently not available for speaker travel and lodging accommodations.

Sponsorship Opportunities
The PGConf US Local series is supported by its generous sponsors: Diamond Sponsor Amazon Web Services and Platinum Sponsors Compose, 2ndQuadrant, and OpenSCG. Please contact us if you are interested in joining our wonderful sponsors for Ohio or National!

About PGConf US:
PGConf US is a nonprofit conference series with a focus on growing the community through increased awareness and education of Postgres. PGConf US is known for its highly attended national conference held in Jersey City, New Jersey, and has expanded to a local series for 2017.

The PGConf Local series partners with regional Postgres and open source groups to bring dynamic and engaging Postgres related content and professional training experien
[...]
Join the fantastic and growing Postgres community in Cape Town, South Africa for a single day event on October 3rd, 2017! The event is being hosted by fellow Postgres advocates who travel from South Africa each year to attend our National Event in order to increase their knowledge of Postgres and be a part of the community. This year they are joining us and making a commitment to build out our International community and conferences!

This single day event takes place at the same venue as PyCon South Africa and is scheduled the day before PyCon to ensure the greatest possible value in attending.


Local events are designed to bring comprehensive educational content and networking opportunities to the "local" Postgres community where the event is being held. They are perfect opportunities to show support for Postgres, find leads, and build relationships with other professionals and companies using and supporting Postgres.

Posted by Pavel Stehule on 2017-08-15 at 06:54
A lot is done, and a lot of code has been rewritten.

please, test it - https://github.com/okbob/pspg.

After much deliberation with the CMD community team we have launched the Denver Postgres User Group! We hope that our community in Denver, Boulder, and Colorado Springs will join us at upcoming events and submit content. It has been a long time since we have had an active Denver group and Denver is a hotbed for external Postgres development. Our first meeting will be announced soon and should be expected in October. Have ideas on facilities or content? Please contact us on the meetup page. We would love to see 3 - 4 meetings before PGConf US National!

Posted by Robert Haas in EnterpriseDB on 2017-08-14 at 19:48
The reaction to the new table partitioning feature in PostgreSQL 10 has been overwhelmingly positive, but a number of people have already astutely observed that there is plenty of room for improvement.  PostgreSQL 10, already at beta3, will go GA some time in the next month or two, and presumably once it does, more people will try out the new feature and find things they would like to see improved.  The good news is that there is already a substantial list of people, many of them my colleagues at EnterpriseDB, working on various improvements.  The following list of projects is complete to my knowledge, but of course there may be projects of which I'm unaware especially at companies where I don't work.  If you're working on something else, please leave a comment about it!
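For readers who have not yet tried the feature, declarative partitioning in PostgreSQL 10 looks roughly like this; the table and column names are illustrative and not tied to any of the projects mentioned:

CREATE TABLE measurement (
    city_id  int  NOT NULL,
    logdate  date NOT NULL,
    peaktemp int
) PARTITION BY RANGE (logdate);

-- One partition per month; rows are routed to the right partition on INSERT.
CREATE TABLE measurement_2017_08 PARTITION OF measurement
    FOR VALUES FROM ('2017-08-01') TO ('2017-09-01');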
Read more »
Posted by Bruce Momjian in EnterpriseDB on 2017-08-14 at 17:15

On the server side, high availability means having the ability to quickly failover to standby hardware, hopefully with no data loss. Failover behavior on the client side is more nuanced. For example, when failover happens, what happens to connected clients? If no connection pooler is being used, clients connected to the failed machine will need to reconnect to the new server to continue their database work. Failover procedures should guarantee that all connections to the failed server are terminated and that no new connections happen. (Reconnecting to the failed server could produce incorrect results and lost changes.) If a client is connected to a standby that is promoted to primary, existing client connections and new connections are read/write.

Clients connect to the new primary via operating-system-specific methods, usually either virtual IP addresses (VIP, good blog entry) or DNS entries with a short time to live (TTL). This is normally accomplished using dedicated high-availability or clustering software. Postgres 10 will also allow multiple host names to be tried by clients.
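To illustrate the Postgres 10 multiple-host support mentioned above, a libpq connection string can list several hosts and ask for a read/write server; the host names here are hypothetical:

psql 'postgresql://db1.example.com:5432,db2.example.com:5432/appdb?target_session_attrs=read-write'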

For clients using a connection pooler, things are even more complicated. Logically, you would think that, since clients didn't connect directly to the failed server, they should be able to continue their queries in the same session uninterrupted. Generally, this is not the case.

Continue Reading »

Posted by Dimitri Fontaine on 2017-08-14 at 14:37

There’s a very rich set of PostgreSQL functions to process text; you can find them all in the String Functions and Operators documentation chapter, with functions such as overlay, substring, position or trim. Or aggregates such as string_agg. And then regular expression functions, including the very powerful regexp_split_to_table.
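As a small illustration of the functions mentioned, here is regexp_split_to_table feeding string_agg; the input string is just an example:

SELECT string_agg(word, ' / ' ORDER BY word)
  FROM regexp_split_to_table('one,two,three', ',') AS t(word);
-- returns: one / three / two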

Posted by pgCMH - Columbus, OH on 2017-08-14 at 04:00

The August meeting will be held at 18:00 EST on Tues, the 22nd. Once again, we will be holding the meeting in the community space at CoverMyMeds. Please RSVP on MeetUp so we have an idea on the amount of food needed.

Topic

OpenSCG’s very own Doug Hunley will be presenting this month. He’s going to tell us all about how PostgreSQL uses MVCC to handle concurrency and what it means for your application and database maintenance.

Parking

Please park at a meter on the street or in the parking garage (see below). You can safely ignore any sign saying to not park in the garage as long as it’s after 17:30 when you arrive. Park on the first level in any space that is not marked ‘24 hour reserved’. Once parked, take the elevator to the 3rd floor to reach the Miranova lobby.

Finding us

The elevator bank is in the back of the building. Take a left and walk down the hall until you see the elevator bank on your right. Grab an elevator up to the 11th floor. Once you exit the elevator, look to your left and right. One side will have visible cubicles, the other won’t. Head to the side without cubicles. You’re now in the community space. The kitchen is to your right (grab yourself a drink) and the meeting will be held to your left. Walk down the room towards the stage.

If you have any issues or questions with parking or the elevators, feel free to text/call Doug at +1.614.316.5079

Posted by Bruce Momjian in EnterpriseDB on 2017-08-11 at 13:45

With the addition of logical replication in Postgres 10, we get a whole new set of replication capabilities. First, instead of having to replicate an entire cluster, as streaming replication does, you can replicate specific tables. With this granularity, you can broadcast a single table to multiple Postgres databases, or aggregate tables from multiple servers on a single server. This provides new data management opportunities.
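In Postgres 10 syntax, per-table replication is set up roughly as follows; the table, publication, and connection details below are placeholders:

-- On the publishing server:
CREATE PUBLICATION orders_pub FOR TABLE orders;

-- On the subscribing server, which has a matching 'orders' table:
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=primary.example.com dbname=appdb'
    PUBLICATION orders_pub;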

Another big advantage of logical replication is migrating between major Postgres versions. If both major Postgres versions support logical replication, you can set up logical replication between them and then switch over to the new major-version Postgres server with only seconds of downtime. It also allows you to downgrade back to the old major version, assuming logical replication is still working properly.

Quicker upgrade switching and the ability to downgrade in case of problems have been frequent feature requests that pg_upgrade has been unable to fulfill. For users who need this, setting up logical replication for major version upgrades will certainly be worth it.

Continue Reading »

Posted by David Rader in OpenSCG on 2017-08-10 at 21:42

A common first question during a database migration is “How do Oracle datatypes compare to PostgreSQL?”

The simple answer is that they are very compatible, and map easily. The table below shows an Oracle to PostgreSQL data type comparison and mapping for the most common Oracle types.

| Oracle Type | PostgreSQL Type | Comments |
|-------------|-----------------|----------|
| Char() | Char() | |
| Char(1) | Char(1) | If used as a boolean flag, use the boolean type instead |
| Varchar2() | Varchar() | |
| Timestamp | Timestamptz | In general, we recommend storing timestamps as time stamp with time zone (timestamptz), which is equivalent to Oracle’s timestamp with local time zone. This stores all values in UTC, even if the server or db client are in different timezones, which avoids a lot of problems. But some app code might have to be changed to timezone-aware types – if significant, use the "timestamp" without time zone to minimize migration changes. |
| Date | Timestamptz | PostgreSQL "Date" type stores the "date" only – no time portion |
| Date | Date | |
| Number() | Numeric() | PostgreSQL Numeric is similar to Oracle Number with variable precision and scale, so it could be used for any numerical fields, but native integer and floating point fields are sometimes preferred. |
| Number(5,0) | Integer | Integer and Bigint perform better than Number() when used for joins of large tables, so consider mapping to Int for primary and foreign key fields commonly used for joins. |
| Number(10,0) | Bigint | |
| Number( ,2) | Numeric( ,2) | PostgreSQL Numeric( ,2) is ideal for money types since it is exact precision (unless you’re dealing with Yen and need a ( ,0) type). The "money" type is equivalent to numeric in precision but occasionally causes surprises for applications because of implicit assumptions about formatting. Never use a floating point representation (float/double) due to potential rounding during arithmetic. |
| CLOB | Text | Text is much easier to use, no LOB functions, just treat it as a character field. Can store up to 1GB of text. |
| Long | Text | |
| BLOB | Bytea | |
| Long raw | | |
| Raw | | |
| XMLTYPE | XML | |
| UROWID | OID | Check the app |
[...]
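To make the mapping table above concrete, here is a small sketch of a PostgreSQL table definition using the recommended target types; the table and columns are invented purely for illustration:

CREATE TABLE customer (
    customer_id bigint PRIMARY KEY,          -- was Oracle NUMBER(10,0)
    name        varchar(200) NOT NULL,       -- was Oracle VARCHAR2(200)
    is_active   boolean NOT NULL,            -- was Oracle CHAR(1) flag
    balance     numeric(12,2) NOT NULL,      -- was Oracle NUMBER(12,2)
    created_at  timestamptz DEFAULT now(),   -- was Oracle TIMESTAMP
    notes       text                         -- was Oracle CLOB
);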
I saw an interesting recorded presentation on contempt culture by Aurynn Shaw, delivered this year at PyCon and shared on LinkedIn.  I had worked with Aurynn on projects back when she used to work for Command Prompt.  You can watch the video below:



Unfortunately, comments on a social media network are not sufficient for discussing nuance, so I decided to put this blog post together.  In my view she is very right about a lot of things, but there are some major areas where I disagree, and I therefore wanted to write a full post explaining what I see as an alternative to what she rightly condemns.

To start out, I think she is very much right that there often exists a sort of tribalism in tech, with people condemning each other's tools, whether it be Perl vs PHP (her example) or vi vs emacs, and I think that can be harmful.  The comments here are aimed at fostering the sort of inclusive and nuanced conversation that is needed.

The Basic Problem

Every programming culture has norms, and many times groups outside those norms tend to be condemned in some way or another. There are a number of reasons for this.  One is competition, and the other is seeking approval in one's in-group.  I think one could take her points further and argue that in part it is an effort to improve the standing of one's group relative to others around it.

Probably the best example we can come up with in the PostgreSQL world is the way MySQL is looked at.  A typical attitude is that everyone should be using PostgreSQL and therefore people choosing MySQL are optimising for the wrong things.

But where I would start to break with Aurynn's analysis would be when we contrast how we look at MySQL with how we look at Oracle.  Oracle, too, has some major oversights (empty string being null if it is a varchar, no transactional DDL, etc).  Almost all of us may dislike the software and the company.  But people who work with Oracle still have prestige.  So bashing tools isn't quite the same thing as bashing the people who use
[...]
Posted by Bruce Momjian in EnterpriseDB on 2017-08-09 at 16:45

For WAL archiving, e.g. archive_command, you are going to need to store your WAL files somewhere, and, depending on how often you take base backups, that archive might be very large.

Most sites that require high availability have both a local standby in the same data center as the primary, and a remote standby in case of data center failure. This brings up the question of where to store the WAL archive files. If you store them in the local data center, you get fast recovery because the files are quickly accessible, but if the entire data center goes down, you can't access them from the remote standby, which is now probably the new primary. If you store your WAL archive files remotely, it is difficult to quickly transfer the many files needed to perform point-in-time recovery.

My guess is that most sites assume that they are only going to be using WAL archive files for local point-in-time recovery because if you are running on your backup data center, doing point-in-time recovery is probably not something you are likely to do soon — you probably only want the most current data, which is already on the standby. However, this is something to consider, because with lost WAL you will need to take a base backup before you can do point-in-time recovery in the future.

Continue Reading »

When your database is small (10s of GB), it’s easy to throw more hardware at the problem and scale up. As your tables grow, however, you need to think about other ways to scale your database.

In one way, sharding is the best way to scale. Sharding enables you to linearly scale your database’s cpu, memory, and disk resources by separating your database into smaller parts. In other ways, sharding is a controversial topic. The internet is full of advice on sharding, from “essential to scaling your database infrastructure” to “why you never want to shard”. So the question is, whose advice should you take?

We always knew when the topic of sharding came up, the answer was, “it depends.”

The theory of sharding is simple: Pick one key (column) that evenly distributes your data. Make sure that most of your queries can be addressed by that key. This theory is simple, but once you dive into sharding your database, the practice becomes messy.
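To make the "pick one key" idea concrete, here is what it looks like with the create_distributed_table function from the Citus extension; the schema is hypothetical and the post itself does not prescribe this exact API:

CREATE TABLE page_views (
    tenant_id bigint      NOT NULL,
    page_id   bigint      NOT NULL,
    viewed_at timestamptz NOT NULL
);

-- Shard by tenant_id so that most per-tenant queries land on a single shard.
SELECT create_distributed_table('page_views', 'tenant_id');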

At Citus, we helped hundreds of teams as they looked into sharding their databases. As we helped them, we saw that some key patterns emerged.

In this blog post, we’ll first look at key properties that impact a sharding project’s success. Then we’ll dig into the underlying reason why opinions on sharding differ from one another. When it comes to sharding a mature database, the type of application you’re building impacts your success more than anything else.

Sharding’s Success Depends on Three Key Properties

We found that when you think about sharding your database, three key properties impact your project’s success. The following diagram shows those properties on three axes and also gives well-known company names as examples.

Axis of sharding

The x-axis in the diagram shows the workload type. This axis starts with transactional workloads on the left and continues onto data warehousing on the right. This dimension is the most recognized one when making scaling decisions.

The z-axis demonstrates another important property: where in your application lifecycle are you? How many tables do you have in your

[...]
Community,

The Chairs of PGConf US have rescheduled the Seattle and Austin Local events. After much deliberation we believe moving the events to a weekday format later in the year will offer a better opportunity for those who wish to attend.

New dates:
  • Seattle: November 13th and 14th, 2017
  • Austin: December 4th and 5th, 2017
The CFP for Seattle is closed but Austin is still open!

People, Postgres, Data

Posted by Dimitri Fontaine on 2017-08-08 at 15:55

In a previous article here we saw How to Write SQL in your application code. The main idea in that article is to keep your queries in separate SQL files, where it’s easier to maintain them, in particular if you want to be able to test them again in production, and when you have to work on and rewrite queries.

Posted by Bruce Momjian in EnterpriseDB on 2017-08-07 at 20:15

When the ability to run queries on standby servers (called hot_standby) was added to Postgres, the community was well aware that there were going to be tradeoffs between replaying WAL cleanup records on the standby and canceling standby queries that relied on records that were about to be cleaned up. The community added max_standby_streaming_delay and max_standby_archive_delay to control this tradeoff. To completely eliminate this tradeoff by delaying cleanup on the primary, hot_standby_feedback was added.

So, in summary, there is no cost-free way to have the primary and standby stay in sync. The cost will be either:

  1. Standby queries canceled due to the replay of WAL cleanup records
  2. Stale standby data caused by the delay of WAL replay due to cleanup records
  3. Delay of cleanup on the primary

The default is a mix of numbers 1 and 2, i.e. to wait for 30 seconds before canceling queries that conflict with about-to-be-applied WAL records.
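For reference, the settings behind those three options can be checked directly from psql; a small sketch, with the defaults noted in comments:

SHOW max_standby_streaming_delay;  -- default 30s: how long replay waits before canceling conflicting standby queries
SHOW max_standby_archive_delay;    -- default 30s: same, for WAL restored from archive
SHOW hot_standby_feedback;         -- default off: turning it on delays cleanup on the primary instead (option 3)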

Continue Reading »

Posted by Colin Charles on 2017-08-07 at 15:53

Not quite a “tab sweep”, this is me dumping out my OmniFocus todos!

  • QOTD: “Yesterday, you said tomorrow.” – Nike
  • Do you think about using MariaDB ColumnStore? Back in January, there was an interesting Twitter thread. It is in French, and things have improved, but this is a colourful description.
  • A PostgreSQL response to Uber: slides from Christophe Pettus, presented at Percona Live Santa Clara 2017, plus the Hacker News discussion. Well worth reading; this wasn’t an easy talk to get on the agenda, and the commentary is also particularly interesting.
Posted by Hubert 'depesz' Lubaczewski on 2017-08-07 at 14:43
A long time ago I wrote about my project, Versioning. Since then nothing really changed. But recently I found a case where I could use some more logic from Versioning, so I changed it. In the process, I also added somewhat better docs. The change itself is that now, when you write a patch using Versioning you […]
Posted by Pavel Stehule on 2017-08-05 at 05:17
I spent a lot of time working on pspg. These points are done:
  • support for expanded mode
  • fixed resizing
  • two new styles
  • start is significantly faster
  • lots of display errors were fixed
 This code should be compiled from source. If you want to test it, you need the development packages of ncursesw. If you don't need wide character support, you can compile pspg against the ncurses library (in this case the Makefile should be modified).

Usage:
\setenv PAGER 'pspg -s 2'
\pset pager always
select * from pg_stat_activity;

Posted by Regina Obe in PostGIS on 2017-08-05 at 00:00

The PostGIS development team is pleased to announce the release of PostGIS 2.4.0alpha. This is the first version to support PostgreSQL 10. Best served with PostgreSQL 10beta2. See the full list of changes in the news file.

Continue Reading by clicking title hyperlink ..

In Part 3 of this series (here are Part 1 and Part 2), I would like to demonstrate how the development of a new feature for Barman would flow through the Kanban board.

The Scenario

Suppose, as a team leader in the Barman project, one day I suddenly have the brilliant idea of adding the “Super Feature” functionality to Barman.

After speaking with the development team I create a post-it for the whiteboard, accompanied by a ticket in Redmine containing all the details of the task. I write the task ticket ID in the upper left corner of the post-it and then place it in the development board Backlog.

During the morning stand-up meeting (an informal meeting that takes place in front of our Kanban board with all the team standing. This is done deliberately so that it does not take too much time from our day!) I share with the team that I would like to set priority to the “Super Feature” and then move the ticket to the Ready column on the board, writing the start date in the bottom left corner.

From this moment on, we have committed, as a team, to bring this task to the front of our priorities.

Giulio, lead developer of Barman, follows through with his commitment and decides to volunteer to begin analysis and moves the post-it into Analysis-WIP on the board. Since he requires more information, Giulio arranges a brainstorming meeting, involving the entire team, including Ops.

At the end of the technical meeting, we have a clearer idea of what this feature should do and we have also asked that:

  1. Francesco (PostgreSQL and Linux expert, with passion for QA and automated testing) writes integration tests for the “Super Feature”
  2. Alessandro begins working on the documentation and the user interface (configuration and command line).

Two post-its will be created for these two tasks, and a clear dependency will be reported in Redmine and on the “Super Feature” post-it.

The analysis phase is over and the “Super Feature” post-it is now placed in the Analysis-Done column.

At this point, Giulio begins to develop the code and moves the pos

[...]

With the heyday of big data and people running lots of Postgres databases, sometimes one needs to join or search data from multiple absolutely regular and independent PostgreSQL databases (i.e. no built-in clustering extensions or such are in use) to present it as one logical entity. Think sales reporting aggregations over logical clusters or matching […]
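The excerpt is truncated, and the full post may take a different route, but a common building block for querying across independent Postgres databases is postgres_fdw; a minimal sketch with hypothetical names:

CREATE EXTENSION postgres_fdw;
CREATE SCHEMA remote;

CREATE SERVER sales_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db2.example.com', dbname 'sales');
CREATE USER MAPPING FOR CURRENT_USER SERVER sales_db
    OPTIONS (user 'report', password 'secret');

IMPORT FOREIGN SCHEMA public LIMIT TO (orders)
    FROM SERVER sales_db INTO remote;

-- Remote orders can now be joined against local tables:
SELECT c.name, count(*)
  FROM customers c
  JOIN remote.orders o ON o.customer_id = c.id
 GROUP BY c.name;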

The post Joining data from multiple Postgres databases appeared first on Cybertec - The PostgreSQL Database Company.

Posted by REGINA OBE in PostGIS on 2017-08-01 at 21:22
Reminder: Right after the Free and Open Source GIS conference in Boston is the OSGeo / LocationTech code sprint on Saturday August 19th 9AM-5PM at District Hall where project members from various Open Source Geospatial projects will be fleshing out ideas, documenting, coding, and introducing new folks to open source development. All are welcome including those who are unable to make the conference.

We are getting a final head-count this week to plan for food arrangements. If you are planning to attend, add your name to the list https://wiki.osgeo.org/wiki/FOSS4G_2017_Code_Sprint#Registered_Attendees. If you are unable to add your name to the list, feel free to send Regina an email at lr@pcorp.us with your name and projects you are interested in so I can add you to the list. Looking forward to hanging out with folks interested in PostgreSQL and PostGIS development.

District Hall is a gorgeous community space. Check out the District Hall View http://bit.ly/2f61J8c

Posted by Dave Cramer in credativ on 2017-08-01 at 19:00
The PostgreSQL JDBC team is pleased to announce the release of version 42.1.4.

Below are changes included since 42.1.1

Version 42.1.4 (2017-08-01)

Notable changes

  • Statements with non-zero fetchSize no longer require a server-side named handle. This might cause issues when combining old PostgreSQL versions (pre-8.4), fetchSize, and interleaved ResultSet processing. See issue 869

Version 42.1.3 (2017-07-14)

Notable changes
  • fixed NPE in PreparedStatement.executeBatch in case of empty batch (regression since 42.1.2) PR#867

Version 42.1.2 (2017-07-12)

Notable changes
  • Better logic for returning keyword detection. Previously, pgjdbc could be defeated by column names that contain returning, so pgjdbc failed to "return generated keys" as it considered the statement as already having a returning keyword. PR#824 201daf1d
  • Replication API: fix issue #834, setting statusIntervalUpdate causes high CPU load. PR#835 59236b74
  • perf: use server-prepared statements for batch inserts when prepareThreshold>0. Note: this enables batch to use server-prepared from the first executeBatch() execution (previously it waited for prepareThreshold executeBatch() calls) abc3d9d7