PostgreSQL
The world's most advanced open source database
Posted by Francisco Figueiredo Jr on 2014-08-01 at 20:21:00
The Npgsql Development team is proud to announce that Npgsql 2.2.0 beta1 is now available!

This release has the following highlights:


Visual Studio DDEX support


Kenji Uno added support for DDEX, so you can now use Npgsql with the Visual Studio data designer. This is a feature many of our users have requested in the past. Kenji also wrote a tutorial on how to use Npgsql with DDEX. You can find it here.


Support for EF Migrations and database creation in EF6+

David Karlaš added this support. It is now possible to start Code First projects without creating a database up front; Entity Framework and Npgsql will take care of it.

Emil Lenngren added support for many previously missing Entity Framework features.

Check out this list of Entity Framework fixes from David, Emil, and others.

SSLStream support


In this release, SSLStream is on by default.

If you want to use Mono.Security, you will need to use the following line in your code:

NpgsqlConnection.UseSslStream = false;

Note that this option may not be available in future releases, as we are removing the dependency on the Mono.Security assembly. Let us know in the forums if you have any problems with SSL when using SSLStream; this will help us determine whether we can remove the Mono.Security assembly.

Added support for JSON, JSONB and HSTORE datatypes


Shay added support for those datatypes.

Added GSSAPI support on Windows


Brian Crowell added support for GSSAPI on Windows, making it easier to authenticate users against Active Directory.


Removed support for protocol v2


PostgreSQL versions that only speak protocol version 2 are very old and no longer receive support from the PostgreSQL Global Development Group. By removing support for protocol version 2, we removed a lot of code and can focus on the functionality provided by protocol version 3.


Removed support for PostgreSQL 8.3 and below


In this thread, we talked about making Npgsql support the officially supported PostgreSQL versions. PostgreSQL has a table of officially supported versions and we decided t

[continue reading]

Posted by Shaun M. Thomas on 2014-07-29 at 20:23:42

Well, my publisher recently informed me that the book I’ve been slaving over for almost a year is finally finished. I must admit that PostgreSQL 9 High Availability Cookbook is somewhat awkward as a title, but that doesn’t detract from the contents. I’d like to discuss primarily why I wrote it.

When Packt first approached me in October of 2013, I was skeptical. I have to admit that I’m not a huge fan of the “cookbook” style they’ve been pushing lately. Yet, the more I thought about it, the more I realized it was something the community needed. I’ve worked almost exclusively with PostgreSQL since late 2005, with databases big and small. It was always the big ones that presented difficulties.

Back then, disaster recovery nodes were warm standby through continuous recovery at best, and pg_basebackup didn’t exist. Nor did pg_upgrade, actually. Everyone had their own favorite backup script, and major upgrades required dumping the entire database and importing it in the new version. To work with PostgreSQL then required a much deeper understanding than is necessary now. Those days forced me to really understand how PostgreSQL functions, which caveats to acknowledge, and which needed redress.

One of those caveats that still called out to me was one of adoption. With a lot of the rough edges removed in recent releases of PostgreSQL came increased usage in small and large businesses alike. I fully expected PostgreSQL to be used in a relatively small customer acquisition firm, for instance, but then I started seeing it in heavy-duty financial platforms. Corporate deployments of PostgreSQL require various levels of high availability, from redundant hardware all the way to WAL stream management and automated failover systems.

When I started working with OptionsHouse in 2010, their platform handled 8,000 database transactions per second. Over the years, that has increased to around 17k, and I’ve seen spikes over 20k. At these levels, standard storage solutions break down, and even failover systems are disruptive. A

[continue reading]

Posted by Josh Berkus in pgExperts on 2014-07-28 at 18:48:00
Recently I had reason to try to locate TOAST corruption on an older server (running 8.4.0), and found that the prior script I posted didn't always find the corruption.  As such, below is an improved function, which can easily be changed to a DO statement or whatever you want.

Note that TOAST corruption isn't a common thing; in fact, if you've kept up with your PostgreSQL updates like the update last Thursday, you should never see it.  This issue was subsequently fixed in 8.4.3, but the owner never applied that update.

Oh, also note that version 8.4 is now EOL.  If you haven't upgraded already, you should have a schedule for doing so.

This function assumes that you have a primary key column which is a SERIAL.  It would need to be altered for other designs.  It also assumes the postgres user can write to /tmp/testout.  See the original posts for how to use this.

create or replace function find_bad_toast2 (
   tablename text,
   pk_col text
)
returns text
language plpgsql
as
$f$
declare
   curid BIGINT := 0;
   badid BIGINT;
begin
FOR badid IN EXECUTE 'SELECT ' || pk_col || ' FROM ' || tablename LOOP
   curid = curid + 1;
   if curid % 100000 = 0 then
       raise notice '% rows inspected', curid;
   end if;
   begin
       EXECUTE 'COPY ( SELECT * FROM ' || tablename || ' WHERE ' ||
            pk_col || ' = ' || cast(badid as text) || ') TO ''/tmp/testout'';';
   exception
       when others then
           raise notice 'data for id % is corrupt', badid;
           continue;
   end;
end loop;
return 'done checking all rows';
end;
$f$;
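
For reference, a minimal invocation might look like the sketch below; the table and column names here are hypothetical, and the original posts describe usage in more detail. Corrupt rows are reported as NOTICEs of the form "data for id ... is corrupt", and the postgres user must be able to write to /tmp/testout.

-- Hypothetical example: scan table "events", whose SERIAL primary key is "event_id".
SELECT find_bad_toast2('events', 'event_id');
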
Posted by Michael Paquier in VMware on 2014-07-28 at 01:44:28

Postgres has in-core support for dynamic tracing: the ability to use an external utility to track specific code path calls and obtain an execution trace for further analysis. This feature is enabled at compile time by passing --enable-dtrace at the configure step. Note as well that on Linux systems you will need SystemTap installed (a development package may be needed for compilation, depending on your platform) to build with this support, plus extra kernel-level packages like kernel-devel to actually take traces. To check whether your installation is able to do dynamic tracing, simply run the following "Hello World" command.

$ stap -e 'probe begin { printf("Hello World\n") exit() }'
Hello World

Now, there are many things that can be done using the probes defined within Postgres and the functions SystemTap provides natively, the most intuitive being to print on-the-fly information about what is happening on the Postgres server. Here is, for example, a script that tracks transaction start and commit, printing at the same time some extra information about the process doing the operation:

probe process("/path/to/bin/postgres").mark("transaction__start")
{
    printf ("Start PID: %d, CPU: %d\n", pid(), cpu())
}

probe process("/path/to/bin/postgres").mark("transaction__commit")
{
    printf ("Commit PID: %d, CPU: %d\n", pid(), cpu())
}

When using SystemTap, the separator for probe marks is not a single dash "-" but a double underscore "__". The argument values within a probe mark can be accessed in the stap script as well, as $arg1, $arg2, etc. Now, let's run a simple transaction like this one:

=# BEGIN;
BEGIN
=# CREATE TABLE dtrace_tab (a int);
CREATE TABLE
=# select pg_backend_pid();
 pg_backend_pid
 ----------------
           14411
(1 row)
=# COMMIT;
COMMIT

This results in the following output when running stap with the script:

$ sudo stap tx_track.d
Start PID: 14411, CPU: 0
Commit PID: 14411, CPU: 0

It is also possible to track

[continue reading]

Shaun Thomas’s recent post about client-side loops as an SQL anti-pattern is well worth a read if you’re relatively new to SQL-based application development.

It’s reminded me of another SQL coding anti-pattern that I see quite a lot: the naïve read-modify-write cycle. I’ll explain what this common development mistake is, how to identify it, and options for how to fix it.

Imagine your code wants to look up a user’s balance, subtract 100 from it if doing so won’t make it negative, and save it.

It’s common to see this written as three steps:

SELECT balance FROM accounts WHERE user_id = 1;
-- in the application, subtract 100 from balance if it's above
-- 100; and, where ? is the new balance:
UPDATE accounts SET balance = ? WHERE user_id = 1;

and everything will appear to work fine to the developer. However, this code is critically wrong, and will malfunction as soon as the same user is updated by two different sessions at the same time.

Imagine two concurrent sessions, each subtracting 100 from the user’s balance, starting with an initial value of 300.

Session 1: SELECT balance FROM accounts WHERE user_id = 1; (returns 300)
Session 2: SELECT balance FROM accounts WHERE user_id = 1; (also returns 300)
Session 1: UPDATE accounts SET balance = 200 WHERE user_id = 1; (300 - 100 = 200)
Session 2: UPDATE accounts SET balance = 200 WHERE user_id = 1; (300 - 100 = 200)

Whoops! The balance is 200, but you took out 200 from a starting point of 300. So 100 has just vanished.

Most testing and development is done on standalone servers running single sessions, so unless you’re doing rigorous testing this sort of thing often doesn’t get noticed until production, and can be painful to debug. It’s important to know about it so you can code defensively.
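
One common defensive fix is to push the check and the arithmetic into a single atomic statement, so the database applies them under its own row lock. This is only a sketch of that idea, not necessarily the approach the rest of the article settles on:

-- Subtract 100 only if the balance will not go negative, in one atomic statement.
UPDATE accounts
   SET balance = balance - 100
 WHERE user_id = 1
   AND balance >= 100;
-- The application can inspect the reported row count to see whether the
-- withdrawal actually took place.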

I often have people on Stack Overflow ask things to the tune of “Don’t transactions prevent this?”. Unfortunately, while great, transactions aren’t a magic secret sauce you can add for easy concurrency. The only thing that lets you completely ignore concurrency issues is to LOCK TABLE every table

[continue reading]

Posted by Shaun M. Thomas on 2014-07-25 at 17:12:06

Programming is fun. I love programming! Ever since I changed my career from programming to database work, I’ve still occasionally dabbled in my former craft. As such, I believe I can say this with a fair amount of accuracy: programmers don’t understand databases. This isn’t something small, either; there’s a fundamental misunderstanding at play. Unless the coder happens to work primarily with graphics, bulk set-based transformations are not something they’ll generally work with.

For instance, if tasked with inserting ten thousand records into a database table, a programmer might simply open the data source and insert them one by one. Consider this basic (non-normalized) table with a couple basic indexes:

CREATE TABLE sensor_log (
  sensor_log_id    SERIAL PRIMARY KEY,
  location         VARCHAR NOT NULL,
  reading          BIGINT NOT NULL,
  reading_date     TIMESTAMP NOT NULL
);

CREATE INDEX idx_sensor_log_location ON sensor_log (location);
CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

Now suppose we have a file with ten thousand lines of something like this:

38c-1401,293857,2014-07-25 10:18:38-05:00
69a-8921,209574,2014-07-25 10:18:25-05:00
9e5-0942,690134,2014-07-25 10:18:16-05:00

To load this data, our coder chooses Python and whips up an insert script. Let’s even give the programmer the benefit of the doubt, and say they know that prepared queries are faster due to less overhead. I see scripts like this all the time, written in languages from Java to Erlang. This one is no different:

import psycopg2

db_conn = psycopg2.connect(database = 'postgres', user = 'postgres')
cur = db_conn.cursor()

cur.execute(
  """PREPARE log_add AS
     INSERT INTO sensor_log (location, reading, reading_date)
     VALUES ($1, $2, $3);"""
)

file_input = open('/tmp/input.csv', 'r')
for line in file_input:
    cur.execute("EXECUTE log_add(%s, %s, %s)", line.strip().split(','))
file_input.close()

cur.execute("DEALLOCATE log_add");

db_conn.commit()
db_conn.close()

It’s unlikely we have the /tmp/input.csv

[continue reading]

Today's release marks the end of life for PostgreSQL 8.4, after 5 years of service.

I've found it interesting that PostgreSQL 8.4 was probably the first release to actually make it possible to run fairly busy PostgreSQL installations without any hands-on at all. There were many important parts to 8.4 that made that happen, such as much better standby database support (though not hot standby yet - that came in 9.0) and better statistics and monitoring systems. 8.4 also came with Recursive CTEs (WITH-queries) and Window Aggregates, both of which made a whole new set of applications possible.

But I think nothing actually brought about this "run without hands-on" as much as the new code for the Visibility Map (VM) and in particular the automatic sizing of the Free Space Map (FSM). Anybody who deployed 8.3 or earlier in any kind of busy environment knows the pain of trying to configure max_fsm_pages correctly. It was almost impossible to get it right, and the value of "right" kept changing with your load and data. And the worst part of it all was that if you got it wrong you were in trouble - there wasn't really any remedy other than taking your database offline (or at least read-only) for manual full-database vacuums (there were tricks to get around the vacuum specifics, but the core problem remained). So what happened was that a lot of those people who knew what to do just increased that number to something that was "big enough", which usually meant "way too big", thus wasting resources. And even then, they sometimes ran into it not being big enough because the workload changed.

In fact, I think more or less every single system I reviewed for customers on those versions had a substantially wrong value for max_fsm_pages - usually the default, because they had no idea what to do with it - and they were paying the price for it.

Extra funny is that I've been talking to Heikki (who wrote the dynamic FSM and VM code) a few times recently, and he's still surprised that these changes had such a big impact on real life d

[continue reading]


Image by Flickr user Rebecca Siegel (cropped)

Bucardo's much publicized ability to handle multiple data sources often raises questions about conflict resolution. People wonder, for example, what happens when a row in one source database gets updated one way, and the same row in another source database gets updated a different way? This article will explain some of the solutions Bucardo uses to solve conflicts. The recently released Bucardo 5.1.1 has some new features for conflict handling, so make sure you use at least that version.

Bucardo does multi-source replication, meaning that users can write to more than one source at the same time. (This is also called multi-master replication, but "source" is a much more accurate description than "master"). Bucardo deals in primary keys as a way to identify rows. If the same row has changed on one or more sources since the last Bucardo run, a conflict has arisen and Bucardo must be told how to handle it. In other words, Bucardo must decide which row is the "winner" and thus gets replicated to all the other databases.

For this demo, we will again use an Amazon AWS instance. See the earlier post about Bucardo 5 for directions on installing Bucardo itself. Once it is installed (after the './bucardo install' step), we can create some test databases for our conflict testing. Recall that we have a handy database named "shake1". As this name can get a bit long for some of the examples below, let's make a few database copies with shorter names. We will also teach Bucardo about the databases, and create a sync named "ctest" to replicate between them all:

createdb aa -T shake1
createdb bb -T shake1
createdb cc -T shake1
bucardo add db A,B,C dbname=aa,bb,cc
## autokick=0 means new data won't replicate right away; useful for conflict testing!
bucardo add sync ctest dbs=A:source,B:source,C:source tables=all autokick=0
bucardo start

Bucardo has three general ways to handle conflicts: built-in strategies, a list of databases, or custom conflict handlers. The primary strategy, and al

[continue reading]

I am glad to announce the beta release of TPC-C.js, which implements one of the most popular database benchmarks, TPC-C. It’s not a coincidence that today is also the 22nd anniversary of the TPC-C benchmark.

It currently supports the Postgres database, but can be easily extended to test other database systems.

You might ask “Why another TPC-C implementation when we already have so many of them?”

Short answer: This one is very light on system resources, so you can

  1. Run the benchmark strictly adhering to the specification, and
  2. Invest more in database hardware, rather than client hardware.

Long answer: It’s covered in the Motivation section of TPC-C.js, which I’ll quote here:

Motivation

The TPC-C benchmark drivers currently available to us, like TPCC-UVa, DBT2, HammerDB, BenchmarkSQL, etc., all run one process (or thread) per simulated client. Because the TPC-C benchmark specification limits the max tpmC metric (transactions per minute of benchmark-C) from any single client to be 1.286 tpmC, this means that to get a result of, say, 1 million tpmC we have to run about 833,000 clients. Even for a decent number as low as 100,000 tpmC, one has to run 83,000 clients.

Given that running a process/thread, even on modern operating systems, is a bit expensive, it requires a big upfront investment in hardware to run the thousands of clients required for driving a decent tpmC number. For example, the current TPC-C record holder had to run 6.8 million clients to achieve 8.55 million tpmC, and they used 16 high-end servers to run these clients, which cost them about $220,000 (plus $550,000 in client-side software).

So, to avoid those high costs, these existing open-source implementations of TPC-C compromise on one of the core requirements of the TPC-C benchmark: keying and thinking times. These implementations resort to just hammering the SUT (system under test) with a constant barrage of transactions from a few clients (ranging from 10 to 50).

So you can see that even though a decent modern database (running on a single mach

[continue reading]

Planner estimates have already been discussed on this blog in my previous posting and also in some earlier postings. A couple of years ago I stumbled over an interesting issue which is commonly known as “cross correlation”. Let us consider the following example: test=# CREATE TABLE t_test (a int, b int); CREATE TABLE test=# […]
Posted by Andrew Dunstan in pgExperts on 2014-07-21 at 19:25:00
Someone was just talking about the size of some source files in PostgreSQL. The source code (.c, .h, .y and .l files) weighs in at a bit over 1 million lines of code. The documentation source has another roughly 300,000 lines. That's a large project, but by no means enormous by today's standards. The biggest source code file is pg_dump.c, at around 15,700 lines. The biggest documentation file is funcs.sgml, at around 17,600 lines. Both of these might well do with a bit of reorganization.

Postgres 9.5 is coming up with a new ECHO mode for psql that has been introduced by this commit:

commit: 5b214c5dd1de37764797b3fb9164af3c885a7b86
author: Fujii Masao <fujii@postgresql.org>
date: Thu, 10 Jul 2014 14:27:54 +0900
Add new ECHO mode 'errors' that displays only failed commands in psql.

When the psql variable ECHO is set to 'errors', only failed SQL commands
are printed to standard error output. Also this patch adds -b option
into psql. This is equivalent to setting the variable ECHO to 'errors'.

Pavel Stehule, reviewed by Fabrízio de Royes Mello, Samrat Revagade,
Kumar Rajeev Rastogi, Abhijit Menon-Sen, and me.

Up to now, there have been two ECHO modes:

  • "all", to print to the standard output all the queries before they are parsed or executed. This can be set when starting psql with option -a.
  • "queries", to have psql print all the queries sent to server. This can be set additionally with option -e of psql.

The new mode is called "errors" and can be set either with the option -b when starting psql or with the \set command in a psql session, like this:

=# \set ECHO errors

The feature added is simple: have psql print all the failed queries to the standard error output. The failed query is printed in an additional field prefixed with STATEMENT:

=# CREATE TABLES po ();
ERROR:  42601: syntax error at or near "TABLES"
LINE 1: CREATE TABLES po ();
           ^
LOCATION:  scanner_yyerror, scan.l:1053
STATEMENT:  CREATE TABLES po ();

If multiple queries are specified within a single input, only the query that failed is displayed:

=# CREATE TABLE aa (a int); CREATE FOO po; CREATE TABLE bb (a int);
CREATE TABLE
ERROR:  42601: syntax error at or near "FOO"
LINE 1: CREATE FOO po;
               ^
LOCATION:  scanner_yyerror, scan.l:1053
STATEMENT:  CREATE FOO po;
CREATE TABLE

Also, queries that are typed across multiple lines are shown as they are, spaces included:

=# SELECT
      col1_not_here,
      col2_not_here
   FROM
      table_not_here;
ERROR:  42P01: relation "table_not_here" does not exist
LINE 

[continue reading]

Posted by Hans-Juergen Schoenig in Cybertec on 2014-07-17 at 13:15:38
In 99% of all cases the PostgreSQL planner does a perfect job of optimizing your queries and making sure that you can enjoy high performance and low response times. The infrastructure ensuring this is both sophisticated and robust. However, there are some corner cases which can turn out to be quite […]
On 14th of July, Alvaro Herrera committed patch: psql: Show tablespace size in \db+   Fabrízio de Royes Mello As I previously mentioned – I'm a sucker for psql additions. And while today's patch is rather small, I really like it. What does it do? It simply makes \db+ show the size of all objects in a given tablespace. […]
Posted by gabrielle roth on 2014-07-15 at 15:53:00

When: 7-8pm Tuesday
Where: Oregon Convention Center, Room E147

We’re having a Birds of a Feather session at OSCON instead of our usual July meeting. Come hang out with Pg peeps who are in town for the conference! You do not need to be registered for the conference to attend the BoF.

There will be an additional social hour afterwards should we need it.

See you there!


On 10th of July, Tom Lane committed patch: Implement IMPORT FOREIGN SCHEMA.   This command provides an automated way to create foreign table definitions that match remote tables, thereby reducing tedium and chances for error. In this patch, we provide the necessary core-server infrastructure and implement the feature fully in the postgres_fdw foreign-data wrapper. Other […]

PostgreSQL can, as many other products, use SSL to secure client/server communications. It can be configured to be optional or required, to require a client certificate, or even to use the SSL client certificate to log in. However, the DBA tools are currently a bit lacking in this regard. In particular, there is no way for a DBA to see what SSL parameters are in force (or indeed if SSL is enabled at all) for an already existing connection.

There are multiple ways to see the status of a specific connection (such as the libpq PQgetssl() function, the psql startup message, or the sslinfo module). Unfortunately all these methods have one thing in common - they are only available to the process making the connection to the database, not to a DBA inspecting the system from the outside.

9.4 will make it a little bit better, because log_connections now includes SSL information when the user connects, similar to:
LOG:  connection authorized: user=mha database=postgres SSL enabled (protocol=TLSv1.1, cipher=ECDHE-RSA-AES256-SHA)

But this can still be a bit of a pain to figure out for existing connections, of course.

To deal with this problem, I've written a small PostgreSQL extension called pg_sslstatus. When loaded using shared_preload_libraries it will keep track of the SSL status for each connection, and expose it through a system view named pg_sslstatus like this:

postgres=# SELECT * FROM pg_sslstatus;
  pid  | ssl | bits | compression | version |        cipher        |                         clientdn                         
-------+-----+------+-------------+---------+----------------------+----------------------------------------------------------
 27286 | t   |  256 | f           | TLSv1   | ECDHE-RSA-AES256-SHA | 
 26682 | t   |  256 | t           | TLSv1   | ECDHE-RSA-AES256-SHA | /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=magnus
 26693 | f   |      |             |         |                      | 
(3 rows)

It will include one row for each connection, and can then be joined with either pg_stat_activity or
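
For example, a join along these lines (a sketch based on the columns shown above) puts the SSL details next to the database and user of each backend:

SELECT a.datname, a.usename, s.ssl, s.version, s.cipher
  FROM pg_stat_activity a
  JOIN pg_sslstatus s ON s.pid = a.pid;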

[continue reading]

IMPORT FOREIGN SCHEMA is a SQL command defined in the SQL specification that makes it possible to import, from a foreign source, a schema made of foreign tables. Support for it has been added in Postgres 9.5 with the following commit:

commit 59efda3e50ca4de6a9d5aa4491464e22b6329b1e
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Thu Jul 10 15:01:31 2014 -0400

Implement IMPORT FOREIGN SCHEMA.

This command provides an automated way to create foreign table definitions
that match remote tables, thereby reducing tedium and chances for error.
In this patch, we provide the necessary core-server infrastructure and
implement the feature fully in the postgres_fdw foreign-data wrapper.
Other wrappers will throw a "feature not supported" error until/unless
they are updated.

Ronan Dunklau and Michael Paquier, additional work by me

This feature is made of two parts:

  • A new API available for foreign data wrappers to support this command
  • Support for this command in postgres_fdw, the foreign data wrapper (FDW) for PostgreSQL available in core (see the example just below).
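
From the user's point of view, the command itself looks like the following sketch, where the server, schema, and table names are hypothetical and a postgres_fdw server is assumed to already exist:

-- Import every foreign table of the remote schema "public", except "big_log",
-- as foreign tables in the existing local schema "remote_public".
IMPORT FOREIGN SCHEMA public
    EXCEPT (big_log)
    FROM SERVER remote_server
    INTO remote_public;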

The new API available has the following shape:

List *
ImportForeignSchema (ImportForeignSchemaStmt *stmt, Oid serverOid);

ImportForeignSchemaStmt is a parsed representation of the raw IMPORT FOREIGN SCHEMA query, and serverOid is the OID of the FDW server used for the import. The parsed statement contains all the information a FDW needs to rebuild a schema fetched from a remote source, mainly:

  • The type of import, in stmt->list_type, along with the table list (except for ALL):
    • FDW_IMPORT_SCHEMA_LIMIT_TO (LIMIT TO clause specified in the query) for a restricted list of table names to import
    • FDW_IMPORT_SCHEMA_EXCEPT (EXCEPT clause specified in the query) for a list of tables not to fetch during the import
    • FDW_IMPORT_SCHEMA_ALL (no LIMIT TO or EXCEPT clause in the query) to let the FDW know that all the tables of the foreign schema should be imported
  • The remote schema name
  • A list of options to customize the import

Then this API needs to return a list of raw queries that will be applied as-is by the server after pars

[continue reading]

This is a maintenance release with a few interesting upgrades.

Version 9.3-1102 (2014-07-10)

Author: epgrubmair
    fix copyOut close hanging, bug #161, from epgrubmair

Author: romank0

    backpatch exception during close of fully read stream, from romank0

Author: Christophe Canovas

    Added caching for ResultSetMetaData (complete commit)

Author: Elizabeth Chatman
    NullPointerException in AbstractJdbc2DatabaseMetaData.getUDTs

    setNull, setString, setObject may fail if a specified type cannot be transferred in a binary mode #151

    backpatch fix for changing datestyle before copy

Author: TomonariKatsumata
    binary transfer fixes, new feature -1 for forceBinaryTransfer

Author: Sergey Chernov
    connectTimeout property support, backpatch

Author: Naoya Anzai
    fix prepared statement ERROR due to EMPTY_QUERY defined as static.

9.4 jars can also be found on the site.

I work with a lot of open source projects, and I use the command-line for almost everything. It often happens that I need to examine a file from a project, and thanks to bash, Github, and curl, I can do so easily, without even needing to have the repo handy. One of the things I do sometimes is compare a file across versions to see what has changed. For example, I needed to see what changes were made between versions 1.22 and 1.23 to the file includes/UserMailer.php which is part of the MediaWiki project. For this trick to work, the project must be on Github, and must label their versions in a consistent manner, either via git branches or git tags.

MediaWiki exists on Github as wikimedia/mediawiki-core. The MediaWiki project tags all of their releases in the format X.Y.Z, so in this example we can use the git tags 1.22.0 and 1.23.0. Github is very nice because you can view a specific file at a certain commit (aka a tag), and even grab it over the web as a plain text file. The format is:

https://raw.githubusercontent.com/PROJECTNAME/BRANCH-OR-TAG/FILE

Note that you can use a tag OR a branch! So to compare these two files, we can use one of these pairs:

https://raw.githubusercontent.com/wikimedia/mediawiki-core/REL1_21/includes/UserMailer.php
https://raw.githubusercontent.com/wikimedia/mediawiki-core/REL1_22/includes/UserMailer.php

https://raw.githubusercontent.com/wikimedia/mediawiki-core/1.21.0/includes/UserMailer.php
https://raw.githubusercontent.com/wikimedia/mediawiki-core/1.22.0/includes/UserMailer.php

All that is left is to treat git as a web service and compare the two files at the command line ourselves. The program curl is a great tool for downloading the files, as it dumps to stdout by default. We will add a -s flag (for "silent") to prevent it from showing the progress meter as it usually does. The last bit of the puzzle is to use <(), bash's process substitution feature, to trick diff into comparing the curl outputs as if they were files. So our final command is:

diff <(curl -

[continue reading]

Posted by Kirk Roybal on 2014-07-08 at 18:40:04

Just a quick reminder that Dallas/Fort Worth PostgreSQL Users Group has a Meetup the first Wednesday of every month.

What: PostgreSQL-backed Websites: Middleware Considered Harmful

Who: Justin Tocci

When: Wednesday, August 6, 2014 7:00 PM

Where:
Improving Enterprises
16633 Dallas Parkway Suite 110 Addison, TX 75001

DFW PUG on Meetup

Posted by Paul Ramsey on 2014-07-07 at 21:07:00

Just a quick public service announcement for blog followers in the Pacific Northwest and environs: you've got a once in a not-quite-lifetime opportunity to attend the "Free and Open Source Software for Geospatial" (aka FOSS4G) conference this year in nearby Portland, Oregon, a city so hip they have trouble seeing over their pelvis.

Anyone in the GIS / mapping world should take the opportunity to go, to learn about what technology the open source world has available for you, to meet the folks writing the software, and to learn from other folks like you who are building cool things.

September 8th-13th, be there and be square.

One of our customers was migrating a table from SQL Server to PostgreSQL using EnterpriseDB's Migration ToolKit. This table has a boolean column. During migration, MTK converts the "boolean" data type to "bit" in PostgreSQL, and the process was taking 6 hours to complete. The customer wanted to change the data type from "bit" to "integer", and the ALTER command for changing the type was taking another 6 hours in PostgreSQL. If he migrates only the structure to PostgreSQL first and then changes the type from "bit" to "integer", the data load fails with the error below. If the load were allowed, the migration would take only 6 hours, since there would be no need to alter the type after the data load.

ERROR: column "hidehelm" is of type integer but expression is of type boolean
Hint: You will need to rewrite or cast the expression.

So I found a workaround: make the cast from "boolean" to "integer" implicit by updating the "pg_cast" catalog as shown below. With this, he can load the boolean data directly into the integer column, which saves the time of altering the type from bit to integer after migrating.

postgres=# insert into tarik values (1::boolean);
ERROR: column "t" is of type integer but expression is of type boolean
LINE 1: insert into tarik values (1::boolean);
HINT: You will need to rewrite or cast the expression.
postgres=# select * from pg_cast where castsource='boolean'::regtype and casttarget='int4'::regtype;
 castsource | casttarget | castfunc | castcontext | castmethod
------------+------------+----------+-------------+------------
         16 |         23 |     2558 | e           | f
(1 row)
postgres=# update pg_cast set castcontext ='i' where castsource='boolean'::regtype and casttarget='int4'::regtype;
UPDATE 1
postgres=# select * from pg_cast where castsource='boolean'::regtype and casttarget='int4'::regtype;
 castsource | casttarget | castfunc | castcontext | castmethod
------------+------------+----------+-------------+------------
         16 |         23 |     2558 | i           | f
(1 row)
postgres=# insert into tarik values (1::boolean);

[continue reading]


The way I started the title might be confusing, "I corrupted my table", so everyone starts with "Crap!! Why did you do that!?"... so, just to justify it:
I see many customers coming to us to recover corrupted tables for which they have no backup. In such cases it is hard to recover the tables completely and it takes a lot of work; however, we can recover the salvageable data if they don't care about the corrupted rows.

Let me corrupt the table first.. :-)

I created a million-row table called "to_be_damaged":
postgres=# select count(*) from to_be_damaged ;
  count
---------
 1000000
(1 row)
postgres=# select relfilenode,relname from pg_class where relname='to_be_damaged';
 relfilenode |    relname
-------------+---------------
      461257 | to_be_damaged
(1 row)
I've used "hexedit" to damage it. Open relfilenode file from OS level using hexedit and try picking a line which is the start of an 8K boundary and typing hex DE AD BE EF across it.

postgres=# select count(*) from to_be_damaged ;
ERROR: invalid page in block 0 of relation base/12896/461257

Now create an identical table "salvaged" to receive the salvageable data from the "to_be_damaged" table.
postgres=# create table salvaged(t int);
CREATE TABLE
I prepared the function below, which copies the rows that are still salvageable:
create or replace function salvage_damaged()
returns void
language plpgsql
as $$
declare
    pageno int;
    tupno int;
    pos tid;
begin
    <<pageloop>>
    for pageno in 0..35930 loop  -- pg_class.relpages for the damaged table
        for tupno in 1..1000 loop
            pos = ('(' || pageno || ',' || tupno || ')')::tid;
            begin
                -- copy the row at this ctid if it can still be read
                insert into salvaged select * from to_be_damaged where ctid = pos;
            exception
                when sqlstate 'XX001' then
                    raise warning 'skipping page %', pageno;
                    continue pageloop;
                when others then
                    raise warning 'skipping row %', pos;
            end;
        end loop;
    end loop;
end;
$$;

Now run the function to copy the salvageable rows:
postgres# select salvage_damaged();
WARNING: skipping page 0
salvage_damaged
-----------------

(1 row)
postgres=# select

[continue reading]

Postgres 9.5 is going to improve the performance of subqueries using window functions by allowing the pushdown of WHERE clauses within them. Here is the commit, done during the first commit fest, that is the origin of this improvement:

commit d222585a9f7a18f2d793785c82be4c877b90c461
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date:   Fri Jun 27 23:08:08 2014 -0700

Allow pushdown of WHERE quals into subqueries with window functions.

We can allow this even without any specific knowledge of the semantics
of the window function, so long as pushed-down quals will either accept
every row in a given window partition, or reject every such row.  Because
window functions act only within a partition, such a case can't result
in changing the window functions' outputs for any surviving row.
Eliminating entire partitions in this way obviously can reduce the cost
of the window-function computations substantially.

David Rowley, reviewed by Vik Fearing; some credit is due also to
Thomas Mayer who did considerable preliminary investigation.

The pushdown of the WHERE qual is done only if two conditions are satisfied:

  • Only the partitioning columns are referenced
  • The qual contains no volatile functions

Let's have a look at how things are improved by using a simple data set: a list of individuals referenced by an ID, with information about the city where they live and their respective age (the authorities lacked imagination for the city names).

=# CREATE TABLE population (
     person_id serial,
     age int,
     city text);
CREATE TABLE
=# INSERT INTO population (age, city)
     SELECT round(random() * 100),
            'city ' || round(random() * 200)
     FROM generate_series(1, 1000000);
INSERT 0 1000000
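
Given the two conditions above, the kind of query that benefits is a window subquery filtered on its partitioning column, along these lines (a sketch using the table just created):

-- The qual on "city" references only the partitioning column and is not volatile,
-- so 9.5 can push it down into the subquery; a qual on "age" could not be pushed.
SELECT * FROM
    (SELECT city,
            avg(age) OVER (PARTITION BY city) avg_age
     FROM population) p
  WHERE city = 'city 42';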

Now, here is a query that evaluates the average, minimum and maximum age of the cities where the population is living. In 9.4, this query returns the following plan:

=# EXPLAIN SELECT * FROM
     (SELECT city,
        avg(age) OVER (PARTITION BY city) avg_age,
        min(age) OVER (PARTITION BY city) min_age,
        max(age) OVER

[continue reading]

PostgreSQL 9.4 will finally support huge pages. This article covers configuring huge pages on a RHEL 7 box.



Continue reading "Using huge pages on RHEL 7 and PostgreSQL 9.4"
In the 3rd post of the "PostgreSQL on RHEL 7" series, I will cover running more than one 9.4 instance on RHEL 7. This article is also valid for 9.3.
Continue reading "Running more than one 9.4 /9.3 instance in parallel on RHEL 7"
On 29th of June, Andres Freund committed patch: Add cluster_name GUC which is included in process titles if set.   When running several postgres clusters on one OS instance it's often inconveniently hard to identify which "postgres" process belongs to which postgres instance.   Add the cluster_name GUC, whose value will be included as part […]
Posted by Pavel Golub in MicroOLAP on 2014-07-02 at 09:34:10

As you probably know, the PostgreSQL sources are managed by the Git version control system. Which is great, but for me as an SVN user it would be much better to access the sources using familiar tools, e.g. TortoiseSVN, mainly because I don’t need write access, only read access. There is a workaround for people like me.

First of all, there is a mirror of the PostgreSQL sources on GitHub. And second, GitHub supports the SVN protocol, using a bridge to translate svn commands for GitHub.

So in my case I need only trunk with the latest commits. I’ve created an empty folder and made a Checkout of the https://github.com/postgres/postgres/trunk URL.


Filed under: Coding, PostgreSQL Tagged: development, git, PostgreSQL, svn, trick