We just concluded the PgCon Developer Meeting. I had two big items. First, EnterpriseDB has dedicated staff to start work on parallelizing Postgres queries, particularly in-memory sorts; I have previously expressed the importance (and complexity) of parallelism. Second, EnterpriseDB has also dedicated staff to help improve Pgpool-II. Pgpool is the Swiss Army knife of replication tools, and I am hopeful that additional development work will further increase its popularity.
One of OmniTI's clients asked for help with a sample application that inserts JSON data into Postgres using the Java JDBC driver. I'm not a Java expert, so it took me a while to write even a simple Java program to insert data; to be honest, I got help writing the test application from one of our Java engineers at OmniTI. Now the test application is ready, and the next step is to make it work with the JSON datatype! After struggling a little to find a workaround for string escaping in the Java code, I stumbled upon a data type issue!
Here is the test application, connecting to my local Postgres installation and inserting JSON data into the sample table:
postgres=# \d sample
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 data   | json    |
denishs-MacBook-Air-2:java denish$ java -cp $CLASSPATH PgJSONExample
-------- PostgreSQL JDBC Connection Testing ------------
PostgreSQL JDBC Driver Registered!
You made it, take control your database now!
Something exploded running the insert: ERROR: column "data" is of type json but expression is of type character varying
Hint: You will need to rewrite or cast the expression.
After some research, I found out that there is no standard JSON type on the Java side, so adding JSON support to the Postgres JDBC driver is not straightforward. Craig's answer helped me test the JSON datatype handling at the psql level. As Craig mentioned in the answer, the correct way to solve this problem is to write a custom Java mapping type that uses the JDBC setObject method. This can be tricky, though. A simpler workaround is to tell PostgreSQL to cast implicitly from text to json:
postgres=# create cast (text as json) without function as implicit;
The WITHOUT FUNCTION clause is used because text and json have the same on-disk and in-memory representation; they're basically just aliases for the same data type. AS IMPLICIT tells PostgreSQL it can convert without being explicitly told to, allowing things like this to work:
postgres=# prepare test(tex
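For reference, here is a minimal sketch of the setObject route using the driver's PGobject wrapper; the table name comes from the sample table above, while the connection URL and credentials are placeholders for your own installation:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.postgresql.util.PGobject;

public class PgJSONInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings -- adjust for your installation
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "");

        // Wrap the JSON string in a PGobject tagged with the json type,
        // so the driver sends it as json rather than character varying
        PGobject jsonValue = new PGobject();
        jsonValue.setType("json");
        jsonValue.setValue("{\"name\": \"test\", \"active\": true}");

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO sample (id, data) VALUES (?, ?)");
        ps.setInt(1, 1);
        ps.setObject(2, jsonValue);
        ps.executeUpdate();

        ps.close();
        conn.close();
    }
}
```

Another variant worth knowing is to put the cast in the SQL text itself, e.g. `VALUES (?, ?::json)` with a plain setString, which avoids both the PGobject wrapper and the implicit cast.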
One of the things that really frustrated me about the KNN GiST distance operators (box and box-centroid) that came in PostgreSQL 9.1 and PostGIS 2.0 was the fact that one of the elements needed to be constant to take advantage of the index. In PostGIS speak, this meant you couldn't put it in the FROM clause and could only enjoy it in one of two ways.
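To illustrate the constant-element restriction, here is a minimal sketch (the table and column names `geonames`/`geom` are hypothetical): the indexed KNN form requires the column to be compared against a constant in the ORDER BY, while a row-vs-row comparison cannot use the index:

```sql
-- Uses the GiST index: the right-hand side is a constant
SELECT name
FROM geonames
ORDER BY geom <-> ST_SetSRID(ST_MakePoint(-71.06, 42.36), 4326)
LIMIT 10;

-- Does NOT use the KNN index: both sides vary per row
-- SELECT a.name, b.name
-- FROM geonames a, geonames b
-- ORDER BY a.geom <-> b.geom
-- LIMIT 10;
```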
Having recently posted my review of Shaun Thomas's "PostgreSQL Backup and Restore How-to", I was asked by Packt if I'd like to review the new "Instant PostgreSQL Starter" by Daniel K. Lyons, and they kindly provided me with access to the ebook version. As I'm happily in a situation where I may need to introduce PostgreSQL to new users, I was interested in taking a look, and here's a quick overview.
It follows the same "Instant" format as the backup booklet, which I quite like as it provides a useful way of focussing on particular aspects of PostgreSQL without being bogged down in reams of tl;dr documentation. "Instant Pg Starter" is divided into three sections:

- Installation
- Quick start – creating your first table
- Top 9 features you need to know about
It occurs to me I forgot to congratulate the winners of the free ebooks. So without further ado:
Congrats to the winners. More than that, though, I call upon them to pay it forward by contributing to the community, either by corresponding on the excellent PostgreSQL mailing lists, or maybe submitting a patch or two to the code. There's a lot of ground to cover, and more warm bodies always help.
Thanks again, everyone!
There’s a fantastic set of blog posts about distributed databases and network partitioning, starting with this post explaining the perils of trying to “communicate with someone who doesn’t know you’re alive.”
The whole series is worth reading for anyone interested in data stores, consistency and Postgres!
In our previous article we described what retention policies are and how they can be enforced on your PostgreSQL server backups with Barman 1.2. In this post, we will go through the configuration aspects.
For the sake of simplicity, we assume a typical scenario which involves taking a full backup once a week through the "barman backup" command. Suppose you want to automatically keep the latest 4 backups and let Barman automatically delete the obsolete ones.
The main configuration option for retention policies in Barman is "retention_policy", which can be defined at either global or server level. If you want all your servers to keep the last 4 periodical backups by default, add the following line to the general section of Barman's configuration file:
[barman]
... // General settings
retention_policy: REDUNDANCY 4
When the next "barman cron" command is executed (every minute if you installed Barman using RPM or Debian/Ubuntu packages), Barman checks the number of available full periodical backups for every server, orders them in descending chronological order (from the most recent to the oldest) and deletes backups from the 5th position onwards.
In case you have several servers backed up on the same Barman host and you want a different retention policy for a specific server, you can simply edit that server's configuration section (or file; see "Managing the backup of several PostgreSQL servers with Barman") and define a different setting:
[malcolm]
description = Malcolm Rocks
ssh_command = ssh malcolm
conninfo = host=malcolm port=5432 user=postgres dbname=postgres
retention_policy: REDUNDANCY 8
However, Barman also allows system administrators to manage retention policies based on time, in terms of recovery window and point of recoverability. For example, you can set another server to allow recovery at any point in time within the last 3 months:
[angus]
description = Angus Rocks
ssh_command = ssh angus
conninfo = host=angus port=5432 user=postgres dbname=postgres
retention_policy: RECOVERY WINDOW OF 3 MONTHS
So, here’s what I want to say today:
alembic revision -m "bug XXXXXX Add a new table" --autogenerate
The most difficult thing to deal with so far is the many user-defined functions that we use in Socorro. This isn't something that any of the migration tools I tested deals well with.
Happy to answer questions! And I’ll see about making a longer talk about this transition soon.
Do you use PostgreSQL and truly believe it’s “the world’s most advanced open source database” and that its upcoming 9.3 release will make it even more awesome?
Do you also use Python and believe it’s “an easy to learn, powerful programming language” with “elegant syntax” that makes it an ideal language for developing applications and tools around PostgreSQL, such as Pyrseas?
We have also been requested to add the capability to load and maintain “static data” (relatively small, unchanging tables) as part of yamltodb, so that it can be integrated more easily into database version control workflows.
And for the next release, Pyrseas 0.7, we'd like to include the first version of the database augmentation tool, which will support declarative implementation of business logic in the database, starting off with audit trail columns. Some work has been done on this already, but it needs to be integrated with the current code and tests.
Or perhaps coding is not your forte, but you’re really good at explaining and documenting technical “stuff”. Then you could give us a hand with revamping the docs, maybe writing a tutorial so that users have a smooth ride using our tools.
Or maybe you have your own ideas as to how to improve the PostgreSQL version control experience. We'd love to hear those too.
initdb [OPTION]... [DATADIR]
  -k, --data-checksums       use data page checksums

initdb -D data_directory -k

Now, any data corruption found will be reported as below:

postgres=# select * from corruption_test;
WARNING: page verification failed, calculated checksum 63023 but expected 48009
ERROR: invalid page in block 0 of relation base/12896/16409

In earlier versions, you got just an error message:

postgres=# select * from corruption_test where id=1;
ERROR: invalid page header in block 0 of relation base/12870/18192

That's cool, right?

Some points on disk page checksums:

$ export PGDATA=/usr/local/pg93beta/data
Data page checksum version: 1
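The "Data page checksum version" line above is what pg_controldata reports for the cluster; assuming the PGDATA path set above, a quick check might look like this:

```shell
$ pg_controldata /usr/local/pg93beta/data | grep -i checksum
Data page checksum version:           1
```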
The Call for Papers for DjangoCon US 2013 is now open.
This option would be incompatible with the tablekeyset and tablekeyprefix options. If given, the key won't be looked up at all; we would simply use the given key and return the corresponding list of values. That would make selecting from such a table faster, possibly lots faster. For scalars, sets and lists, the table would have one column (in the case of a scalar there would only be one row). For zsets, it would have two columns, one for the value and one for the score. Finally, for hashes it would have two, one for the property and one for the value.

CREATE FOREIGN TABLE hugeset (value text)
OPTIONS (tabletype 'list', singleton_key 'myhugelist');
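Following the same pattern, a zset would get the two-column shape described above; a hypothetical sketch (table and key names are made up for illustration):

```sql
CREATE FOREIGN TABLE topscores (value text, score numeric)
OPTIONS (tabletype 'zset', singleton_key 'myzset');
```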
"I love Mongo's HA story. Out of the box I can build a 3-node Mongo cluster with a full replica set. I can add nodes, I can fail over, without losing data."

Wouldn't it be nice if we could say the same thing about Postgres? But we can't.