At FOSS4G this year, I wanted to take a run at the decision process around open source, with particular reference to the decision to adopt PostGIS: what do managers need to know before they can get comfortable with the idea of making the move?
# import to vertica
zcat data.sql | pv -s 16986105538 -p -t -r | vsql
0:13:56 [4.22MB/s] [==============> ] 14%
I have yet to run PostgreSQL on GCE in production. I am still testing it but I have learned the following:
Either disk type can be provisioned as a raw device, allowing you to use Linux software RAID to build a RAID 10, which further increases speed and reliability. Think about that: four SSD-provisioned disks in a RAID 10...
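As a minimal sketch of what that could look like (the device names /dev/sdb through /dev/sde and the mount point are assumptions for illustration, not a production recipe):

# assumption: four raw SSD persistent disks attached as /dev/sdb .. /dev/sde
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/postgresql   # then initdb or move the data directory here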
The downside I see, outside of the general arguments against cloud services (shared tenancy, all your data sitting with Big Brother, lack of control over your resources, general distaste for $vendor, or whatever else we in our right minds can think up), is that GCE is currently limited to 16 virtual CPUs and 104GB of memory.
What does that mean? Well, it means that GCE is likely perfect for 99% of PostgreSQL workloads. By far the majority of PostgreSQL installations need less than 104GB of memory. Granted, we have customers running 256GB, 512GB and even more, but those are few and far between.
It also means that EC2 is no longer your only choice for dynamically provisioned cloud VMs for PostgreSQL. Give it a shot; the more competition in this space, the better.
It has been a little quiet on the U.S. front of late. Alas, the summer of 2014 has come and gone, and it is time to strap on the gaiters and get a little muddy. Although we have been relatively quiet, we have been doing some work. In 2013 the board appointed two new board members, Jonathan S. Katz and Jim Mlodgeski. We also affiliated with multiple PostgreSQL User Groups:
Thanks to the Bucardo team for responding to my previous post. My cascaded slave replication works as expected.
Today I noticed there is still something to do related to the delta and track tables.
Single table replication scenario:
Db-A/Tbl-T1 (master) => Db-B/Tbl-T2 (slave) => Db-C/Tbl-T3 (cascaded slave)
Every change on table T1 is replicated to T2, then from T2 to T3. After a while, VAC successfully cleans the delta and track tables on Db-A, but not on Db-B.
I detect 2 issues:
1. If the cascaded replication from T2 to T3 succeeds, the delta table on Db-B is not cleaned up by VAC.
2. If the cascaded replication from T2 to T3 fails before the VAC schedule runs, the delta table on Db-B is cleaned up by VAC anyway, and the changes that should cascade from T2 to T3 are lost.
I fixed it by modifying the SQL inside bucardo.bucardo_purge_delta(text, text).
I need advice from the Bucardo team.
Postgres 9.5 will come with an additional logging option, making it possible to log replication commands received by a node. It has been introduced by this commit:
commit: 4ad2a548050fdde07fed93e6c60a4d0a7eba0622
author: Fujii Masao <email@example.com>
date: Sat, 13 Sep 2014 02:55:45 +0900
Add GUC to enable logging of replication commands.

Previously replication commands like IDENTIFY_COMMAND were not logged
even when log_statements is set to all. Some users who want to audit
all types of statements were not satisfied with this situation. To
address the problem, this commit adds new GUC log_replication_commands.
If it's enabled, all replication commands are logged in the server log.

There are many ways to allow us to enable that logging. For example,
we can extend log_statement so that replication commands are logged
when it's set to all. But per discussion in the community, we reached
the consensus to add separate GUC for that.

Reviewed by Ian Barwick, Robert Haas and Heikki Linnakangas.
The new parameter is called log_replication_commands and needs to be set in postgresql.conf. The default is off, so as not to surprise existing users with this new log output after an upgrade to 9.5 or newer. Replication commands received by a node were actually already logged at DEBUG1 level by the server. A last thing to note is that if log_replication_commands is enabled, all the commands are printed at LOG level and not DEBUG1, which is kept for backward-compatibility purposes.
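Enabling it is a one-line change (the location of postgresql.conf depends on your installation):

# postgresql.conf (9.5 and newer)
log_replication_commands = on

A configuration reload should be enough for it to take effect.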
Now, a server enabling this logging mode...
$ psql -At -c 'show log_replication_commands'
on
... is able to show replication commands at LOG level. Here, for example, is the set of commands sent by a standby starting up:
LOG:  received replication command: IDENTIFY_SYSTEM
LOG:  received replication command: START_REPLICATION 0/3000000 TIMELINE 1
This will certainly help utilities and users auditing replication, so I am looking forward to seeing log parsing tools like pgbadger produce some nice output using this.
A while ago I wrote about compiling PostgreSQL extensions under Visual Studio – without having to recompile the whole PostgreSQL source tree.
I just finished the pg_sysdatetime extension, which is mainly for Windows but also supports compilation with PGXS on *nix. It’s small enough that it serves as a useful example of how to support Windows compilation in your extension, so it’s something I think is worth sharing with the community.
The actual Visual Studio project creation process took about twenty minutes, and would’ve taken less if I hadn’t been working remotely over Remote Desktop on an AWS EC2 instance. Most of the time was taken by the simple but fiddly and annoying process of adding the include paths and library path for the x86 and x64 configurations. That’s necessary because MSVC can’t just get them from pg_config and doesn’t seem to have user-defined project variables to let you specify a $(PGINSTALLDIR) in one place.
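For comparison, on a platform where pg_config is available these are the sorts of paths that get picked up automatically, and which have to be copied by hand into the MSVC project settings (the exact set you need depends on the extension):

pg_config --includedir          # client-side headers
pg_config --includedir-server   # server-side headers
pg_config --libdir              # library path for linking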
Working on Windows isn’t always fun – but it’s not as hard as it’s often made out to be either. If you maintain an extension but haven’t added Windows support it might be easier than you expect to do so.
Packaging it for x86 and x64 versions of each major PostgreSQL release, on the other hand… well, let’s just say we could still use PGXS support for Windows with a “make installer” target.
We had about 50 folks attend PDXPUGDay 2014 last week, between DjangoCon and FOSS4G. A lot of folks were already in town for one of the other confs, but several folks also day-tripped from SeaPUG! Thanks for coming on down.
Thanks again to our speakers:
(Plus our lightning talk speakers: Josh B, Mark W, and Basil!)
And of course, PSU for hosting us.
Videos are linked from the wiki.
If you weren't able to make it to FOSS4G 2014 this year, you can still experience the event live. All the tracks are being streamed live, and the reception is pretty good: https://2014.foss4g.org/live/. Lots of GIS users are using PostGIS and PostgreSQL. People seem to love Node.JS too.
After hearing enough about Node.JS from all these people, and this guy (Bill Dollins), I decided to try this out for myself.
A co-worker of mine did a blog post last year that I’ve found incredibly useful when assisting clients with getting shared_buffers tuned accurately.
You can follow his queries there for using pg_buffercache to find out how your shared_buffers are actually being used. But I had an incident recently that I thought would be interesting to share, one that shows how shared_buffers may not need to be set nearly as high as you believe it should be. Or it can equally show you that you definitely need to increase it. Object names have been sanitized to protect the innocent.
To set the stage, the total database size is roughly 260GB and the use case is high data ingestion with some reporting done on just the most recent data at the time. shared_buffers is set to 8GB. The other thing to note is that this is the only database in the cluster. pg_buffercache is installed on a per-database basis, so you'll have to install it in each database in the cluster and do some additional totalling to figure out your optimal setting in the end.
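For reference, installing it is a single statement per database (assuming the contrib packages are present on the server):

-- run in each database of the cluster
CREATE EXTENSION IF NOT EXISTS pg_buffercache;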
database=# SELECT c.relname
             , pg_size_pretty(count(*) * 8192) as buffered
             , round(100.0 * count(*) / (SELECT setting FROM pg_settings WHERE name='shared_buffers')::integer,1) AS buffers_percent
             , round(100.0 * count(*) * 8192 / pg_relation_size(c.oid),1) AS percent_of_relation
           FROM pg_class c
           INNER JOIN pg_buffercache b ON b.relfilenode = c.relfilenode
           INNER JOIN pg_database d ON (b.reldatabase = d.oid AND d.datname = current_database())
           GROUP BY c.oid, c.relname
           ORDER BY 3 DESC
           LIMIT 10;

               relname               | buffered | buffers_percent | percent_of_relation
-------------------------------------+----------+-----------------+---------------------
 table1                              | 7479 MB  |            91.3 |                 9.3
 table2                              | 362 MB   |             4.4 |               100.0
 table3                              | 311 MB   |             3.8 |                 0.8
 table4
After my Btree bloat estimation query, I found some time to work on a new query for tables. The goal here is still to have a better bloat estimation using dedicated queries for each kind of object.
Compared to the well-known bloat query, this query pays attention to:
You’ll find the queries here:
I created the file sql/bloat_tables.sql with the version of the query for 9.0 and later. I edited the query to add the bloat reported by pgstattuple (free_percent + dead_tuple_percent) so I could compare both results, and added the following filters:

-- remove Non Applicable tables
NOT is_na
-- remove tables with real bloat < 1 block
AND tblpages*((pst).free_percent + (pst).dead_tuple_percent)::float4/100 >= 1
-- filter on table name using the parameter :tblname
AND tblname LIKE :'tblname'
Here is the result on a fresh pagila database:
postgres@pagila=# \set tblname %
postgres@pagila=# \i sql/bloat_tables.sql
 current_database | schemaname |    tblname     | real_size | bloat_size | tblpages | is_na |   bloat_ratio    | real_frag
------------------+------------+----------------+-----------+------------+----------+-------+------------------+-----------
 pagila           | pg_catalog | pg_description |    253952 |       8192 |       31 | f     |  3.2258064516129 |      3.34
 pagila           | public     | city           |     40960 |       8192 |        5 | f     |               20 |     20.01
 pagila           | public     | customer       |     73728 |       8192 |        9 | f     | 11.1111111111111 |     11.47
 pagila           | public     | film           |    450560 |       8192 |       55 | f     | 1.81818181818182 |      3.26
 pagila           | public     | rental         |   1228800 |     131072 |      150 | f
The 2.1.4 release of PostGIS is now available.
The PostGIS development team is happy to release a patch for PostGIS 2.1, the 2.1.4 release. As befits a patch release, the focus is on bugs, breakages, and performance issues.
As mentioned in my earlier blog, I'm visiting several events in the US and Canada in October and November. The first of these, the talk about WebRTC in CRM at xTupleCon, has moved from the previously advertised timeslot to Wednesday, 15 October at 14:15.
This will be a hands-on event for developers and other IT professionals, especially those in web development, network administration and IP telephony. Please bring laptops and mobile devices with the latest versions of both Firefox and Chrome to experience WebRTC.
If you do want to attend xTupleCon itself, please contact xTuple directly through this form for details about the promotional tickets for free software developers.
is_na to filter out indexes for which we cannot estimate the bloat with “accuracy” (currently, only indexes referencing fields using the “name” type).
While working on table bloat (I will blog about that very soon), I found a large deviation in the statistics for array types. I’m not sure how to handle these animals’ headers correctly yet.
Cheers and happy monitoring!
The conference was really a mini-conference but it was great. It was held in the exact same room that PostgreSQL Conference West was held all the way back in 2007. It is hard to believe that was so long ago. I will say it was absolutely awesome that PDX still has the exact same vibe and presentation! (Read: I got to wear shorts and a t-shirt).
Some items of note: somebody was perverse enough to write a FUSE driver for PostgreSQL, and it is even bi-directional. This means that PostgreSQL gets mounted as a filesystem and you can even use Joe (or yes, Vim) to edit values, and it saves them back to the table.
Not nearly enough of the audience was aware of PGXN. This was a shock to me and illustrates a need for better documentation and visibility through .Org.
The success of this PgDay continues to illustrate that other PUGs should be looking at doing the same, perhaps annually!
Thanks again Gab and Mark for entrusting me with introducing your conference!
When: 7-9pm Thu Sep 18, 2014
Who: Jay Riddle
What: Using Postgresql to enable Google like Search
Jay’s been experimenting with Pg’s full text search capabilities. At our next meeting, he’ll cover the following:
* Brief intro on why Google-like search capabilities are fun.
* Introduce PostgreSQL's full text search abilities. Discuss at a high level why a full text index may be a bit heavier than a normal index.
* Look at possible solutions for use cases where the data you want to index is spread across multiple columns and multiple tables (see the sketch below for one common approach).
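As a taste of the kind of thing that may come up, here is a minimal, hypothetical sketch of full text search across multiple columns; the table and column names are made up for illustration:

-- combine several columns into one tsvector and index it with GIN
CREATE INDEX posts_fts_idx ON posts
    USING gin (to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,'')));

-- queries must use the same expression so the index can be used
SELECT id, title
FROM posts
WHERE to_tsvector('english', coalesce(title,'') || ' ' || coalesce(body,''))
      @@ plainto_tsquery('english', 'postgres search');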
Jay Riddle is a database administrator at Viewpoint software.
Our meeting will be held at Iovation, on the 32nd floor of the US Bancorp Tower at 111 SW 5th (5th & Oak). It’s right on the Green & Yellow Max lines. Underground bike parking is available in the parking garage; outdoors all around the block in the usual spots. No bikes in the office, sorry!
Elevators open at 6:45 and building security closes access to the floor at 7:30.
See you there!
This post takes a look at Btrfs and its transparent compression mount option for storing data files, in the context of a PostgreSQL tpc-b benchmark.
The results suggest a limited but real performance benefit to running PostgreSQL with data tables on Btrfs rather than Ext4. A performance gain is realized when table and index sizes exceed the memory available for disk cache and the system is under enough load that checkpoints constrain the disk reads needed for continued transaction processing. The benchmarks showed an approximately 2X gain in transactions per second for the system under test.
A hypothesis and evidence are presented which may explain the performance differences. One possibility is that Btrfs’ fsync implementation is more selective than Ext4 mounted with data=ordered. Another possibility is that write entanglement in Ext4 exacerbates resource contention during checkpoints, starving required disk reads. And it appears that transparent compression reduces the overall read and write load, reducing resource contention during checkpoints. Alternative theories are actively solicited.
Of course computational systems are dynamic systems and meaningful benchmark results are hard. Please accept these results as meaningful only within the specific context presented. There could easily be a bifurcation point for any old configuration parameter — life will find a way to ruin your benchmark.
The tests were run on my desktop computer. The pgbench databases existed in a tablespace on a cheap Western Digital 3TB network drive from Fry’s. The underlying filesystem for the tablespace changed between tests while all other configuration options remained the same. WAL and other disk usage went to a generic Intel SSD. The computer has 32GB of RAM, and shared_buffers was set to 500MB.
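For readers who want to try something similar, this is roughly how such a setup can be created; the device path, mount point, scale factor and client counts below are illustrative assumptions, not the actual benchmark parameters:

# mount the data filesystem with transparent compression (Btrfs case)
mount -o compress=lzo /dev/sdb1 /srv/pg_tblspc
chown postgres:postgres /srv/pg_tblspc
psql -c "CREATE TABLESPACE bench_space LOCATION '/srv/pg_tblspc'"
psql -c "CREATE DATABASE pgbench_db TABLESPACE bench_space"

# initialize and run a tpc-b style pgbench workload
pgbench -i -s 1000 pgbench_db
pgbench -c 16 -j 4 -T 600 pgbench_db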
- Btrfs v3.12
For more details, please find links below pointing to the hardware and configuration specifics.