We're still working on the program and sponsors for PyPgDay, but I
thought I'd let you know about the fun part of it: Salesforce.com is
sponsoring a post-PyPgDay party!
The party will be at Fault Line Brewing, which means either driving from the convention center (it's a couple of miles away) or taking the bus, which Salesforce.com is also sponsoring. You'll need a PyPgDay badge or t-shirt to get in, or a PyCon badge. PyPgDay attendees will have priority if we run out of room, but do ask!
Come party with us.
As always, details about PyPgDay are on the wiki page.
Tuesday, January 22, 2013
Thursday, January 17, 2013
PostgreSQL 9.3: Current Feature Status
So we're finally beginning the last round of playoffs for 9.3 features, otherwise known as "CommitFest 4". Here's where we are on the various features which have been discussed, proposed, submitted, or thought about for 9.3. The "likely/uncertain" designation is my own judgment based on the activity I've seen on each feature so far, and may not reflect the project consensus (although it does largely reflect the "needs review/ready for committer" split on those patches).
I've also probably missed stuff, mostly by failing to recognize some patches as significant when they actually are.
As you can see, what's already committed would make a respectable release for any other DBMS. But here at PostgreSQL, we don't believe that there's any such thing as "enough features". If your database can't process 16TB of data with six nines of uptime, parallelize all your application logic, make breakfast, and drive the kids to school without waking you up, we still have work to do!
Already Committed:
- Use POSIX shared memory: no more SHMMAX
- LATERAL subqueries
- Streaming-only Cascading and Remastering
- extend large object access to 4TB (was 2GB)
- Improve pg_upgrade performance
- COPY FREEZE mode
- Automatically updatable views (quick example below this list)
- Create your own background workers
- configuration file directories
- additional ALTER, IF NOT EXISTS objects
- pg_basebackup generates replica configuration
- pg_terminate_backend your own queries
- SP-GiST indexing for Range Types
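To give a taste of one item from that list: in 9.3, simple views become updatable automatically, with no rules or INSTEAD OF triggers required. A minimal sketch (the table and view here are my own toy example):

    CREATE TABLE accounts ( id int PRIMARY KEY, balance numeric NOT NULL );
    CREATE VIEW active_accounts AS
        SELECT id, balance FROM accounts WHERE balance > 0;
    -- in 9.3 this UPDATE just works; before, it required a rule or trigger:
    UPDATE active_accounts SET balance = balance - 10.00 WHERE id = 1;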
Pending CF4, Looking Likely:
- Writeable Foreign Tables
- PostgreSQL database federation (pgsql_fdw)
- LOCK TIMEOUT
- Event Triggers
- pg_retainxlog
- Improved Foreign Key locking (FOR KEY SHARE)
- Recursive VIEWs (proposed syntax sketched below this list)
- Improved performance for SP-GiST and Range Types
- Merging recovery.conf into postgresql.conf
- pg_ping utility
- Additional JSON features and built-in functions
- Updating postgresql.conf from SQL command-line
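To pick on one of those: the Recursive VIEWs patch implements the SQL-standard shorthand for defining a view over a recursive query. If it lands as submitted, something like this toy example of mine should simply expand into the equivalent WITH RECURSIVE view definition:

    CREATE RECURSIVE VIEW first_ten (n) AS
        VALUES (1)
        UNION ALL
        SELECT n + 1 FROM first_ten WHERE n < 10;
    -- shorthand for: CREATE VIEW first_ten AS
    --   WITH RECURSIVE first_ten (n) AS (...) SELECT n FROM first_ten;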
In CF4, Uncertain:
- Logical Streaming Replication, and its many dependent patches.
- Data page checksums
- Auto-updated Materialized Views
- Reducing Hint Bit I/O
- Storing counts, other data in GIN indexes
- Row-Level Security and dependent SEPgSQL features.
- xlogreader
- use pg_receivexlog for promotion
- REINDEX CONCURRENTLY
- Parallel pg_dump
Already Bounced to 9.4:
- pg_dump file-per-object format ("split")
- Trim Trailing NULLs optimization
- Automatic audit logging ("timing events")
- TABLESAMPLE
- pg_stats_lwlocks view
- array foreign keys
Never Got Worked On in 9.3:
- Query parallelization
- UPSERT/MERGE
- Merge joins for Ranges
- Global temp tables
- Temp tables on Replicas
- Reordering database columns
- Spatial JOIN
- special JSON GiST Indexes
Regardless of what does or doesn't make it in at this point, 9.3 is shaping up to be a great release.
Monday, January 14, 2013
PostgreSQL Down Under
It's my first trip to Oz! I will shortly be visiting Canberra and Melbourne and making a number of presentations on PostgreSQL. If you're an Australian PostgreSQL user, come say hello!
First, I'll be speaking twice at LinuxConf.AU in Canberra: for the SysAdmins Miniconf, I'll be doing "How To Crash Your PostgreSQL Server" on January 28th. Then on Friday, February 1, I'll be doing "PostgreSQL 9.2: Full Throttle Database" again.
After LCA, I will fly to Melbourne, where Experian/Hitwise will be hosting the first-ever Melbourne pgDay on February 4. I'll be co-presenting there with Italian PostgreSQL hacker Gabriele Bartolini, and MelPUG leader Jason Golden. We haven't decided on talks yet for this event.
If you've never met me or any member of the Core Team, or you have questions about PostgreSQL, or you want to get involved in the community, come to one or both of those events! If you're going to be in Canberra in time for Australia Day and want to bum around, I get there on the 25th. I could also use some recommendations on cloud hosts with Australian data centers I can easily use for doing demos.
(Yes, I realize my blog has been nothing but events and conferences lately. It's that season. Your technical content will return eventually.)
Friday, January 11, 2013
Registration now open for PyPgDay
Registration is open now for the PyPgDay. Register early and register often!
We are also still looking for speakers and sponsors.
In addition to giving you access to a solid day of PostgreSQL, PostGIS and Python development content, your registration fee gets you a cool t-shirt custom-designed for the PyPgDay and priority access to the afterparty. Early registration rates are in effect through February 28. We will have a full schedule for the PyPgDay up sometime in mid-February.
Yes, we are requiring a paid registration for this event. One reason for that is that PyPgDay is not free for SFPUG: we need to pay for room rental, catering, A/V, insurance, staffing and other expenses. More importantly, we've found that free registrations are not reliable, and we expect that the event may sell out. Event volunteers and speakers, of course, may attend the PyPgDay for free.
Net proceeds from the event will be split between PostgreSQL.US (a 501(c)(3) nonprofit supporting the PostgreSQL community) and the SFPUG Meetings Fund.
Also, before anyone asks: No, PyCon is not underwriting the cost of the PyPgDay. Frankly, we didn't even ask them to: they are non-profit, same as us, and they have their own project to support.
Thursday, January 10, 2013
SFPUG Live: PostgreSQL and Distributed Services
For next Tuesday's San Francisco PostgreSQL User Group, we will have CitusDB talking about distributed services. As usual, this will be live broadcast on streaming video. Broadcast should start around 7:15 PM Pacific.
CitusDB is a distributed analytic database built on top of PostgreSQL that enables queries to run in real-time against very large datasets. A quick high-level overview of the key features is available on the CitusDB site.
Join CitusDB staff in discussing their experiences writing a foreign data wrapper for MongoDB, as well as some of the broader issues that come into play when writing FDWs that interact with distributed services.
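If you haven't played with FDWs yet, the user-facing side is just a bit of DDL. A rough sketch of what querying MongoDB through a wrapper might look like (the extension name and options below are assumptions for illustration; check the wrapper's own docs for the real ones):

    CREATE EXTENSION mongo_fdw;  -- hypothetical wrapper name
    CREATE SERVER mongo_server
        FOREIGN DATA WRAPPER mongo_fdw
        OPTIONS (address '127.0.0.1', port '27017');
    CREATE FOREIGN TABLE events (event_id text, payload text)
        SERVER mongo_server
        OPTIONS (database 'analytics', collection 'events');
    -- once defined, the foreign table queries like any local table:
    SELECT count(*) FROM events;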
More information about the event, and RSVPs on the SFPUG Meetup Page.
PyPgDay: Call For Sponsors
Does your company do Python and PostgreSQL or PostGIS? Do you want to make a good impression on the PostgreSQL community? Were you planning on attending PyCon anyway?
If so, you may want to sponsor our first-ever PyPgDay on March 13th. Sponsorships are due by February 28th. We expect 90 to 140 attendees at this event, so it's decent publicity for a low, low price. Net proceeds go to benefit the PostgreSQL community via PostgreSQL.US and the SFPUG Meetings Fund.
More information on the PyPgDay Wiki Page.
Of course, we want you to speak too! Talk proposals are due by January 20th.
Saturday, January 5, 2013
Cascading Replication and Cycles
Cascading replication is a 9.2 feature which allows you to replicate through a chain of replicas. The master replicates to replica1, which replicates to replica2, which replicates to replica3, etc. This has some tremendous advantages for large high-availability setups, including rapid failover, limiting replication streams over the WAN, and knowing deterministically which replica is furthest ahead.
In PostgreSQL 9.2, cascading replication really requires file-based WAL archiving to work. As of a recent commit, PostgreSQL 9.3 will not require that: you can do cascading replication chains of whatever length you want without any WAL archiving. Score one for vastly enhanced usability! Thanks, Heikki.
Also, score one for arming additional foot-guns (see below).
First, the good news. Setting up a chain of replicas has now become fairly simple even for non-DBAs. Here are the basic steps (a condensed sketch follows the list):
1. Edit pg_hba.conf to enable replication connections.
2. Edit postgresql.conf to turn on the various replication options.
3. Restart the master.
4. Set up passwordless access to the master from the first replica.
5. Use pg_basebackup to snapshot the master from the replica.
6. Create a recovery.conf pointing at the master.
7. Start the replica.
8. Run pg_basebackup on the second replica.
9. Edit recovery.conf to point at the first replica.
10. Start the second replica.
11. Repeat steps 8-10 for as many replicas as you want to chain.
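Condensed into commands and config snippets, it looks something like this (hostnames, the replication user, and the settings values are my own illustrative choices; this is a sketch, not a turnkey script):

    # on the master: allow replication connections in pg_hba.conf, e.g.:
    #   host  replication  repuser  10.0.0.0/24  md5
    # and enable streaming in postgresql.conf:
    #   wal_level = hot_standby
    #   max_wal_senders = 5
    #   hot_standby = on
    pg_ctl -D $PGDATA restart

    # on replica1 (assumes a .pgpass entry for repuser):
    pg_basebackup -h master.example.com -U repuser -D $PGDATA -X stream
    echo "standby_mode = 'on'" > $PGDATA/recovery.conf
    echo "primary_conninfo = 'host=master.example.com user=repuser'" >> $PGDATA/recovery.conf
    pg_ctl -D $PGDATA start

    # on replica2: identical, except base-backup from, and point at, replica1:
    pg_basebackup -h replica1.example.com -U repuser -D $PGDATA -X stream
    echo "standby_mode = 'on'" > $PGDATA/recovery.conf
    echo "primary_conninfo = 'host=replica1.example.com user=repuser'" >> $PGDATA/recovery.conf
    pg_ctl -D $PGDATA start
    # ...and so on down the chain.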
Given this simplicity, I was able to set up an eight-server chain of replicas on the GoGrid Cloud in about an hour, half of which was waiting for GoGrid's rubber-stamping feature to copy the master once I had it configured (I really like this feature on GoGrid, for testing). In a low-traffic environment lag from master to replica7 was between 0.1 and 1.7 seconds, median 0.5 seconds. Not bad!
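For the curious, a quick-and-dirty way to eyeball lag at the tail of the chain (not necessarily how I measured it above) is to compare the last replay timestamp against the clock on the final replica:

    -- on the last replica; only meaningful while the master is committing:
    SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;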
Thom Brown one-upped me by setting up a chain of 120 replicas. Surprisingly, this worked fine, except that replication lag to the end of the chain was several minutes.
Of course, where there are powerful new features, there are powerful foot-guns. This foot-gun is known as "cycles".
Here's how it goes: you have a replication chain where m1 replicates to r1 (synchronously) and r2 (async). r2 replicates to r3, which replicates to r4, r5 and r6. This is complex, but not unreasonable in an HA multi-datacenter setup.
Then, m1 fails. You promote r1 to be the new master, and reconfigure r2 to replicate from it. However, due to out-of-date network documentation, you accidentally connect r2 to r6 instead of r1, which 9.3 cascading replication will happily let you do without a complaint. Now you have a single independent master, and five replicas which are working perfectly well except that they are receiving no new data at all, because you have a cycle. Oops!
Troubleshooting this situation, once it happens, will be annoying: you basically have to check pg_is_in_recovery() and pg_stat_replication on each server, looking for a master which has replicas. If you don't find one, a cycle is your problem; otherwise your problem is something else.
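A sketch of that hunt, looped over each server (the hostnames here are just placeholders):

    # a healthy chain has exactly one 'f' answer below -- the master:
    for h in m1 r1 r2 r3 r4 r5 r6; do
        echo -n "$h in_recovery: "
        psql -h $h -At -c "SELECT pg_is_in_recovery()"
    done
    # on whichever host answered 'f', confirm it actually has replicas attached:
    psql -h m1 -c "SELECT client_addr, state FROM pg_stat_replication"
    # if every host answers 't', congratulations: you have a cycle.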
However, it's not clear how PostgreSQL would detect a cycle; currently a replica only knows about its master and its own replicas, and nothing about masters or replicas more than one step away. We'd probably need to modify streaming replication to support a detection mechanism.
It's also not clear what we should do if a cycle is detected. Preventing a cycle entirely seems wrong, since one can imagine circumstances where a temporary cycle is a legitimate part of a transition between two different replication setups. Issuing a warning doesn't seem sufficient either, given that many replication chains are managed by devops software which ignores warnings; still, writing a warning to the logs on all replicas every five minutes would at least help with troubleshooting.
Anyway, there are no plans to address cycles as an issue for 9.3, except maybe some documentation. Please comment below if you have thoughts or ideas on this particular foot-gun.