Here are some things contributors have already adjusted:
- shared_buffers now goes up to 128MB automatically, depending on the available RAM reservation (we did this for 9.3)
- effective_cache_size is automatically set to 3x shared_buffers (might be 4x by the time we release); see the sketch after this list
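To make the arithmetic concrete, here's a minimal Python sketch of the tuning logic described above. It is illustrative only, not the actual initdb code: the candidate sizes, the probing approach, and the `autotune` helper are my own assumptions, and the multiplier is the 3x mentioned above (possibly 4x by release).

```python
# Illustrative sketch of the auto-tuning described above -- NOT the real
# initdb logic. Assumes shared_buffers is the largest candidate (capped
# at 128MB) that fits in the reservable shared memory, and that
# effective_cache_size is a fixed multiple of it.

CAP_MB = 128     # upper bound mentioned in the post
MULTIPLIER = 3   # 3x today, possibly 4x by release

def autotune(reservable_shared_mem_mb: float) -> dict:
    """Pick settings from how much shared memory the system lets us reserve."""
    # Try successively smaller shared_buffers values until one fits.
    for candidate in (128, 96, 64, 32, 16, 8):
        if candidate <= min(CAP_MB, reservable_shared_mem_mb):
            shared_buffers = candidate
            break
    else:
        shared_buffers = 8  # floor: just enough to start the server
    return {
        "shared_buffers": f"{shared_buffers}MB",
        "effective_cache_size": f"{MULTIPLIER * shared_buffers}MB",
    }

# e.g. a small VPS that can reserve about 48MB of shared memory:
print(autotune(48))  # {'shared_buffers': '32MB', 'effective_cache_size': '96MB'}
```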
However, one thing that came up during this discussion is that many people may be running PostgreSQL on virtual machines with less than 1GB of RAM available to PostgreSQL. I don't think many are, but we lack data. So I put up a survey to see what amounts of RAM our users really have on their smallest machines and VMs. Please fill out the survey so that we have reasonable input for development, and if you have comments, please leave them below.
Your survey may not be working correctly. I voted in the 4-7GB category, but my vote didn't show up there; there were only two results, one in 1-1.9GB and one in 8GB and up.
I think you're seeing the effect of server-side caching.
Confirmed. Mind you, caching is being wonky in this instance, but the web team is on it.
With SSDs now doing random lookups at rates of even 15,000/second, one can expect at least a thousand lookups per second even without large RAM buffers. But PostgreSQL's algorithms would have to be changed slightly to take advantage of that.
Actually, that's accounted for in the existing tunable variables. However, that does bring up an important point for any autotuning.
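For readers wondering which tunables the comment above is referring to, the usual candidates are random_page_cost and effective_io_concurrency, which already let the planner and executor model fast random I/O. A minimal sketch, assuming common community rules of thumb for SSDs (the specific values are illustrative, not project defaults or official recommendations):

```python
# Illustrative only: existing knobs that already account for fast random
# I/O, which is why SSDs don't require algorithm changes per se. The
# values below are common rules of thumb, not project defaults.
ssd_overrides = {
    "random_page_cost": 1.1,          # random reads nearly as cheap as sequential on SSD
    "effective_io_concurrency": 200,  # SSDs can service many concurrent requests
}

# Emit a postgresql.conf fragment with these overrides:
for guc, value in ssd_overrides.items():
    print(f"{guc} = {value}")
```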
DeleteI don't know the user base, but the smallest Amazon EC2 server (the "micro" instances) have 615MB of RAM, and the smallest server offered by RackSpace have 512MB of RAM.
Check out lowendbox.com and the like for the truly low end. The smallest I've ever noticed is 64MB, but 128MB is usually the lowest. However, the nature of the workload likely changes at that scale.
I think effective_cache_size is 4x shared_buffers, but might be 3x by the time 9.4 is released. I think you stated this backwards above.
As if anyone can keep track of that thread...