
Postgres Performance on AWS EBS

AWS EBS is network-attached storage … in other words, S L O W, compared to local SSD for Postgres database use.

I’ve been seeing average disk latency of 0.55 – 0.80 milliseconds per block, and IOPS and bandwidth are throttled by both the instance and the volume.
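To see numbers like these yourself, fio or ioping are the right tools; as a quick self-contained illustration, here is a sketch in Python that samples single-block read latencies against a file. Note that without O_DIRECT (a Linux-only flag), reads may be served from the page cache, so run it against a cold file larger than RAM to see true device latency. The function name and parameters are mine, not from any benchmark suite:

```python
import os
import random
import statistics
import tempfile
import time

def sample_read_latency_ms(path: str, block_size: int = 8192, samples: int = 100) -> float:
    """Median latency of single-block pread() calls against `path`, in ms.

    block_size defaults to 8 kB, the Postgres page size.
    """
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        latencies = []
        for _ in range(samples):
            # Read one block at a random offset, timing just the read.
            offset = random.randrange(0, max(size - block_size, 1))
            start = time.perf_counter()
            os.pread(fd, block_size, offset)
            latencies.append((time.perf_counter() - start) * 1000)
        return statistics.median(latencies)
    finally:
        os.close(fd)
```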

For m4.2xlarge, only 10,000 IOPS and 100 Mbps are available, regardless of how many EBS volumes you attach or how beefy they are – not impressive for SSD at all:

Figure 1: m4.2xlarge throttling IO from 4G EBS volume (10,000 IOPS, 250 MB/s)

Figure 2: 4G EBS gp2 unencrypted volume showing minimum read latency of 0.55 ms
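To put the caps in perspective: with Postgres reading one 8 kB block per random IO, the IOPS limit, not the throughput limit, is what binds. A back-of-the-envelope check (function name is mine, for illustration):

```python
# Back-of-the-envelope: best-case random-read bandwidth under an IOPS cap,
# assuming every IO is a single Postgres block.
PG_BLOCK_SIZE = 8192  # Postgres default block size, in bytes

def random_read_bandwidth_mb_s(iops: int, block_size: int = PG_BLOCK_SIZE) -> float:
    """MB/s achievable when the IOPS limit is the bottleneck."""
    return iops * block_size / 1_000_000

print(random_read_bandwidth_mb_s(10_000))  # ~82 MB/s, well under a 250 MB/s throughput cap
```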

In the above case, one thing you can do is switch from m4.2xlarge to m5.2xlarge, which is cheaper and has double the IO performance.

But if you’re stuck using Postgres with EBS for large databases (bigger than RAM), there are workarounds that exploit the fact that shared_buffers can keep your indexes cached in RAM:

  1. carefully configure shared_buffers to be as large as possible, and max_connections as small as possible
  2. run EXPLAIN to confirm indexes are used (no Seq Scan nodes)
  3. use covering indexes so queries can be answered from the index cache alone
  4. rewrite queries to do index scans from RAM instead of table scans across the network from EBS (e.g. HAVING => INTERSECT and EXCEPT, WHERE-splitting, etc.)
  5. use Redis to cache the results of repeated queries.
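To illustrate the covering-index idea in a self-contained way, here is a sketch using the SQLite library bundled with Python (not Postgres – this is just the cheapest way to show the mechanism). SQLite's query planner reports "COVERING INDEX" when a query is satisfied entirely from an index; in Postgres you would instead look for "Index Only Scan" in EXPLAIN output, and can add non-key payload columns with CREATE INDEX ... INCLUDE. Table and index names are made up for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, name TEXT)")
# The index holds both the filter column (email) and the selected column
# (name), so the query below never has to touch the table itself.
conn.execute("CREATE INDEX users_email_name ON users (email, name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE email = ?", ("a@b.c",)
).fetchall()
print(plan)  # the plan detail should mention COVERING INDEX users_email_name
```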

It would be nice if Postgres had a setting to indicate network-attached storage as a hint to the optimizer.

Percona has some advice for tuning operating system parameters.
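For flavor, these are the kind of Linux virtual-memory knobs such guides typically discuss; the values below are illustrative placeholders, not Percona's specific recommendations, and should be tuned per workload:

```shell
# /etc/sysctl.d/99-postgres.conf -- illustrative values only
vm.swappiness = 1               # strongly prefer dropping cache over swapping Postgres memory
vm.dirty_background_ratio = 5   # start background writeback of dirty pages earlier
vm.dirty_ratio = 10             # cap dirty pages before writers are forced to block
```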

Keywords: cloud, architecture, PostgreSQL
