httperf is an easy-to-use but powerful GPLv2 command-line (CLI) stress and load testing tool for Linux.
Installing httperf
CentOS 6:
yum install httperf
CentOS 7:
wget http://ftp.tu-chemnitz.de/pub/linux/dag/redhat/el7/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
rpm -Uvh rpmforge-release-0.5.3-1.el7.rf.x86_64.rpm
yum install httperf
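Either way, a quick check confirms the binary is installed and on your path:
httperf --version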
Running httperf
- Always get permission from the site owner before doing load testing
- It’s important to calibrate your tool first. Send one request and check the response:
$ httperf --server www.example.com --uri /index.php --print-request --print-reply -d10
If you see non-200 HTTP responses, like the 301 example response below, then you need to ensure you have the correct --uri parameter:
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
httperf: maximum number of open descriptors = 1024
SH0:GET /index.php HTTP/1.1
SH0:User-Agent: httperf/0.9.0
SH0:Host: www.example.com
SH0:
SS0: header 83 content 0
RH0:HTTP/1.1 301 Moved Permanently
You can ignore the open files warning – it’s a bug in httperf. Just keep the load under 200 connections, or compile your own version from source.
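Before re-running httperf, one quick way to see where the 301 points is to ask for the headers with curl (assuming curl is installed; adjust the URL to your site):
curl -sI http://www.example.com/index.php | grep -i '^location'
Whatever path the Location header reports is what --uri should be set to (for a different-host redirect you would change --server as well).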
Now we’re ready to do concurrent testing:
$ httperf --server www.example.com --uri /index.php --num-conns 20 --num-calls 10 --rate 2 --timeout 5
httperf --timeout=5 --client=0/1 --server=www.example.com --port=80 --uri=/blog --rate=2 --send-buffer=4096 --recv-buffer=16384 --num-conns=20 --num-calls=10
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1
Total: connections 20 requests 200 replies 200 test-duration 10.675 s
Connection rate: 1.9 conn/s (533.8 ms/conn, <=4 concurrent connections)
Connection time [ms]: min 1175.2 avg 1266.2 max 1728.3 median 1179.5 stddev 179.3
Connection time [ms]: connect 63.4
Connection length [replies/conn]: 10.000
Request rate: 18.7 req/s (53.4 ms/req)
Request size [B]: 73.0
Reply rate [replies/s]: min 18.2 avg 19.1 max 20.0 stddev 1.3 (2 samples)
Reply time [ms]: response 120.3 transfer 0.0
Reply size [B]: header 238.0 content 0.0 footer 0.0 (total 238.0)
Reply status: 1xx=0 2xx=0 3xx=200 4xx=0 5xx=0
CPU time [s]: user 2.36 system 8.30 (user 22.1% system 77.7% total 99.9%)
Net I/O: 5.7 KB/s (0.0*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
Always check for non-zero error counts, and glance at the Reply status line too: the run above got only 3xx responses, so the --uri is still pointing at a redirect.
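If you script repeated runs, a small wrapper along these lines (a sketch, not an httperf feature) keeps the full output for later comparison while showing just the status and error lines:
# save the full report, print only the summary lines we care about
httperf --server www.example.com --uri /index.php --num-conns 20 --num-calls 10 --rate 2 --timeout 5 2>&1 | tee run-$(date +%s).log | grep -E '^(Reply status|Errors)'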
Going Pro
After you're comfortable using httperf, here's how to take it to the next level:
- use a dedicated physical machine, separate from your subject under test, to reduce intrusive latencies, and tail the server logs in separate terminal windows. Graph CPU and RAM consumption of the subject (see the sampling sketch after this list).
- build your own version of httperf with your preferred options. On CentOS 7:
git clone https://github.com/httperf/httperf.git
cd httperf   # read Readme.md
sudo yum install automake openssl-devel libtool
libtoolize --force
autoreconf -i
automake
./configure
make
sudo make install
- read the links below and configure open files, port range and TCP timeout (see the sysctl sketch after this list)
- do runs 3 times at different times of the day and/or seasons
- again, always check for non-zero error counts
- add load and stress testing to your server and application deployment checklists. There's always some kind of surprise just waiting to be discovered. :)
- test tools are one of those things where you really need the source code to get what you want
- running strace httperf ..., we see that httperf does polling with the select() system call. Hmm ...
[..]
select(4, [3], [], NULL, {0, 0}) = 0 (Timeout)
select(4, [3], [], NULL, {0, 0}) = 0 (Timeout)
select(4, [3], [], NULL, {0, 0}) = 0 (Timeout)
[..]
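For the CPU and RAM graphs mentioned in the first bullet, a minimal sampling sketch to run on the subject machine could look like the following (it assumes the web server processes are named httpd; substitute nginx or whatever you run, then feed the log to your plotting tool of choice):
# log timestamp, total CPU% and total resident memory (KB) of all httpd processes, once per second
while true; do
    ps -C httpd -o %cpu=,rss= | awk -v ts="$(date +%s)" '{cpu += $1; rss += $2} END {print ts, cpu, rss}'
    sleep 1
done > usage.log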
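For the open files, port range and TCP timeout settings on the load-generating box, something along these lines is a reasonable starting point (the exact values are assumptions; the links below cover the reasoning):
ulimit -n 65535                                            # raise the open files limit for the current shell
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # widen the ephemeral port range
sudo sysctl -w net.ipv4.tcp_fin_timeout=15                 # drop orphaned FIN-WAIT-2 sockets sooner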
akamaras.com: stress test your web server with httperf
stackoverflow.com: Changing the file descriptor size in httperf
easyengine.io: Increase "Open Files Limit"