I'm relatively new to Node.js (been playing with it for a while, but just starting to use it for serious stuff). I was experiencing some very odd benchmark behavior when testing locally, and decided to build the most basic possible HTTP server and benchmark it with Apache Benchmark (ab).
Here's my code:
var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write('Hello, world!');
  res.end();
}).listen(8888);
When I run ab like so:
ab -n 1000000 -c 100 'http://127.0.0.1:8888/'
I get the following results:
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
apr_socket_recv: Operation timed out (60)
Total of 16413 requests completed
I'm using Node.js version 0.10.25. I've also raised my ulimit to 8192 (the max) before running both my server and ab -- so file descriptor limits shouldn't be the problem.
Here's what I don't understand:
- Why is AB not able to make all 1 million requests? Why is it failing after only ~16000?
- Is this normal behavior? I was expecting to see a lot of throughput, and be able to finish the tests quickly.
Thanks!
Bonus: I've recorded a small screencast you can view here: http://recordit.co/9i0ja6GZqR This shows what I'm talking about.
UPDATE:
After seeing this post ("Why does a simple Thin server stop responding at 16500 requests when benchmarking?") and running:
sudo sysctl -w net.inet.tcp.msl=1000
I was able to successfully complete my ab command with the following results:
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests
Server Software:
Server Hostname: 127.0.0.1
Server Port: 8888
Document Path: /
Document Length: 13 bytes
Concurrency Level: 100
Time taken for tests: 338.545 seconds
Complete requests: 1000000
Failed requests: 0
Write errors: 0
Total transferred: 114000000 bytes
HTML transferred: 13000000 bytes
Requests per second: 2953.82 [#/sec] (mean)
Time per request: 33.854 [ms] (mean)
Time per request: 0.339 [ms] (mean, across all concurrent requests)
Transfer rate: 328.84 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   14  212.3      0   8363
Processing:     0   20  114.8     11   4205
Waiting:        0   20  114.2     11   4205
Total:          0   34  240.8     11   8368
Percentage of the requests served within a certain time (ms)
50% 11
66% 13
75% 14
80% 14
90% 17
95% 21
98% 32
99% 605
100% 8368 (longest request)
Unfortunately, I'm unable to go beyond 100 concurrent requests -- if I go up to 200, for instance, I get the following:
ab -n 1000000 -c 200 'http://127.0.0.1:8888/'
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
apr_socket_recv: Connection reset by peer (54)
It seems like I'm essentially capped at 100 concurrent requests.
The other thing I'm concerned about is the response time from the first ab command. It looks like the mean response time was about 33.854 ms, which seems awfully high considering how little the server is doing.
So I guess my question is: what can I do to increase concurrency and decrease per-request processing time? I'm sure I'm doing something wrong here.
Thanks!