
I'm relatively new to Node.js (I've been playing with it for a while, but I'm just starting to use it for serious work). I was seeing some very odd benchmark behavior when testing locally, so I decided to build the most basic possible HTTP server and benchmark it with ApacheBench (ab).

Here's my code:

var http = require('http');

// Minimal server: answer every request with a short plain-text body.
http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write('Hello, world!');
  res.end();
}).listen(8888);

When I run ab like so:

ab -n 1000000 -c 100 'http://127.0.0.1:8888/'

I get the following results:

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
apr_socket_recv: Operation timed out (60)
Total of 16413 requests completed

I'm using Node.js version 0.10.25. I've also raised my ulimit to 8192 (the max) before running both my server and ab, so file descriptor limits can't be what's being hit.

Here's what I don't understand:

  • Why is ab not able to make all 1 million requests? Why does it fail after only ~16,000?
  • Is this normal behavior? I was expecting to see a lot of throughput, and be able to finish the tests quickly.

Thanks!

Bonus: I've recorded a small screencast you can view here: http://recordit.co/9i0ja6GZqR This shows what I'm talking about.

UPDATE:

After seeing this post: "Why does a simple Thin server stop responding at 16500 requests when benchmarking?" and running:

sudo sysctl -w net.inet.tcp.msl=1000

I was able to successfully complete my ab run. As far as I can tell, lowering the TCP maximum segment lifetime makes closed connections leave the TIME_WAIT state sooner, so ab stops running out of local ephemeral ports. Here are the results:

This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
Completed 100000 requests
Completed 200000 requests
Completed 300000 requests
Completed 400000 requests
Completed 500000 requests
Completed 600000 requests
Completed 700000 requests
Completed 800000 requests
Completed 900000 requests
Completed 1000000 requests
Finished 1000000 requests


Server Software:
Server Hostname:        127.0.0.1
Server Port:            8888

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      100
Time taken for tests:   338.545 seconds
Complete requests:      1000000
Failed requests:        0
Write errors:           0
Total transferred:      114000000 bytes
HTML transferred:       13000000 bytes
Requests per second:    2953.82 [#/sec] (mean)
Time per request:       33.854 [ms] (mean)
Time per request:       0.339 [ms] (mean, across all concurrent requests)
Transfer rate:          328.84 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   14 212.3      0    8363
Processing:     0   20 114.8     11    4205
Waiting:        0   20 114.2     11    4205
Total:          0   34 240.8     11    8368

Percentage of the requests served within a certain time (ms)
  50%     11
  66%     13
  75%     14
  80%     14
  90%     17
  95%     21
  98%     32
  99%    605
 100%   8368 (longest request)

Unfortunately, I'm unable to go beyond 100 concurrent requests -- if I go up to 200, for instance, I get the following:

ab -n 1000000 -c 200 'http://127.0.0.1:8888/'
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)
apr_socket_recv: Connection reset by peer (54)

It seems like I'm essentially capped at 100 concurrent requests.
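
One thing I'm planning to try (just a guess on my part, I don't know whether this is the real bottleneck) is passing a larger backlog to listen(), in case bursts of new connections are overflowing the default listen queue (on OS X the effective backlog is also capped by the kern.ipc.somaxconn sysctl, which I believe defaults to 128):

var http = require('http');

http.createServer(function(req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.write('Hello, world!');
  res.end();
}).listen(8888, '127.0.0.1', 1024); // third argument is the listen backlog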

The other thing I'm concerned about is the response time from the first ab run. It looks like the mean response time was about 33.854 ms, which seems awfully high considering what the server is doing.
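
If I'm reading the ab output correctly, that 33.854 ms figure is just the concurrency level divided by the throughput, so it mostly measures requests queueing behind each other at concurrency 100 rather than the server's per-request cost:

Time per request (mean) = concurrency / requests per second
                        = 100 / 2953.82 ≈ 0.0339 s ≈ 33.9 ms

The "across all concurrent requests" figure of 0.339 ms is the total test time divided by the request count (338.545 s / 1,000,000), which is closer to what the server itself spends on each request.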

So I guess my question is: what can I do to improve concurrent requests and decrease processing time per request? I'm sure I'm doing something wrong here.

Thanks!

  • can your computer make that many request at once to apache? try 10,000 concurrent from a hundred boxes to get a better feel for what node is capable of. i don't see how one box can request and respond to all that, and even if it could, the results would be meaningless. – dandavis May 29 '14 at 20:55
  • Do you have a comparison against nginx/apache/... ? To use all your cores, you could use the cluster module (http://nodejs.org/api/cluster.html). – toasted_flakes May 29 '14 at 20:56
  • You should first benchmark your server to know how long a response takes. With concurrency 100, you basically only have 10ms to answer a request, otherwise the event loop will get starved and things start piling up. Now, 10ms are actually a lot, but if the underlying network stack gets overwhelmed with requests, you might get there. Once things start piling up, there's no going back. Everything basically grinds to a halt. One thing you could try is running AB from a different machine. AB generates a lot of CPU usage which might contribute to the issue. – Tim May 29 '14 at 21:16
  • Updated my question with some edits, thanks! – rdegges May 29 '14 at 23:55
  • I found that `ab` on OS X is pretty much broken. [Here](https://gist.github.com/robertklep/4cbeaa31f96c9938c008) is the output of running `httperf` on my Mac running against your script. – robertklep May 31 '14 at 06:54

1 Answer


Have the benchmark target machine do nothing but run cluster or another multi-core Node solution (it's not useful or relevant to benchmark a single-process Node server).

Have multiple machines bombard the target over a local network; don't run ab or similar tools on the server itself.
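
A minimal cluster setup might look roughly like this (reusing the port and handler from the question; one worker process per CPU core, all sharing the same listening socket):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Master: fork one worker per core and replace any that die.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function(worker) {
    console.log('worker ' + worker.process.pid + ' died, forking a new one');
    cluster.fork();
  });
} else {
  // Workers: each runs its own event loop, sharing port 8888.
  http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello, world!');
  }).listen(8888);
}

Benchmarked that way, from separate client machines, the results reflect what the whole box can handle rather than a single event loop.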

Esailija