20 July 2013, 23:52

A friend of mine created a new website for Grubby, a Dutch professional gamer for WarCraft 3 and StarCraft 2. This PHP website is hosted on a quad-core VPS with 8 GB RAM. During busy moments, the server has a thousand simultaneous connections. During testing, the server had about 600 simultaneous connections, which caused a load of 2.7* on the VPS.

This load was expected, so the website was built to make use of Hiawatha's CGI caching feature. Each page generated by PHP is cached by Hiawatha when the PHP script allows it. The pages were only cached for a few seconds, but this caused the load to drop to 0.05, even with more than a thousand simultaneous connections!
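For readers who want to try this themselves, the mechanism is roughly as follows (a sketch based on the Hiawatha manual; verify the directive names against your version): the cache is enabled and sized in the main configuration, and a PHP script opts in per page by sending an X-Hiawatha-Cache header with a time-to-live in seconds.

```
# hiawatha.conf: reserve memory for the CGI cache
CacheSize = 10           # total cache size in megabytes
CacheMaxFilesize = 256   # largest cacheable CGI output, in kilobytes
```

The PHP side is then a single line at the top of the script, e.g. `header("X-Hiawatha-Cache: 2");` to cache the generated page for two seconds.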

So, if you have ever asked yourself whether Hiawatha is capable of serving a heavy-traffic website, the answer is definitely: yes!

*) For people who don't know what this number means: it's the number of CPUs required to handle the current load. So, a load of 1 on a single-core system means the machine is 100% busy, just like a load of 2 on a dual-core system.
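To make the footnote concrete: per-core utilization is simply the load average divided by the core count, which is how a load of 2.7 on a quad-core VPS works out to roughly two-thirds busy.

```python
# Per-core utilization = load average / number of CPU cores.
def utilization(load_avg: float, cores: int) -> float:
    return load_avg / cores

# Numbers from the post: a load of 2.7 on a quad-core VPS.
print(utilization(2.7, 4))  # 0.675 -> about 68% busy
print(utilization(1.0, 1))  # 1.0   -> a single core fully busy
print(utilization(2.0, 2))  # 1.0   -> a dual-core machine fully busy
```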

21 July 2013, 03:25
Aditya Jain
21 July 2013, 16:10
21 July 2013, 22:57
Nice job!
22 July 2013, 10:13
24 July 2013, 12:45
Who doubts it? I have always known that Hiawatha is an awesome web server.
Congrats Hugo!
Chris Wadge
25 July 2013, 00:07
I recently did a test with both simple static content and PHP. When benchmarking well-tuned Hiawatha and nginx, Hiawatha was able to achieve ~160,000,000 static hits/day without substantial latency, and nginx was able to achieve about 120,000,000 hits/day. In PHP land, the results were similar: Hiawatha + php-fpm 120,000,000, nginx 90,000,000 hits/day. Both sets of numbers were pretty impressive, but Hiawatha especially so, since the main focus is security rather than raw performance. For context, here are the test machine specs:

Xen PV domU (private cluster, not oversubscribed)
OS/Kernel: Debian 7, 3.2.0-4-amd64
CPU: AMD Opteron(tm) Processor 6376 (2 cores)
Memory: 512MB (DDR3 ~1600MHz)

This was performed with AB and httperf. If I get the time, I'll do a more formalized benchmark and publish the results.
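AB and httperf need a live target to hammer; for readers without either tool installed, the basic idea can be sketched in pure Python — spin up a throwaway local server and count requests per second. This is an illustrative toy (single-threaded client, loopback only), not a substitute for ab or httperf:

```python
import http.server
import threading
import time
import urllib.request

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, format, *args):  # silence per-request logging
        pass

# Throwaway local server on an OS-assigned port, serving the current directory.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

N = 200  # request count; real tools use far more, plus concurrency (ab's -c flag)
start = time.perf_counter()
for _ in range(N):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        resp.read()
elapsed = time.perf_counter() - start

print(f"{N} requests in {elapsed:.2f}s -> {N / elapsed:.0f} req/s")
server.shutdown()
```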
1 August 2013, 04:08
@Chris Wadge,

How do you tune the Hiawatha? Would you mind sharing with us?
Chris Wadge
1 August 2013, 08:12
@samiux, using the same general principles one would use to tune most webservers, really. My usual method is to first tune the OS (open file handles, max process caps, network stack), get rid of superfluous services (e.g. console-kit), tune and optimize the filesystem, etc. Then I calculate the min/max/average memory numbers for each thread after warming the CGI cache (xcache), and set the php-fpm pool size accordingly, rounded down to a multiple of the CPU count. Likewise on the front end, I tune Hiawatha based on its own memory footprint, taking into account the expected load for PHP, this time rounded up to the CPU count rather than down. During benchmarking you'll also have to turn off your anti-abuse measures for the test host's IP via BanlistMask.
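The pool-sizing step above can be expressed as a small calculation (a sketch of the heuristic as described, not Chris's actual procedure; the memory numbers below are hypothetical): divide the memory budget for PHP by the worst-case per-worker footprint, then round down to a multiple of the CPU count.

```python
def fpm_max_children(avail_mb: float, per_worker_mb: float, cpus: int) -> int:
    """php-fpm pm.max_children: memory budget / per-worker peak memory,
    rounded down to a multiple of the CPU count (minimum of one worker per CPU)."""
    raw = int(avail_mb // per_worker_mb)
    return max(cpus, (raw // cpus) * cpus)

# Hypothetical numbers: 384 MB left over for PHP, ~45 MB peak per worker, 2 CPUs.
print(fpm_max_children(384, 45, 2))  # 8 workers (384 // 45 = 8, already a multiple of 2)
```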

During benchmarking, monitor the CPU, memory, and IO load on the webserver. Admittedly, I wasn't actually able to saturate Hiawatha's CPUs in my quick and dirty benchmarks, because I saturated the test host's upstream pipe. This wasn't the case for nginx, which bottlenecked on CPU before the pipe was fully saturated.

Also, note that this is with a stock Debian kernel. I imagine if I took a few extra steps and rolled a custom kernel specifically for throughput, targeted cflags, etc... those numbers might be even higher.
1 August 2013, 11:19
@Chris Wadge,

Thanks for the information.

By the way, how do you measure the benchmark? Which tool?

Chris Wadge
1 August 2013, 12:26
I used AB and httperf for my initial baseline tests.
30 August 2013, 00:24
Did you consider using siege? I have a vague memory that it is better suited for performing load tests.
19 November 2013, 21:50
Most websites only contain static pages, which could have been made completely without PHP. So I think PHP is the real problem in this case...