Reverse Proxy POST problem

David Johnson
18 November 2013, 17:24
Hiawatha version: 9.3
Operating System: Debian 6.0.8

Hi Hugo,

I think I may have come across a problem with the Reverse Proxy function specific to 9.3.

My setup is Hiawatha acting as a reverse proxy to another instance of Hiawatha running in an OpenVZ container on the same system. With 9.2 everything works fine. After updating to 9.3, GET requests are fine, but POSTing form data where the response is a 302 redirect is very slow: maybe 5+ seconds for a request that on 9.2 is almost instant.

In the logs under 9.2 the redirect happens instantly; under 9.3 there is a 5 or 6 second delay between the POST request and the GET request generated by the redirect. It looks the same in the logs of both the proxy and the backend server.

The problem seems to lie in the frontend proxy: switch back to 9.2 and all is fine; bypass the proxy and talk directly to the backend server and everything is fine. This is the case whether the backend server is running 9.2 or 9.3. All the keep-alive and time-for-request settings are at their defaults on both the proxy and the backend server.

Something to do with the reverse proxy keep-alive code added in 9.3?

David Johnson
Hugo Leisink
18 November 2013, 17:29
I'm already aware of some issues with the reverse proxy in 9.3. I have a beta of 9.3.1 ready, which can be found here.
David Johnson
19 November 2013, 13:02
Hi Hugo,

The 9.3.1 beta does not fix the problem I'm having, although it has changed the results I get: now long output from complex PHP scripts doesn't work at all when passed through the reverse proxy function; all that gets back to the browser is the headers plus a bit of the first chunk. Here are abridged log entries from the backend and the reverse proxy for a single request; note that the proxy only sends a small part of the output:

Reverse Proxy: [19/Nov/2013:11:38:18 +0000] "GET /index.php HTTP/1.1" 200 879
Backend Server: [19/Nov/2013:11:38:18 +0000] "GET /index.php HTTP/1.1" 200 38384

Chrome also throws an error after the keep-alive time expires (screenshot omitted).

These are the headers received by the browser:

HTTP/1.1 200 OK
Date: Tue, 19 Nov 2013 11:38:18 GMT
Server: Hiawatha v9.3.1
Connection: keep-alive
Transfer-Encoding: chunked
X-Powered-By: PHP/5.3.27-1~dotdeb.0
Expires: Mon, 26 Jul 1997 05:00:00 GMT
Last-Modified: Tue, 19 Nov 2013 11:38:18 GMT
Cache-Control: no-store, no-cache, must-revalidate
Cache-Control: post-check=0, pre-check=0
Pragma: no-cache
Content-type: text/html

With 9.3 the problem only occurred when receiving back redirects from a POST submission.

I've gone back to 9.2 for now, which works fine with the same scripts.

Thanks, DJ
Hugo Leisink
19 November 2013, 14:03
Do you have a URL of your backend webserver for me? I'd like to reproduce the issue so I can investigate what goes wrong.
David Johnson
19 November 2013, 16:39
Hi Hugo,

The backend server is not on a public IP; it's an OpenVZ VM running on the same host as the reverse proxy, on an internal private network.

I'll see if I can configure something with port forwarding so you can access it. Will contact by private message with more info.

I have been able to reproduce symptoms of the same problem with the simplest of PHP scripts running on the backend (script omitted):

With 9.2 this works fine, with 9.3 the main html loads okay, but one or two of the additional calls of images take 5 or 6s to complete and on 9.3.1 one or two of the images fail to load altogether with just the headers transferred.

Hugo Leisink
19 November 2013, 16:47
What you can do is use port forwarding (easy with rinetd) and limit access to my server only (
19 November 2013, 18:32
Hi David.

If I were to guess, this might be due to a change in Hiawatha's handling of the Content-Length field (required to support persistent connections).
Content-Length is not present in your headers?

There is a theoretical chance that, if the Content-Length is missing and Hiawatha manages to read only the HTTP header from the socket, the HTTP body might be dropped.

(content_read == content_length == 0 in target.c:proxy_request(), which will cause it to skip the rest of the data instead of reading until the socket closes.)
David Johnson
20 November 2013, 10:37
Hi Hugo,

But the Transfer-Encoding: chunked header is there; doesn't this mean that there will be no Content-Length header in the initial headers, the length being set for each chunk? That's how I understand it, although I'm straying into an area I'm not familiar with, so I'm only posing the question. This isn't something I've specifically set up; it's just how php5-fpm and Hiawatha work with the default config.

Hugo Leisink
21 November 2013, 14:54
Can you please redownload the 9.3.1 beta package and try again?

About your previous post: a CGI application can choose to specify the length of its output, which will result in a Content-Length header. Otherwise, the webserver will send the output in chunks, which results in a "Transfer-Encoding: chunked" header. You won't see both in the same set of headers.
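To illustrate the two mutually exclusive framings described above, here is a hypothetical helper (the function name and return convention are mine, not Hiawatha's) that decides how a proxy should delimit a response body:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch, not Hiawatha's actual code. Decide how a response
 * body is delimited, given the raw header block:
 *   returns -2 for Transfer-Encoding: chunked,
 *           -1 for "read until the peer closes the connection",
 *       n >= 0 for an exact Content-Length of n bytes. */
static long body_framing(const char *headers) {
    const char *cl;
    if (strstr(headers, "Transfer-Encoding: chunked") != NULL) {
        return -2;   /* chunk-size lines delimit the body; no Content-Length */
    }
    if ((cl = strstr(headers, "Content-Length:")) != NULL) {
        return strtol(cl + 15, NULL, 10);   /* parse the byte count */
    }
    return -1;       /* HTTP/1.0 style: body ends when the connection closes */
}
```

The chunked check comes first because, per the HTTP/1.1 spec, Transfer-Encoding takes precedence over Content-Length if both somehow appear; a real implementation would also need case-insensitive header matching.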
David Johnson
21 November 2013, 19:32
Hi Hugo,

Thanks for the update - compiled and installed. Progress. This has fixed the ERR_INCOMPLETE_CHUNKED_ENCODING errors and the failure to transmit image files etc. However, the original problem I was seeing with long delays in POST requests on 9.3 as compared with 9.2 still persists. Example log extracts follow:

1st POST transaction and 302 redirect response/request - 9.3.1 Reverse Proxy

9.3.1 Proxy log:
- - [21/Nov/2013:17:59:52 +0000] "POST /admin/adminindex.php?action=categories&sub=update&type= HTTP/1.1" 302 2205...
- - [21/Nov/2013:17:59:58 +0000] "GET /admin/adminindex.php?action=categories&sub=view&type=&done=1&numb=1 HTTP/1.1" 200 60198...

9.3.1 Backend log:
- - [21/Nov/2013:17:59:47 +0000] "POST /admin/adminindex.php?action=categories&sub=update&type= HTTP/1.1" 302 440...
- - [21/Nov/2013:17:59:52 +0000] "GET /admin/adminindex.php?action=categories&sub=view&type=&done=1&numb=1 HTTP/1.1" 200 59502...

2nd POST transaction and 302 redirect response/request - 9.2 Reverse Proxy

9.2 Proxy log:
- - [21/Nov/2013:18:07:20 +0000] "POST /admin/adminindex.php?action=categories&sub=update&type= HTTP/1.1" 302 2157...
- - [21/Nov/2013:18:07:20 +0000] "GET /admin/adminindex.php?action=categories&sub=view&type=&done=1&numb=1 HTTP/1.1" 200 60105 ...

9.3.1 Backend log:
- - [21/Nov/2013:18:07:20 +0000] "POST /admin/adminindex.php?action=categories&sub=update&type= HTTP/1.1" 302 407...
- - [21/Nov/2013:18:07:20 +0000] "GET /admin/adminindex.php?action=categories&sub=view&type=&done=1&numb=1 HTTP/1.1" 200 59469 ...

Between these two transactions all I did was swap the version of Hiawatha running on the frontend reverse proxy. It makes no difference which version is running on the backend. The 2nd transaction (9.2) goes through instantly; note the delay and the odd timestamp relationship in the 1st transaction. The delay is consistently 6 seconds. It looks like a socket timeout is closing the POST request between the proxy and the backend, or something like that.

Hope this helps. DJ
David Johnson
22 November 2013, 16:23
Hi Hugo,

A bit more information for you which may help. This is certainly linked to the Content-Length header versus the Transfer-Encoding: chunked header. I've been using the Chrome developer tools to check the headers being sent back. The default behaviour of PHP when using a header('Location: ...') redirect appears to be to set the Transfer-Encoding: chunked header and not Content-Length, even though the content length of the body of a 302/303 redirect is zero. Manually adding a Content-Length: 0 header alongside the Location redirect header forces the sending of Content-Length instead of Transfer-Encoding, and this fixes the reverse proxy delay I'm getting in 9.3.1 (and 9.3) which wasn't there in 9.2.

The bad news is that the web app I'm working with is a commercial shopping cart that makes heavy use of these kinds of redirects, so fixing the app is going to be a pain, even if it's possible to do so (parts of the app are ionCube encoded). I guess there will be lots of other PHP scripts out there which also use Location header redirects without manually setting the content length.

Hope this helps. DJ
Hugo Leisink
22 November 2013, 17:19
And what about adding a check for 302 and 303 to line 1542 in src/target.c? Does that also fix the issue?
David Johnson
23 November 2013, 16:36
Hi Hugo,

Adding 302/303 to line 1542 of src/target.c is not enough: as the Transfer-Encoding: chunked header is present, the content_length variable is immediately reset to -1 by the following if block at line 1559. However, after a bit of reading of the RFC on chunked encoding, I think I have tracked the problem down to the block at line 1595:
if (chunked && (bytes_in_buffer >= 7)) {
    if (memcmp(buffer + bytes_in_buffer - 7, "\r\n0\r\n\r\n", 7) == 0) {
        keep_reading = false;

Change this to:

if (chunked && (bytes_in_buffer >= 5)) {
    if (memcmp(buffer + bytes_in_buffer - 5, "0\r\n\r\n", 5) == 0) {
        keep_reading = false;

and the problem goes away. It appears there is no certainty of a CRLF appearing directly in front of the zero-length final chunk - for instance when it is the first thing in the buffer. Whether there are any other implications as a result of this change, I don't know, but everything I have tested it with seems to work correctly with no delay.
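The failure mode above can be reproduced in isolation: when the zero-length body of a 302 arrives so that the final chunk marker is the only thing in the buffer, the 7-byte comparison can never match and the proxy keeps reading until a timeout. A standalone sketch (the function name is mine, not from target.c):

```c
#include <stdbool.h>
#include <string.h>

/* Check whether the buffer ends with the last-chunk marker, using a match
 * length of 7 for "\r\n0\r\n\r\n" (the original code) or 5 for "0\r\n\r\n"
 * (the proposed fix). */
static bool ends_with_last_chunk(const char *buffer, int bytes_in_buffer, int len) {
    const char *pattern = (len == 7) ? "\r\n0\r\n\r\n" : "0\r\n\r\n";
    if (bytes_in_buffer < len) {
        return false;   /* too short to match: the caller keeps reading */
    }
    return memcmp(buffer + bytes_in_buffer - len, pattern, len) == 0;
}
```

With the terminator alone in the buffer ("0\r\n\r\n", 5 bytes), the 7-byte check can never succeed, while the 5-byte one does; when the terminator sits at the tail of a longer read, both succeed.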

Hope this helps. DJ
Hugo Leisink
25 November 2013, 22:15
Thanks, that indeed helps. I understand how that can cause the delay. The first code (the 7 bytes) was already sort of hackish, but with fewer bytes it becomes even more so. There is a slight chance that the content itself contains that pattern, which could cause trouble in certain situations. I've implemented better chunked transfer encoding handling, so please redownload the 9.3.1 version and try that one.
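One way to avoid pattern matching entirely (a minimal sketch of the general size-tracking approach; an assumption about the direction of the fix, not Hiawatha's actual implementation) is to parse each hex chunk-size line and skip exactly that many bytes, so body content that happens to contain "0\r\n\r\n" can never be mistaken for the end of the message:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Sketch only: returns true if buf (assumed NUL-terminated, len bytes of
 * payload) holds a complete chunked-encoded body. Walks the chunk-size
 * lines instead of scanning backwards for a terminator pattern. */
static bool chunked_body_complete(const char *buf, size_t len) {
    size_t pos = 0;
    while (pos < len) {
        char *end;
        long size = strtol(buf + pos, &end, 16);   /* chunk-size is hex */
        const char *crlf = strstr(end, "\r\n");
        if (crlf == NULL) {
            return false;               /* size line not fully received yet */
        }
        pos = (size_t)(crlf - buf) + 2; /* step past the size line */
        if (size == 0) {
            /* last chunk: expect the final CRLF (trailers ignored here) */
            return pos + 2 <= len && memcmp(buf + pos, "\r\n", 2) == 0;
        }
        pos += (size_t)size + 2;        /* skip chunk data plus its CRLF */
    }
    return false;                       /* ran out of data mid-chunk */
}
```

Note the case Hugo warns about: a truncated body whose chunk *data* ends in "0\r\n\r\n" fools any tail-pattern check, but the size-tracking walk correctly reports it as incomplete.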
David Johnson
26 November 2013, 12:19
Hi Hugo,

Latest beta compiled/installed and working great. No sign of the delay problem. Will continue to give it a good workout on my test/dev server in the coming days and report if I find any issues.

Thanks, DJ
Hugo Leisink
3 December 2013, 09:28
And? Everything ok now?
This topic has been closed.