HTTP Crashes and Multithreaded GETs

The cause of the HTTP crashes has now been identified reliably enough.

Time and time again, when the logs are inspected, the cause turns out to be users doing multi-threaded GETs via PHP, i.e. downloading large files over multiple connections through the Ajax filemanager.

In the next update we are looking to implement a server-wide per-IP connection limit, setting the PHP connection limits just above it. Unfortunately, this also means FTP connections will be limited lower than they are now; the alternative is to modify the Ajax filemanager to use direct GETs instead of PHP-handled ones.
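As a sketch of what such a limit could look like, Lighttpd's mod_evasive provides a per-IP connection cap; the value below is illustrative only, not the limit we will actually ship:

```
# lighttpd.conf -- illustrative sketch, not our final values
server.modules += ( "mod_evasive" )

# Cap simultaneous connections from a single IP address;
# further connections are rejected until existing ones close.
evasive.max-conns-per-ip = 10
```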

Every user has direct HTTP access to their data under the URI /data, so if your username is johndoe and your server is foo, the URI is johndoe.foo.pulsedmedia.com/data

I suggest using that instead of the Ajax filemanager download, thus bypassing the PHP layer for large data GETs.
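As a minimal sketch, a direct single-connection download could look like the following; the username, server, and filename here are hypothetical placeholders, substitute your own:

```shell
#!/bin/sh
# Hypothetical account details -- replace with your own.
USER="johndoe"
SERVER="foo"
FILE="linux.iso"

# Direct-access URL under /data, bypassing the PHP-handled
# Ajax filemanager entirely.
URL="http://${USER}.${SERVER}.pulsedmedia.com/data/${FILE}"
echo "$URL"

# A plain single-connection fetch would then be, e.g.:
#   wget "$URL"
```

A single wget or browser download like this opens one connection, rather than the multiple PHP-backed connections that have been crashing the HTTP stack.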

Normally PHP processes are automatically restarted by Lighttpd if they crash, so there must be some kind of bug in the FastCGI layer preventing this from happening under persistent multiple large GETs. What stuns me, however, is that the users who do this aren't alarmed when their connections keep constantly dropping. I would at least be curious about what is going on if that kept happening to me.

We will introduce code changes, or per-IP connection limits, in an upcoming update to fix this problem. For users who open too many connections, it will mean being unable to even access the UI for a moment.

Tuesday, May 17, 2011

<< Geri