We have a LAMP setup that has been working pretty well for about half a year. All of a sudden, today the Apache server (the MySQL servers are not on this box) started to die. It seems to spawn more and more processes over time, until eventually it consumes all the memory and the server just dies. We are using the prefork MPM.
In the meantime, all we've done is add more RAM and raise the MaxClients and ServerLimit parameters to 512, which only postpones the crash. The process count still climbs slowly; in maybe a day it will reach that limit again.
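For reference, the prefork section of our httpd.conf now looks roughly like this (only ServerLimit and MaxClients are the values mentioned above; the other numbers are illustrative, not necessarily our exact settings):

    <IfModule prefork.c>
        # ServerLimit must be >= MaxClients; both were raised to 512.
        # The remaining values are illustrative defaults.
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         512
        MaxClients          512
        MaxRequestsPerChild 4000
    </IfModule>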
What is going on? We only get around 15-20 requests per second. We have 1 GB of memory and less than half of it is in use; there's no swapping going on. Why does Apache keep creating more and more processes? It's almost like there's a leak somewhere!
The database boxes are fine and aren't adding any delay to requests; we tested some queries and everything comes back quickly.
-
In your httpd.conf file, you'll likely have a section commented out that looks similar to:

    <IfModule mod_status.c>
        <Location "/server-status">
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>
        ExtendedStatus On
    </IfModule>

Looking at one of my servers that's had a problem with the load getting too high, I can see a similar problem ... the numbers in the 'SS' column should never get that high:
    Srv   PID    Acc       M  CPU   SS     ... Request
    0-0   22830  1/9/3640  K  2.36  7      ... GET /[].css HTTP/1.1
    1-0   79114  0/0/858   W  0.00  121462 ... POST /cgi/[] HTTP/1.1
    2-0   22856  0/1/3211  W  0.00  20     ... POST /cgi/[] HTTP/1.1
    3-0   22890  0/0/2697  W  0.00  0      ... GET /server-status HTTP/1.0
    4-0   79105  0/5/525   W  0.34  121463 ... POST /cgi/[] HTTP/1.1
    5-0   22892  1/1/764   K  0.00  6      ... GET /[].js HTTP/1.1
    6-0   22893  1/1/449   K  0.00  5      ... GET /[].js HTTP/1.1
    7-0   22894  1/1/57    K  0.00  5      ... GET /[].js HTTP/1.1
    8-0   22895  1/1/426   K  0.00  4      ... GET /[].js HTTP/1.1
    9-0   -      0/0/40    .  0.00  2      ... OPTIONS * HTTP/1.0
    10-0  22897  0/0/16    _  0.00  4      ... OPTIONS * HTTP/1.0
    11-0  22898  0/0/8     _  0.00  4      ... OPTIONS * HTTP/1.0

(You might need to scroll down to see that table -- the upper part of the page shows overall server statistics, followed by a visualization of what each of the children is currently doing. SS is the number of seconds since the start of that child's most recent request, so the two entries around 121462 have been stuck on the same request for well over a day.)
Update: of course, this assumes something is actually going wrong (and based on your comment of only 10-15 requests per second, it probably is). I have some other servers where people mirror files from us; the files are quite large, and a few folks have been known to open 500 streams over not-so-great bandwidth, which eats up all 1024 connections -- but that's perfectly normal and doesn't cause a crash.
If you're having problems with runaway CGIs, you might consider using suEXEC or CGIWrap to limit their execution time, although there is some overhead in using them.
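Separately from those tools, Apache's core RLimit directives can also cap the resources that processes forked by the server (such as CGI scripts) may consume. A minimal sketch for httpd.conf, with purely illustrative values:

    # Applies to processes forked by the Apache children (e.g. CGI scripts).
    # First number is the soft limit, the optional second is the hard limit.
    # CPU seconds per process:
    RLimitCPU 60 120
    # Memory (address space) in bytes per process:
    RLimitMEM 134217728 268435456
    # Processes per user; be careful if all CGIs run as the web server user:
    RLimitNPROC 25 50

These can go in the main server config, a virtual host, or a directory context, so you can scope them to just the CGI paths if you prefer.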
From Joe H.