As an IT consultant, it is always healthy to do a periodic review of client servers to prevent unexpected issues from arising. During a recent review of a client’s server, errors were found relating to w3wp.exe shutting down constantly. “w3wp.exe” is the process that runs an individual application pool in IIS. This particular client runs an internal website that is business critical. Seeing this event raised my concern that the application pool was not functioning properly and that we were in for potential issues in the near future.
My first goal was to confirm that this process did indeed belong to the business-line application website. I opened Task Manager to determine whether multiple w3wp processes were running on the server. Two were found. The next step was to determine which process was shutting down all the time. Logging showed the process was shutting down roughly every 30 minutes to an hour. Careful monitoring determined that one of the processes stayed active, with its memory usage hovering constantly around 100MB, while the other process would drop very low or disappear. I needed to confirm which application pool the active w3wp process belonged to. For this you will need the PID. If it is not shown by default in Task Manager, you can display it by going to View > Select Columns and checking PID (Process Identifier).
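If you prefer the command line, the same information can be pulled with the built-in tasklist utility, filtering on the image name. The memory-usage column it prints also makes it easy to spot which worker process is the one hovering around 100MB:

```shell
REM List every running w3wp.exe worker process with its PID and memory usage
tasklist /fi "imagename eq w3wp.exe"
```
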
Having the PID allows us to run a command that shows which application pool each worker process belongs to. The command differs depending on the server’s operating system and IIS version. First, open a command prompt as administrator.
For Windows Server 2003/2003 R2 with IIS 6:
cscript %SystemRoot%\system32\iisapp.vbs
For Windows Server 2008/2008 R2 with IIS 7.x, or 2012 with IIS 8:
%SystemRoot%\System32\inetsrv\appcmd list wp
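On a server with two worker processes, the output of appcmd list wp looks roughly like the following. The PIDs and pool names here are illustrative, not from the client’s server:

```shell
C:\>%SystemRoot%\System32\inetsrv\appcmd list wp
WP "4120" (applicationPool:BusinessAppPool)
WP "5556" (applicationPool:DefaultAppPool)
```

Match the PID from Task Manager against this list to identify the pool.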
You will receive a list of worker processes with their PIDs and the application pool each one serves. From here, open IIS Manager to view your websites; under the properties of a website, the application pool associated with the site can be found. I was able to determine that the application pool that stayed active belonged to the business-critical website, while the pool that kept shutting down was the default application pool. Crisis averted.
With the worry dissipating, I wanted to determine why the default app pool was stopping and generating errors. If the application pool had not been in use, the simplest answer would have been to disable the website and move on; however, this particular server used the default website for other functions, so troubleshooting began. The first step was to look over the event logs for any other related errors at the times the application pool shut down. None were found. Looking at the timestamps of the alerts, though, revealed a pattern: as noted above, the pool was consistently shutting down almost every 30 minutes. What if this wasn’t a crash at all? Some quick searches showed that application pools can be configured to shut themselves down after being idle for a set period of time, primarily to save resources on the server.
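Querying the System log from the command line makes this kind of pattern easier to spot than scrolling Event Viewer. On Server 2008 and later, WAS (the Windows Process Activation Service) is the source for application pool shutdown events; an idle-timeout shutdown, for instance, is logged as event 5186 when recycle event logging is enabled. A query along these lines (the provider name assumes a 2008+ server) pulls the most recent WAS entries, newest first:

```shell
REM Show the 10 most recent WAS events from the System log, newest first
wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-WAS']]]" /c:10 /rd:true /f:text
```
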
Idle time settings can be found by opening IIS, expanding the server you are connected to, and selecting Application Pools. Find your application pool in the list and open its properties (IIS 6) or Advanced Settings (IIS 7 and later). In the list of properties you will find the Idle Time-out field, with the current value next to it. Our application pool was set to 20 minutes, the typical default. We increased this threshold to two hours to alleviate some of the error messages.
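On IIS 7 and later, the same change can be scripted with appcmd rather than clicked through the GUI. The timeout is a timespan, so two hours is written as 02:00:00; the pool name below is the default pool from this story, so substitute your own:

```shell
REM Raise the idle timeout of DefaultAppPool's worker process to 2 hours
%SystemRoot%\System32\inetsrv\appcmd set apppool "DefaultAppPool" /processModel.idleTimeout:02:00:00

REM Verify the new value
%SystemRoot%\System32\inetsrv\appcmd list apppool "DefaultAppPool" /text:processModel.idleTimeout
```
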
After the update, the logs show virtually no further application pool shutdown messages. This cleaned up the event logs so that other potential issues will stand out if they arise. Without a regular review of the client’s server, we would never have found the issue. Luckily the error turned out to be nothing major, but finding it will help with troubleshooting in the future: the event logs are now free of a message that repeated every 30 minutes, and left unexplained, that error could have led us down a different path, costing hours of troubleshooting.