After upgrading to Azure SDK 2.5 with Windows Azure Diagnostics 1.2 (see http://blogs.msdn.com/b/kwill/archive/2014/12/02/windows-azure-diagnostics-upgrading-from-azure-sdk-2-4-to-azure-sdk-2-5.aspx) you may notice that IIS logs and failed request (FREB) logs are no longer transferred to storage.
Root Cause
When WAD generates the diagnostics configuration it queries the IIS Management Service to find the location of the IIS logs; by default this location is %SystemDrive%\inetpub\logs\LogFiles. In a PaaS WebRole, IISConfigurator configures IIS according to your service definition, and part of this setup changes the IIS log file location to C:\Resources\directory\{deploymentid.rolename}.DiagnosticStore\LogFiles\Web. The WAD configuration happens before IISConfigurator runs, which means WAD is watching the wrong folder for IIS logs.
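You can verify which location IIS currently reports by reading the site configuration yourself, for example from an elevated RDP session on the instance. The sketch below uses the Microsoft.Web.Administration library (reference the DLL from %WinDir%\System32\inetsrv); it is an illustrative check only and is not part of the workaround.

// Prints the configured IIS log directory for each site.
// Requires a reference to Microsoft.Web.Administration.dll and admin rights.
using System;
using Microsoft.Web.Administration;

class IisLogDirCheck
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            foreach (Site site in serverManager.Sites)
            {
                // Before IISConfigurator has run this still shows the
                // %SystemDrive%\inetpub\logs\LogFiles default; afterwards it
                // shows the C:\Resources\...DiagnosticStore\LogFiles\Web path.
                Console.WriteLine(site.Name + ": " + site.LogFile.Directory);
            }
        }
    }
}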
Workaround
To work around this issue you have to restart the WAD diagnostics agent after IISConfigurator has set up IIS. When the WAD diagnostics agent starts up again it will query the IIS Management Service for the IIS log file location and will get the correct C:\Resources\directory\{deploymentid.rolename}.DiagnosticStore\LogFiles\Web location.
The two ways to restart the diagnostics agent are:
- Reboot the VM. This can be done from the portal or from an RDP session to the VM.
- Update the WAD configuration, which will cause the diagnostics agent to refresh its configuration. This can be done from Visual Studio (Server Explorer –> Cloud Services –> Right-click a role –> Update Diagnostics –> Make any change and update) or from PowerShell (see this post).
One problem with these two options is that you have to do this manually for each role/VM in your service after deploying. The bigger problem is that any operation which recreates the Windows (D: drive) partition will also reset the IIS log file location to the default %SystemDrive% location, which will cause the diagnostics agent to once again pick up the wrong location. This will happen to all instances roughly once per month for Guest OS updates, and can happen randomly to single instances due to service healing (see this and this for more info).
Resolution
The WAD dev team is working to fix this issue in the next Azure SDK release. In the meantime, you can add the following code to your WebRole.OnStart method to automatically reboot the VM once during initial startup.
public override bool OnStart()
{
    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

    // Write a RebootFlag.txt file to track whether this VM has already been
    // rebooted to fix the WAD 1.2 IIS logs issue. The relative path resolves
    // to the host process's working directory, %RoleRoot%\approot\bin.
    string path = "RebootFlag.txt";

    // If RebootFlag.txt already exists then skip rebooting the VM
    if (!System.IO.File.Exists(path))
    {
        System.IO.File.WriteAllText(path, "Writing RebootFlag at " + DateTime.Now.ToString("O"));
        System.Diagnostics.Trace.WriteLine("Rebooting");
        System.Diagnostics.Process.Start("shutdown", "/r /t 0");
    }

    return base.OnStart();
}
Note that this code uses a file on the %RoleRoot% drive as a flag, so it will also cause an additional reboot in extra scenarios such as portal reboots and in-place upgrades (see this post), but these scenarios are rare enough that they should not cause an issue in your service. If you wish to avoid these extra reboots you can set the role runtime execution context to elevated by adding <Runtime executionContext="elevated" /> to the .csdef file, and then either write the flag file to the %SystemRoot% drive or write a flag to the registry, as sketched below.
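As an illustration of the registry approach, here is a minimal sketch of an elevated OnStart that keeps the flag under HKEY_LOCAL_MACHINE instead of on the %RoleRoot% drive. The key path and value name are invented for this example, and the sketch assumes the elevated executionContext described above.

public override bool OnStart()
{
    // Requires <Runtime executionContext="elevated" /> in the .csdef,
    // since writing to HKEY_LOCAL_MACHINE needs administrator rights.
    // The key path and value name below are arbitrary examples.
    const string keyPath = @"SOFTWARE\MyService";
    const string valueName = "WadRebootFlag";

    using (var key = Microsoft.Win32.Registry.LocalMachine.CreateSubKey(keyPath))
    {
        // Only reboot if the flag has not been written yet. The registry lives
        // on the Windows (D: drive) partition, so the flag survives portal
        // reboots and in-place upgrades, and is only reset when the OS
        // partition is rebuilt (at which point another reboot is needed anyway
        // because the IIS log location has been reset too).
        if (key.GetValue(valueName) == null)
        {
            key.SetValue(valueName, DateTime.Now.ToString("O"));
            System.Diagnostics.Trace.WriteLine("Rebooting");
            System.Diagnostics.Process.Start("shutdown", "/r /t 0");
        }
    }

    return base.OnStart();
}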