Continuing my "Business Agility" series (I promise to explain what I mean by this!), I wanted to talk about another area of software that is often overlooked and lacking in information... application performance monitoring.
To start with, why is performance monitoring important and what has it got to do with "business agility"?
As with my previous post on debugging, it is the lack of information that directly impacts the ability of the business to improve its software. The lack of performance information is a blind spot as to what is actually happening operationally with your software. Ignorance of a problem doesn't always equal bliss! In my experience much business software is written without regard for performance. However, that is not the sin - the sin in my eyes is that the ability to measure performance is often not thought about at all, and if performance does become an issue it is often too late to put in place a good, consistent monitoring framework. Performance monitoring should be part of your day 1 design goals!
I also want to define a few things at this point before we really get into it.
1 - I design software from many perspectives ahead of performance - chiefly extensibility (agility) and maintainability; performance is not my primary concern (usually - in business software; your software may, of course, depend upon its performance alone). By this I mean that I prefer to write cleaner, well constructed and understandable code or components rather than ones that may be more performant but less easy to follow, extend, debug or fix. If an elegant design comes up to you and smacks you across the face with an "I will run very slowly" message then take heed, obviously, but in my experience you can often use the cleaner design to introduce performance-enhancing techniques like caching into the software and boost performance far more than "a fancy, complicated algorithm that no one can debug" ever would.
2 - "Performance monitoring", in the sense I am interested in is "operational" performance. There are many tools that exist to analyse the performance of your software. These are primarily developer orientated, technical tools that can instrument your code in situ on a development PC. They provide incredibly detailed outputs around method and even statement level code. Whilst these are valuable, and they certainly are if your code has performance at its heart I do not think they paint the "operational" picture well; eg: how does your application perform once deployed on an operational platform? Many performance analysis tools are run during the development phase (and usually at the end when it is realised that your application *does* have a performance issue, think horse bolted scenario) and really only provide "relative" performance information - absolute timings will only be available once deployed to the production environment due to the disparity in desktop and server hardware architecture and OS resources and configuration. So what I am specifically talking about is extending these "relative" timings to also include "absolute" timings from an operational environment.
3 - The performance monitoring information itself, at its simplest level, is nothing more than a timing against an operation. That is essentially all you need to identify performance problems - however you will definitely benefit from correlating parameter and variable values with the performance figures - i.e. creating a "context" for your performance data. It might be a specific combination of values that triggers a spike in the duration of an operation - being able to tie the two together really accelerates problem identification and resolution. I coined (I hope) the term "Time To Fix" (TTF) in my last post about debugging and this fits the same pattern. Performance information can also be recorded as part of your debugging trace - this will give you the vital correlation mentioned. For impact and sheer wow factor you can also use the Windows Performance Monitor system to visualise your application's performance - this works very well for operational monitoring.
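To make the "context" idea concrete, here is a minimal, hand-rolled sketch (not the components discussed later) of capturing a timing together with the parameter values that produced it, so the two are correlated in the debug trace. The operation and parameter names are purely illustrative:

```csharp
using System.Diagnostics;

public static class OrderService
{
    // Hypothetical operation used purely for illustration.
    public static void ProcessOrder(int customerId, int itemCount)
    {
        Stopwatch timer = Stopwatch.StartNew();

        // ... the real work happens here ...

        timer.Stop();

        // Record the timing *and* the parameter values in one trace line so a
        // slow call can be tied straight back to the inputs that caused it
        // (visible in DebugView via the default trace listener).
        Trace.WriteLine(string.Format(
            "PERF ProcessOrder: {0}ms [customerId={1}, itemCount={2}]",
            timer.ElapsedMilliseconds, customerId, itemCount));
    }
}
```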
The most important aspect of performance monitoring is that it should be considered fundamental to helping you diagnose performance problems and improve the quality of your software in general. As with providing better debugging information, the more (performance) information you are armed with, the better equipped you are to identify, remedy and improve things. I want to convince you that performance monitoring should be baked into your applications from day one, and I hope to demonstrate, through good component design (and I've even provided the code!), that the overhead is minimal and the reward worthwhile.
Before you, the reader, go any further: if you have not already acquainted yourself with the specifics of the languages/platform this post is technically targeted at (C#/Microsoft platforms), you might like to read this "primer post" on the background to this post and my approach to software design.
Right, welcome back...
Ok, so what are we talking about at a code level?
A component interface that allows us to invoke any implementation of a "performance monitor". Using dependency injection we can even dynamically configure our monitoring components to suit operational conditions and requirements (there is a rough sketch of what I mean just after this list). Imagine being able to gather performance information triggered by a specific time period, executing user or any other environmental property!
The ability to provide "flavours" of performance monitor based on scenario. Whilst I think that performance information should be available to all applications, there is clearly a difference in requirements between your core enterprise service stacks and other smaller, possibly desktop-orientated applications. This gives us different implementations to satisfy the "horses for courses" maxim.
It should be simple to use with a low footprint in the executing code being monitored.
It needs to be flexible enough to monitor any of your application code; e.g. it should not be tied to method-level timings only.
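To give those requirements some shape, here is a rough sketch of what such a component interface, a simple "stopwatch" flavour and a config-driven factory might look like. All of the names here (IPerformanceMonitor, StopwatchMonitor, the appSettings key) are my illustration for this post, not necessarily the names used in the downloadable code:

```csharp
using System;
using System.Configuration;
using System.Diagnostics;

// A deliberately small contract: any "flavour" of monitor only needs to be
// told when a named operation starts and stops.
public interface IPerformanceMonitor
{
    void Start(string operationName);
    void Stop(string operationName);
}

// A basic "stopwatch" flavour: reports the duration of each Start/Stop pair
// and a running average for iterative scenarios such as loops.
public class StopwatchMonitor : IPerformanceMonitor
{
    private readonly Stopwatch _timer = new Stopwatch();
    private long _totalMilliseconds;
    private int _samples;

    public void Start(string operationName)
    {
        _timer.Reset();
        _timer.Start();
    }

    public void Stop(string operationName)
    {
        _timer.Stop();
        _samples++;
        _totalMilliseconds += _timer.ElapsedMilliseconds;

        Trace.WriteLine(string.Format("PERF {0}: {1}ms (avg {2}ms over {3} calls)",
            operationName, _timer.ElapsedMilliseconds,
            _totalMilliseconds / _samples, _samples));
    }
}

// Because callers only depend on the interface, the concrete flavour can be
// swapped per environment via configuration (or a full DI container).
public static class MonitorFactory
{
    public static IPerformanceMonitor Create()
    {
        string typeName = ConfigurationManager.AppSettings["performanceMonitorType"];
        return string.IsNullOrEmpty(typeName)
            ? new StopwatchMonitor()
            : (IPerformanceMonitor)Activator.CreateInstance(Type.GetType(typeName, true));
    }
}
```

The footprint in the monitored code then stays small:

```csharp
IPerformanceMonitor monitor = MonitorFactory.Create();
monitor.Start("CustomerSearch");
// ... the operation being measured ...
monitor.Stop("CustomerSearch");
```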
Show me the code...
I have loaded the code I will talk about onto my ProjectDistributor.Net site - it may be worth your while downloading this now so you can trace, at a code level, what I will be talking about next. Essentially the code provides....
A winform application that demonstrates the use of the performance monitoring interface and also provides custom PerfMon counter install/uninstall.
One performance monitor implementation providing a basic "stopwatch" that records duration and average duration (when used in an iterative scenario such as a loop).
One "rich" implementation that provides the ability to visualise your application performance by hooking into Windows PerfMon.
A set of PerfMon orientated utilities to help take the pain out of PerfMon integration.
An ASP.NET HttpModule to hook your web apps up to your custom PerfMon counters (a rough sketch of this follows the list).
A simple webservice to demonstrate the Windows PerfMon monitor implementation.
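Since the HttpModule is probably the least familiar item on that list, here is a rough sketch of how an ASP.NET module can time every request. In the real components the timing would feed the custom PerfMon counters; the class name and trace output here are illustrative rather than lifted from the download:

```csharp
using System;
using System.Diagnostics;
using System.Web;

// Times every request handled by the web application. In the real components
// the EndRequest handler would update the custom PerfMon counters; a trace
// line keeps this sketch self-contained.
public class RequestTimingModule : IHttpModule
{
    private const string TimerKey = "RequestTimingModule.Stopwatch";

    public void Init(HttpApplication context)
    {
        context.BeginRequest += delegate(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            app.Context.Items[TimerKey] = Stopwatch.StartNew();
        };

        context.EndRequest += delegate(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;
            Stopwatch timer = app.Context.Items[TimerKey] as Stopwatch;
            if (timer == null) return;

            timer.Stop();
            Trace.WriteLine(string.Format("PERF {0}: {1}ms",
                app.Request.Url.AbsolutePath, timer.ElapsedMilliseconds));
        };
    }

    public void Dispose() { }
}
```

The module is wired up with an <add> entry under <httpModules> in web.config (or <modules> in the IIS 7 integrated pipeline), so the monitored web app itself needs no code changes.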
Installing & Running the code
1. Unzip the code into a folder. You should have...
(Ignore the Resharper folders)
The solution is in the BuildUtilities folder.
Common - this will house all the "common" BuildUtilities libraries
Common/BuildUtilities.Common.Monitoring - the monitoring class library. This has all the "common" code we require for monitoring, including the performance monitor interface, the interface implementations, the perfmon hook HTTP module and the perfmon helpers.
Monitoring/WebService - The dummy web service that demonstrates the two implementations. This uses the standard ASP.NET development webserver. The service is fixed to run on port 81.
Monitoring/WinForm - The main client; this calls the web service and additionally installs/uninstalls your custom perfmon counters.
2. Load up the solution "BuildUtilities.Monitoring.sln" found in the BuildUtilities folder.
3. Make sure the winform project "TestPerfMonComponentMonitor" is your start up project. Press play/F5. You should have...
4. Install PerfMon counters. Simple - just click the "Install Counters" button! (If you are curious what this button actually does, there is a sketch after these steps.)
5. Locate the webserver (otherwise your webservice will fail). You will need to find the webserver executable. It is located in the .Net framework install folder. If you click the "..." button you should be taken to the c:\windows\microsoft.net\framework folder...choose your framework version (minimum v2) folder and select WebDev.WebServer.EXE.
6. Locate webservice code. Click the "..." button and browse to the BuildUtilities\Monitoring\WebService\TestPerfMonService folder.
7. Start Webserver - just click "Start WebServer" - you should get a system tray popup...
8. Install & run DebugView from SysInternals/Technet - this will let you see the web service debug trace to prove the service is being called.
9. Click the "Single Dummy WebService Call" button. You should see a couple of lines of debug appear in DebugView
10. Click "Start PerfMon" - it should launch the Windows Performance Monitor. Click the big plus button on the toolbar to add a new counter. Select "BuildUtilities.Net Perfmon" from the "Performance object" dropdown. You should have a instance in the right hand side instance list.
11. Back to the main winform app... this is the big one... click the "Burst Dummy WebService Call" button and swap to PerfMon - you should see something like this (image too large to display inline).
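For those curious about what is going on behind the "Install Counters" button from step 4 and the counter activity you just watched, it boils down to something like the following sketch using the standard System.Diagnostics types. The category name matches the "BuildUtilities.Net Perfmon" category selected in step 10, but the counter name and the helper itself are my illustration rather than the exact utility code in the download:

```csharp
using System.Diagnostics;

public static class PerfMonHelper
{
    // Category name as selected in PerfMon in step 10; the counter name and
    // everything else here is illustrative.
    private const string CategoryName = "BuildUtilities.Net Perfmon";
    private const string CounterName = "Last Call Duration (ms)";

    // Roughly what the "Install Counters" button does: (re)create the custom
    // counter category. Creating categories requires administrator rights.
    public static void InstallCounters()
    {
        if (PerformanceCounterCategory.Exists(CategoryName))
            PerformanceCounterCategory.Delete(CategoryName);

        CounterCreationDataCollection counters = new CounterCreationDataCollection();
        counters.Add(new CounterCreationData(CounterName,
            "Duration of the last monitored call in milliseconds",
            PerformanceCounterType.NumberOfItems32));

        PerformanceCounterCategory.Create(CategoryName, "Custom application counters",
            PerformanceCounterCategoryType.MultiInstance, counters);
    }

    // Roughly what the monitored code does for each call: push a value into a
    // counter instance, which PerfMon then graphs. Real code would cache the
    // PerformanceCounter rather than create one per call.
    public static void ReportDuration(string instanceName, long milliseconds)
    {
        using (PerformanceCounter counter =
            new PerformanceCounter(CategoryName, CounterName, instanceName, false))
        {
            counter.RawValue = milliseconds;
        }
    }
}
```

Uninstalling is simply the PerformanceCounterCategory.Delete call on its own, which is what the corresponding uninstall button amounts to.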
Summary - end of part 1
As you can see, we have a fully instrumented webservice application. Using PerfMon I can visually see what is happening with the webservice's performance, and I can refer to the DebugView capture to inspect the trace for the service parameter values - as you can imagine, armed with all this information it would be pretty quick for you to hunt down, replicate, investigate and resolve any performance issue you might encounter.
It's taken a few late nights to get the code and this post into shape - in doing so I realised that it would be better to split it into multiple posts rather than one monster post. There is value to be had from this post on its own so it is better to get it published so that you can benefit (or not!) from the example right now.
I will be following up this post with a second part, in which I will talk about some of the oddities of Windows PerfMon and, more importantly, how you can use these components in your own software.
My parting shot is this. How you do the monitoring is not that important; I am merely demonstrating the way I do it. The important thing here is that performance monitoring, like debug tracing, is something that has to be baked into the application from day 1 - and the reason to do this is to improve the quality of your software and reduce the time to fix issues once it is operational.