
Monitoring (Part 1): Whatchoolookinat?

Continuing my "Business Agility" series (I promise to explain what I mean by this!) I wanted to talk about another area of software that is often overlooked and poorly documented: application performance monitoring.

To start with, why is performance monitoring important and what has it got to do with "business agility"?

As with my previous post on debugging, it is the lack of information that directly impacts the ability of the business to improve its software. Missing performance information is a blind spot over what is actually happening operationally with your software - ignorance of a problem doesn't always equal bliss! In my experience much business software is written without regard for performance. That in itself is not the sin - the sin, in my eyes, is that the ability to measure performance is often not thought about at all, and if performance does become an issue it is often too late to put in place a good, consistent monitoring framework. Performance monitoring should be part of your day 1 design goals!

I also want to define a few things at this point before we really get into it.

1 - I design software from many perspectives ahead of performance - chiefly extensibility (agility) and maintainability; performance is not my primary concern (this is usually true of business software; however, your software may depend upon its performance alone). By this I mean that I prefer to write cleaner, well constructed and understandable code or components rather than ones that may be more performant but less easy to follow, extend, debug or fix. If an elegant design comes up to you and smacks you across the face with an "I will run very slowly" message then obviously take heed, but in my experience you can often use the cleaner design to introduce performance-enhancing techniques like caching into the software, boosting performance far more than a fancy, complicated algorithm that no one can debug.

2 - "Performance monitoring", in the sense I am interested in, means "operational" performance. Many tools exist to analyse the performance of your software. These are primarily developer-orientated, technical tools that instrument your code in situ on a development PC, providing incredibly detailed output down to method and even statement level. Whilst these are valuable - and they certainly are if your code has performance at its heart - I do not think they paint the "operational" picture well; i.e. how does your application perform once deployed on an operational platform? Many performance analysis tools are run during the development phase (and usually at the end, when it is realised that your application *does* have a performance issue - think horse bolted scenario) and really only provide "relative" performance information - absolute timings will only be available once deployed to the production environment, due to the disparity between desktop and server hardware architecture, OS resources and configuration. So what I am specifically talking about is extending these "relative" timings to also include "absolute" timings from an operational environment.

3 - The performance monitoring information itself, at its simplest level, is nothing more than a timing against an operation. This is essentially all you need to identify performance problems - however, you will definitely benefit from correlating parameter and variable values with the performance figures, i.e. creating a "context" for your performance data. It might be a specific combination of values that triggers a spike in the duration of an operation - being able to tie the two together really accelerates problem identification and resolution. I coined (I hope) the term "Time To Fix" (TTF) in my last post about debugging and this fits the same pattern. Performance information can also be recorded as part of your debugging trace - this gives you the vital correlation mentioned. For impact and sheer wow factor you can also use the Windows Performance Monitor system to visualise your application's performance - this works very well for operational monitoring.
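As a trivial illustration of the "timing plus context" idea, here is a minimal sketch (the names `TimedTrace` and `Measure` are my own invention, not from the accompanying code) that emits the duration and the parameter values on the same trace line, so a tool like DebugView shows them together:

```csharp
using System;
using System.Diagnostics;

// Hypothetical helper: times an operation and writes the duration plus its
// "context" (parameter values) as a single trace line for easy correlation.
public static class TimedTrace
{
    public static T Measure<T>(string operation, string context, Func<T> work)
    {
        var sw = Stopwatch.StartNew();
        try
        {
            return work();
        }
        finally
        {
            sw.Stop();
            Trace.WriteLine(string.Format("{0} took {1}ms [{2}]",
                operation, sw.ElapsedMilliseconds, context));
        }
    }
}

// Usage - the trace line ties the duration to the inputs that produced it:
// var orders = TimedTrace.Measure("GetOrders", "customerId=42",
//     () => GetOrders(42));
```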

The most important aspect of performance monitoring is that it should be considered fundamental to diagnosing performance problems and improving the quality of your software in general. As with providing better debugging information, the more (performance) information you are armed with, the better equipped you are to identify, remedy and improve things. I want to convince you that performance monitoring should be baked into your applications from day one, and hope to demonstrate through good component design (I've even provided the code!) that the overhead is minimal and the reward worthwhile.

Before you go any further, if you have not already acquainted yourself with the languages/platform this post is technically targeted at (C#/Microsoft platforms), you might like to read this "primer post" on the background to this post and my approach to software design.

Right, welcome back...

Ok, so what are we talking about at a code level?

A component interface that allows us to invoke any implementation of a "performance monitor". Using dependency injection we can even dynamically configure our monitoring components to suit operational conditions and requirements. Imagine being able to gather performance information triggered by a specific time period, executing user or any other environmental property!

The ability to provide "flavours" of performance monitor based on scenario. Whilst I think that performance information should be available to all applications, there is clearly a difference in requirements between your core enterprise service stacks and other smaller, possibly desktop-orientated applications. This gives us different implementations to satisfy the "horses for courses" maxim.

It should be simple to use with a low footprint in the executing code being monitored.

It needs to be flexible enough to monitor any of your application code; i.e. it should not be tied to method level only.
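To make those requirements concrete, here is a minimal sketch of what such an interface and a low-footprint usage pattern might look like. The names (`IPerformanceMonitor`, `MonitorScope`, `Begin`/`End`) are my own invention for illustration and will not necessarily match the downloadable code.

```csharp
using System;

// Hypothetical monitor interface - any "flavour" (stopwatch, PerfMon, null)
// implements this, and dependency injection picks the concrete type at runtime.
public interface IPerformanceMonitor
{
    void Begin(string operation);
    void End(string operation);
}

// A disposable "scope" keeps the footprint in the monitored code to a single
// using statement, and because it wraps any block of code it is not tied to
// method boundaries.
public sealed class MonitorScope : IDisposable
{
    private readonly IPerformanceMonitor _monitor;
    private readonly string _operation;

    public MonitorScope(IPerformanceMonitor monitor, string operation)
    {
        _monitor = monitor;
        _operation = operation;
        _monitor.Begin(operation);
    }

    public void Dispose()
    {
        _monitor.End(_operation);
    }
}

// Usage - time just the expensive part of a method:
// using (new MonitorScope(monitor, "LoadCustomerOrders"))
// {
//     // ...code to time...
// }
```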

Show me the code...
I have uploaded the code I will talk about to my ProjectDistributor.Net site - it may be worth your while downloading this now so you can trace, at a code level, what I will be talking about next. Essentially the code provides...

The performance monitoring interface

A winform application that demonstrates the use of the performance monitoring interface and also provides custom PerfMon counter install/uninstall

One performance monitor implementation that provides a basic "stopwatch" giving duration and average duration (when used in an iterative scenario like a loop).
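A "stopwatch" monitor of this kind might look something like the sketch below (names are illustrative; the downloadable implementation will differ in detail). It records each duration and keeps a running average for iterative scenarios:

```csharp
using System.Diagnostics;

// Illustrative stopwatch-style monitor: tracks the last duration and a
// running average across repeated Begin/End cycles (e.g. inside a loop).
public class StopwatchMonitor
{
    private readonly Stopwatch _stopwatch = new Stopwatch();
    private long _totalMilliseconds;
    private int _samples;

    public void Begin()
    {
        _stopwatch.Reset();
        _stopwatch.Start();
    }

    public void End()
    {
        _stopwatch.Stop();
        _totalMilliseconds += _stopwatch.ElapsedMilliseconds;
        _samples++;
    }

    public long LastMilliseconds
    {
        get { return _stopwatch.ElapsedMilliseconds; }
    }

    public double AverageMilliseconds
    {
        get { return _samples == 0 ? 0 : (double)_totalMilliseconds / _samples; }
    }
}
```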

One "rich" implementation that provides the ability to visualise your application performance by hooking into Windows PerfMon.
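For the PerfMon-backed flavour, the core idea is simply to publish your timings through a writable `System.Diagnostics.PerformanceCounter`. The sketch below assumes the counter category has already been installed (the winform app in the download handles install/uninstall); the category and counter names here are placeholders, not the ones in the actual code:

```csharp
using System.Diagnostics;

// Illustrative PerfMon-backed monitor: publishes operation durations to a
// custom Windows performance counter so they can be visualised live.
public class PerfMonMonitor
{
    private readonly PerformanceCounter _duration;

    public PerfMonMonitor(string instanceName)
    {
        // readOnly: false makes the counter writable so we can publish values.
        // Category/counter names are placeholders for illustration.
        _duration = new PerformanceCounter(
            "MyApp Counters", "Operation Duration (ms)", instanceName, false);
    }

    public void Record(long elapsedMilliseconds)
    {
        _duration.RawValue = elapsedMilliseconds;
    }
}
```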

A set of PerfMon orientated utilities to help take the pain out of PerfMon integration.

An ASP.Net HttpModule to hook up your web apps to your custom PerfMon counters.
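The general shape of such a module is to start a stopwatch on `BeginRequest` and stop it on `EndRequest`. The sketch below is my own illustration of that pattern (the downloadable module will differ; a PerfMon-backed monitor could publish the elapsed time instead of the `Trace` call shown here):

```csharp
using System.Diagnostics;
using System.Web;

// Illustrative HttpModule: times every request by bracketing it with
// BeginRequest/EndRequest and tracing the elapsed time per URL.
public class RequestTimingModule : IHttpModule
{
    private const string StopwatchKey = "RequestTimingModule.Stopwatch";

    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            HttpContext.Current.Items[StopwatchKey] = Stopwatch.StartNew();
        };

        application.EndRequest += (sender, e) =>
        {
            var sw = HttpContext.Current.Items[StopwatchKey] as Stopwatch;
            if (sw != null)
            {
                sw.Stop();
                Trace.WriteLine(string.Format("{0} took {1}ms",
                    HttpContext.Current.Request.Path, sw.ElapsedMilliseconds));
            }
        };
    }

    public void Dispose() { }
}
```

Because modules are registered in web.config rather than in code, this gives you request-level monitoring with zero footprint in your page or service code.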

A simple webservice to demonstrate the Windows PerfMon monitor implementation.

Installing & Running the code
1. Unzip the code into a folder. You should have...


(Ignore the Resharper folders)

The solution is in the BuildUtilities folder.

Common - this will house all the "common" BuildUtilities libraries
Common/BuildUtilities.Common.Monitoring - this has the monitoring class library. This has all the "common" code we require for monitoring including the performance monitor interface, interface implementations, perfmon hook http module and perfmon helpers.
Monitoring/WebService - The dummy web service that demonstrates the two implementations. This uses the standard ASP.NET development webserver. The service is fixed to run on port 81.

Monitoring/WinForm - The main client, this calls the web service and additionally installs/uninstalls your custom perfmon counters.

2. Load up the solution "BuildUtilities.Monitoring.sln" found in the BuildUtilities folder.

3. Make sure the winform project "TestPerfMonComponentMonitor" is your start up project. Press play/F5. You should have...


4. Install PerfMon counters. Simple - just click the "Install Counters" button!

5. Locate the webserver (otherwise your webservice will fail). You will need to find the webserver executable. It is located in the .Net framework install folder. If you click the "..." button you should be taken to the c:\windows\\framework folder...choose your framework version (minimum v2) folder and select WebDev.WebServer.EXE.

6. Locate webservice code. Click the "..." button and browse to the BuildUtilities\Monitoring\WebService\TestPerfMonService folder.

7. Start Webserver - just click "Start WebServer" - you should get a system tray popup...


8. Install & run DebugView from SysInternals/Technet - this will let you see the web service debug trace to prove the service is being called.

9. Click the "Single Dummy WebService Call" button. You should see a couple of lines of debug appear in DebugView.

10. Click "Start PerfMon" - it should launch the Windows Performance Monitor. Click the big plus button on the toolbar to add a new counter. Select "BuildUtilities.Net Perfmon" from the "Performance object" dropdown. You should have an instance in the right-hand side instance list.


11. Back to the main winform app...this time click the "Burst Dummy WebService Call" button and swap to PerfMon - you should see something like this (image too large to display inline).

Summary - end of part 1
As you can see, we have a fully instrumented webservice application. Using PerfMon I can visually see what is happening with the webservice performance, and I can refer to the DebugView capture to inspect the trace for the service parameter values - as you can imagine, armed with all this information it would be pretty quick for you to hunt down, replicate, investigate and resolve any performance issue you might encounter.

It's taken a few late nights to get the code and this post into shape - in doing so I realised that it would be better to split it into multiple posts rather than one monster post. There is value to be had from this post on its own so it is better to get it published so that you can benefit (or not!) from the example right now.

I will be following up this post with a second part and in it I will try to talk about some of the oddities of Windows PerfMon and also more importantly how you can use these components in your own software.

My parting shot is this: how you do the monitoring is not that important; I am merely demonstrating the way I do it. The important thing here is that performance monitoring, like debug tracing, is something that has to be baked into the application from day 1 - and the reason to do this is to improve the quality of your software and reduce the time to fix issues once it is operational.


