
Introducing...Declarative Deployment Verification Tests (DDVT)

I'm reading Continuous Delivery (CD) at the moment - it's a thought-provoking read and a great opportunity to reflect on your own deployment and delivery problems (oh come on, you have some too, right?). A couple of recent deployment incidents of our own prompted me into action.

Introducing "Declarative Deployment Verification Tests" - or DDVT

So what is this?

Essentially it boils down to approaching deployment scripts as you would a development task - test first - and placing the responsibility for defining deployment success on the developer, who, as the person creating the software, should be best placed to understand its deployment requirements. That's the basic premise: define the success of the deployment up front in the form of unit tests in your favourite test framework, write the deployment scripts, then rinse and repeat until you have all greens from dev through to production environments.

I believe the magic ingredient, though, is the 'declarative' part - using BDD-style naming in your tests. This has many benefits - the obvious one being structure and readability - but it also communicates the intent of the deployment action very clearly. This has incredible value as it allows your deployment team, who might not be that au fait with the details of the software they are deploying, to understand what you are trying to achieve. I have personally experienced a deployment being rolled back because of a very silly, minor deployment script error that caused the deployment guys (quite rightly) to bail out and roll back - had they better understood what was being achieved, they could have intelligently inspected the script, possibly made the fix and got the deployment out the door.

Let's jump to an example.

Scenario
You have developed a new feature "X": a Windows service performs some backend processing against a new table in an existing database and delivers the results via a new endpoint in an existing web service. Let's break this down into the deployment objects and requirements.
  • Windows service
  • Service account
  • New table
  • Service account requires db login and table/sp permissions
  • Ditto for the website service account
  • New webservice endpoint
  • ...
Phew - quite a list... and I can guarantee that it grows so fast during development that unless you dedicate tasks to creating the deployment scripts as part of development, you will be left with a huge list that takes much longer to implement at the end.

The typical cycle I have witnessed is this: devs either work with admin permissions (yikes!) or manually create/deploy/configure things, forget to document each deployment item, create the deployment scripts at the end of development, try a deployment to a testing environment, scripts fail (you forgot about that db login), fix and re-run, fail (you forgot the table permissions), fix and re-run... apologise to everyone... it will work this time for sure... oh...

So on to the tests. The BDD test style goes something like this:

GIVEN some context
WHEN something happens
THEN this is the expected result

Applying this to our scenario, we would end up with a package of tests for this deployment, split across the Windows service, database and web service. Taking the database deployment as a starting point, here is our pseudo NUnit-ish DDVT code:
 
namespace Company.Project.Tests.Deployment.FeatureX
{
    [TestFixture]
    [Category("Deployment")]
    [Category("FeatureX")]
    public class WhenTheDatabaseObjectsAreDeployed
    {
        [TestFixtureSetUp]
        public void Given()
        {
            // arrange connections, helpers etc to be used by the tests
        }

        [Test]
        public void ThenTheTableBobShouldExistInDbFred()
        {
            // Assert.That... table Bob does indeed exist in db Fred
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveDbLoginGranted()
        {
            // Assert.That... the above is true!
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveInsertPermissionToTableBob()
        {
            // Assert.That... you get the idea!
        }

        // ... more tests ...
    }
}

Repeat for the Windows service and web service deployment objects.
Run your tests and see how readable they appear in your test runner.
Then the most suited resource can start to implement the deployment scripts.
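For readers outside .NET, the same shape translates directly to other test frameworks. Here is a minimal Python unittest sketch of the database fixture - a hypothetical illustration with the catalog lookup stubbed out, not real deployment code - and running it with verbosity turned up prints each BDD-style test name, which is exactly the readability payoff:

```python
import unittest

# Hypothetical Python mirror of the NUnit fixture above; a real DDVT would
# query the database catalog instead of this stubbed set of deployed tables.
class WhenTheDatabaseObjectsAreDeployed(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Given: arrange connections, helpers etc shared by the tests.
        cls.deployed_tables = {"Bob"}

    def test_then_the_table_bob_should_exist_in_db_fred(self):
        self.assertIn("Bob", self.deployed_tables)

    def test_then_the_service_account_should_have_insert_permission_to_bob(self):
        # Stubbed here; a real check would query the database's permission catalog.
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WhenTheDatabaseObjectsAreDeployed)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```

A failing test in this runner reports the full descriptive method name, so the deployment crew sees exactly which requirement was not met.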
 

As you can see from the screenshot, should a test fail it's crystal clear what it was trying to assert and gives the DDVT test executor an opportunity to understand the deployment intent and possibly fix the problem on the fly.

If you have followed the CD book you will know that a great deployment process is unlikely to fail - it's exercised too often. And to be honest, if you have a fully automated system then I'm sure you will have cracked this 'verification' problem already - deployment automation solves the majority of issues found at this stage of the software lifecycle.

So what value does the DDVT approach offer then?

Well, for those mere mortals still striving for continuous delivery automated perfection, this approach will certainly help tighten up your deployment process, and I certainly think there are benefits in just approaching deployment script development test first, such as:
  • Declarative, descriptive DDVT created up front rather than cobbled together at the end of development
  • DDVT effectively create a deployment 'spec'
  • Provide a 'done' indicator as well as a 'quality' meter; you know you are 'done' implementing the deployment script when all the tests go green, and you know the 'quality' of a deployment based on the number of test failures
  • You can run these tests whenever you wish to sanity check the software install/platform or as a first pass in a troubleshooting run.
In practical terms, if you adopt a DDVT approach then you will need to develop some infrastructure code/library to assist with those assert statements - e.g., "Assert.That(DDVHelpers.SqlServer("server01").Db("Fred").Table("Bob").Exists(), ...)".
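To make that concrete, here is a rough sketch (in Python, with every class and method name hypothetical - this is not the real DeploymentWang code) of how such a fluent helper could be structured. The query runner is injected so the helper can be exercised without a live server:

```python
# Hypothetical fluent DDV helper sketch; assumes a SQL Server-style catalog
# view (sys.tables) for the existence check.

class Table:
    def __init__(self, server, db, name):
        self.server, self.db, self.name = server, db, name

    def existence_query(self):
        # Catalog query for a table with this name in this database.
        return f"SELECT COUNT(*) FROM [{self.db}].sys.tables WHERE name = '{self.name}'"

    def exists(self, run_query):
        # run_query(server, sql) -> row count; injected so tests can stub it.
        # In real use it would execute against self.server over an ODBC connection.
        return run_query(self.server, self.existence_query()) > 0


class Db:
    def __init__(self, server, name):
        self.server, self.name = server, name

    def table(self, table_name):
        return Table(self.server, self.name, table_name)


class SqlServer:
    def __init__(self, server):
        self.server = server

    def db(self, db_name):
        return Db(self.server, db_name)


# Usage in a DDVT, with a stub standing in for a live connection:
def stub_run_query(server, sql):
    return 1 if "name = 'Bob'" in sql else 0  # pretend only table Bob exists

assert SqlServer("server01").db("Fred").table("Bob").exists(stub_run_query)
```

The fluent chain keeps the assertion readable in the test body, which is the whole point of the declarative style.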
 
I have the bare bones of such a library that I currently use to power my SqlServer database DDVT and I am also developing windows service, filesystem and msmq DDV helpers. I will make this code available via my OSS project "DeploymentWang" - no promises about when though!
 
Would a Declarative Deployment Verification Testing approach improve your development/deployment process?


Comments

James - when do you envisage these tests being run? If they are run post-deployment in a new environment, what do they add beyond a set of system-tests that not only verify that the deployment worked, but that the features/functions that rely on the deployment also work?

If the error messaging that comes out of a failed test is explicit enough, surely the output of a set of system tests will identify whether or not the deployment worked?

It should be possible to bubble up an exception that says "Feature 'X' failed because table 'Bob' doesn't exist."
James Simmonds said…
I'd say primarily as part of the actual deployment to give you instant feedback about a specific deployment problem.

The system tests I have seen to date are pretty poor in terms of feedback - they also make massive assumptions about system state and the state of the infrastructure and software too.

I think it all rather depends on how mature/automated your deployment process is really. Just approaching deployment script development test first has benefits but these certainly extend the more "agricultural" your deployment. Having a deployment test called "UserBobNotInDbReaderRole" fail is certainly going to give the deployment crew the chance to fix this on the spot.

The other major benefit of this approach is that it enables a separation between the what and the how - the devs can state the what, and the deployment/ops/infrastructure team can implement the how using whatever specialist/niche tools they like.
