
Introducing...Declarative Deployment Verification Tests (DDVT)

I'm reading Continuous Delivery (CD) at the moment and it's a thought-provoking read that provides a great opportunity to reflect on your own deployment and delivery problems (oh come on, you have some too, right?). A couple of recent deployment incidents of our own prompted me into action.

Introducing "Declarative Deployment Verification Tests" - or DDVT

So what is this?

Essentially it boils down to approaching deployment scripts as you would a development task - test first - and placing the responsibility for defining deployment success on the developer who, as the person creating the software, is best placed to understand its deployment requirements. That's the basic premise: define the success of the deployment up front in the form of unit tests in your favourite test framework, write the deployment scripts, then rinse and repeat until you have all greens from dev through to production.

I believe the magic ingredient, though, is the 'declarative' part - using BDD-style naming in your tests. This has many benefits - the obvious one being structure and readability - but it also communicates the intent of each deployment action very clearly. That has incredible value because it allows your deployment team, who might not be that au fait with the details of the software they are deploying, to understand what you are trying to achieve. I have personally seen a deployment rolled back because of a very silly, minor deployment script error that caused the deployment guys (quite rightly) to bail out - if they had understood better what was being attempted they could have intelligently inspected the script, possibly made the fix and got the deployment out the door.

Let's jump to an example.

Scenario
You have developed a new feature "X": a Windows service that performs some backend processing against a new table in an existing database and delivers the results via a new endpoint in an existing web service. Let's break this down into the deployment objects and requirements.
  • Windows service
  • Service account
  • New table
  • Service account requires db login and table/sp permissions
  • Ditto for the website service account
  • New web service endpoint
  • ...
Phew - quite a list...and I can guarantee that it grows so fast during development that unless you dedicate tasks to creating the deployment scripts as part of development, you will be left with a huge list that takes much longer than expected to implement at the end.

The typical cycle I have witnessed goes like this: devs either have admin permissions (yikes!) or manually create/deploy/configure things, forget to document each deployment item, create the deployment scripts at the end of development, try a deployment to a testing environment, the scripts fail (you forgot about that db login), fix and re-run, fail (you forgot the table permissions), fix and re-run.......apologise to everyone...it will work this time for sure...oh...

So on to the tests. The BDD test style goes something like this:

GIVEN some context
WHEN something happens
THEN this is the expected result

Applying this to our scenario we would end up with a package of tests for this deployment, split across the Windows service, database and web service. Taking the database deployment as a starting point, here is our pseudo, NUnit-ish DDVT code:
 
namespace Company.Project.Tests.Deployment.FeatureX
{
    [TestFixture]
    [Category("Deployment")]
    [Category("FeatureX")]
    public class WhenTheDatabaseObjectsAreDeployed
    {
        [TestFixtureSetUp]
        public void Given()
        {
            // arrange connections, helpers etc to be used by tests
        }

        [Test]
        public void ThenTheTableBobShouldExistInDbFred()
        {
            // Assert.That..Table Bob does indeed exist in the db Fred
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveDbLoginGranted()
        {
            // Assert.That..the above is true!
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveInsertPermissionToTableBob()
        {
            // Assert.That..you get the idea!....
        }

        // ... more tests ...
    }
}

Repeat for the Windows service and web service deployment objects.
Run your tests and see how readable they appear in your test runner - the Category attributes also let you run just this deployment's tests, eg via the NUnit console runner's /include switch.
Then the best-suited resource can start to implement the deployment scripts.
 

[Screenshot: the DDVT fixture and its results displayed in a test runner]

As you can see from the screenshot, should a test fail it's crystal clear what it was trying to assert, and it gives whoever is executing the DDVT an opportunity to understand the deployment intent and possibly fix the problem on the fly.
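For example, here is a minimal sketch of what the table test might look like once fleshed out - assuming SQL Server and plain ADO.NET; the connection string and server name are illustrative only:

using System.Data.SqlClient;
using NUnit.Framework;

[Test]
public void ThenTheTableBobShouldExistInDbFred()
{
    // illustrative connection string - substitute your real target server
    const string connectionString = "Server=server01;Database=Fred;Integrated Security=SSPI;";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'Bob'",
        connection))
    {
        connection.Open();
        var tableCount = (int)command.ExecuteScalar();

        // a descriptive failure message makes the deployment intent obvious in the runner
        Assert.That(tableCount, Is.EqualTo(1), "Table 'Bob' was not found in database 'Fred'");
    }
}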

If you have followed the CD book you will know that a great deployment process is unlikely to fail - it is exercised too often for problems to survive - and to be honest, if you have a fully automated system then I'm sure you will have cracked this 'verification' problem already; deployment automation solves the majority of issues found at this stage of the software lifecycle.

So what value does the DDVT approach offer then?

Well, for those mere mortals still striving for fully automated continuous delivery perfection, this approach will certainly help tighten up your deployment process, and I think there are real benefits in just approaching deployment script development this way, such as:
  • Declarative, descriptive DDVT created up front rather than cobbled together at the end of development
  • DDVT effectively create a deployment 'spec'
  • Provide a 'done' indicator as well as a 'quality' meter; you know you are 'done' implementing the deployment script when all the tests go green, and you know the 'quality' of a deployment from the number of test failures
  • You can run these tests whenever you wish to sanity check the software install/platform or as a first pass in a troubleshooting run.
In practical terms, if you adopt a DDVT approach you will need to develop some infrastructure code/library to assist with those assert statements - eg, "Assert.That(DDVHelpers.SqlServer("server01").Db("Fred").Table("Bob").Exists(), ...)".
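To give a flavour, here is a bare-bones sketch of how such a fluent helper might hang together - the class and method names come from the example above, but the implementation is illustrative only and assumes SQL Server with ADO.NET:

using System.Data.SqlClient;

public static class DDVHelpers
{
    public static SqlServerTarget SqlServer(string server)
    {
        return new SqlServerTarget(server);
    }
}

public class SqlServerTarget
{
    private readonly string _server;

    public SqlServerTarget(string server)
    {
        _server = server;
    }

    public DbTarget Db(string database)
    {
        return new DbTarget(_server, database);
    }
}

public class DbTarget
{
    private readonly string _server;
    private readonly string _database;

    public DbTarget(string server, string database)
    {
        _server = server;
        _database = database;
    }

    public TableTarget Table(string table)
    {
        return new TableTarget(_server, _database, table);
    }
}

public class TableTarget
{
    private readonly string _server;
    private readonly string _database;
    private readonly string _table;

    public TableTarget(string server, string database, string table)
    {
        _server = server;
        _database = database;
        _table = table;
    }

    public bool Exists()
    {
        // query the catalog views for the table, parameterised to avoid injection
        var connectionString = string.Format(
            "Server={0};Database={1};Integrated Security=SSPI;", _server, _database);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @table",
            connection))
        {
            command.Parameters.AddWithValue("@table", _table);
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}

The payoff is that the assert reads almost like the test name itself, eg Assert.That(DDVHelpers.SqlServer("server01").Db("Fred").Table("Bob").Exists(), Is.True).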
 
I have the bare bones of such a library that I currently use to power my SQL Server database DDVT, and I am also developing Windows service, filesystem and MSMQ DDV helpers. I will make this code available via my OSS project "DeploymentWang" - no promises about when though!
 
Would a Declarative Deployment Verification Testing approach improve your development/deployment process?


Comments

James - when do you envisage these tests being run? If they are run post-deployment in a new environment, what do they add beyond a set of system-tests that not only verify that the deployment worked, but that the features/functions that rely on the deployment also work?

If the error messaging that comes out of a failed test is explicit enough, surely the output of a set of system tests will identify whether or not the deployment worked?

It should be possible to bubble up an exception that says "Feature 'X' failed because table 'Bob' doesn't exist."
Unknown said…
I'd say primarily as part of the actual deployment to give you instant feedback about a specific deployment problem.

The system tests I have seen to date are pretty poor in terms of feedback - they also make massive assumptions about the state of the system, the infrastructure and the software.

I think it all rather depends on how mature/automated your deployment process is. Just approaching deployment script development test-first has benefits, but these grow the more "agricultural" your deployment is. Having a deployment test called "UserBobNotInDbReaderRole" fail is certainly going to give the deployment crew the chance to fix the problem on the spot.

The other major benefit of this approach is that it enables a separation between the what and the how - the devs can state the what, and the deployment/ops/infrastructure team can implement the how using whatever specialist/niche tools they like.
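As a flavour of that, a db reader role check like the one above could be sketched something like this - illustrative only, using the catalog views available from SQL Server 2005 onwards:

[Test]
public void ThenUserBobShouldBeAMemberOfTheDbDataReaderRole()
{
    const string connectionString = "Server=server01;Database=Fred;Integrated Security=SSPI;";
    const string sql =
        @"SELECT COUNT(*)
          FROM sys.database_role_members rm
          JOIN sys.database_principals r ON rm.role_principal_id = r.principal_id
          JOIN sys.database_principals m ON rm.member_principal_id = m.principal_id
          WHERE r.name = 'db_datareader' AND m.name = 'Bob'";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        connection.Open();
        Assert.That((int)command.ExecuteScalar(), Is.EqualTo(1),
            "User 'Bob' is not a member of the db_datareader role");
    }
}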
