I'm reading Continuous Delivery (CD) at the moment - it's a thought-provoking read and a great opportunity to reflect on your own deployment and delivery problems (oh come on, you have some too, right?). A couple of recent deployment incidents of our own prompted me into action.
Introducing "Declarative Deployment Verification Tests" - or DDVT
So what is this?
Essentially it boils down to approaching deployment scripts as you would a development task - test first - and it places the responsibility for defining deployment success on the developer who, as the person creating the software, should be best placed to understand its deployment requirements. That's the basic premise: define the success of the deployment up front in the form of unit tests in your favourite test framework, write the deployment scripts, rinse and repeat until you have all greens from dev through to production environments.
I believe the magic ingredient, though, is the 'declarative' part - using BDD-style naming in your tests. This has many benefits - the obvious ones being structure and readability - but it also communicates the intent of each deployment action very clearly. That has real value because it allows your deployment team, who might not be that au fait with the details of the software they are deploying, to understand what you are trying to achieve. I have personally experienced a deployment being rolled back because of a very silly, minor deployment script error that caused the deployment guys (quite rightly) to bail out and roll back - if they had understood better what was being achieved they could have intelligently inspected the script, possibly made the fix and got the deployment out the door.
Let's jump to an example.
Scenario
You have developed a new feature "X" that has a windows service that performs some backend processing against a new table in an existing database and delivers the results via a new endpoint in an existing webservice. Let's break this down into the deployment objects and requirements.
- Windows service
- Service account
- New table
- Service account requires db login and table/sp permissions
- Ditto for the website service account
- New webservice endpoint
- ...
Phew - quite a list... and I can guarantee that this grows so fast during development that, unless you dedicate tasks to creating the deployment scripts as part of development, you will be left with a huge list that takes much longer to implement at the end.
The typical cycle I have witnessed goes like this: devs work with admin permissions (yikes!) or manually create/deploy/configure things, forget to document each deployment item, create the deployment scripts at the end of development, try a deployment to a testing environment, the scripts fail (you forgot about that db login), fix and re-run, fail (you forgot the table permissions), fix and re-run... apologise to everyone... it will work this time for sure... oh...
So on to the tests. The BDD test style goes something like this:
GIVEN some context
WHEN something happens
THEN this is the expected result
Applying this to our scenario we would end up with a package of tests for this deployment split across the windows service, database and web service. Taking the database deployment as a starting point, here is our pseudo NUnit-ish DDVT code:
using NUnit.Framework;

namespace Company.Project.Tests.Deployment.FeatureX
{
    [TestFixture]
    [Category("Deployment")]
    [Category("FeatureX")]
    public class WhenTheDatabaseObjectsAreDeployed
    {
        [TestFixtureSetUp]
        public void Given()
        {
            // arrange connections, helpers etc. to be used by the tests
        }

        [Test]
        public void ThenTheTableBobShouldExistInDbFred()
        {
            // Assert.That... table Bob does indeed exist in db Fred
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveDbLoginGranted()
        {
            // Assert.That... the above is true!
        }

        [Test]
        public void ThenTheWindowsServiceAccountShouldHaveInsertPermissionToTableBob()
        {
            // Assert.That... you get the idea!
        }

        // ... more tests ...
    }
}
Repeat for the windows and web service deployment objects.
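For example, the windows service part might look something like the sketch below. The fixture and service names here are illustrative only (not from the scenario above), and ServiceController is just one way to check the service is installed:

using System.Linq;
using System.ServiceProcess;
using NUnit.Framework;

namespace Company.Project.Tests.Deployment.FeatureX
{
    [TestFixture]
    [Category("Deployment")]
    [Category("FeatureX")]
    public class WhenTheWindowsServiceIsDeployed
    {
        [Test]
        public void ThenTheFeatureXServiceShouldBeInstalled()
        {
            // assumes the service was installed under the name "FeatureXService"
            var installed = ServiceController.GetServices()
                .Any(s => s.ServiceName == "FeatureXService");

            Assert.That(installed, Is.True, "Windows service 'FeatureXService' is not installed");
        }
    }
}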
Run your tests and see how readable they appear in your test runner.
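Because the fixtures are tagged with [Category] attributes, you can also pick out just the deployment tests from a build or deployment step rather than running the whole suite. With the NUnit 2.x console runner that might look something like this (assembly name assumed):

nunit-console.exe Company.Project.Tests.dll /include:Deployment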
Then the most suited resource can start to implement the deployment scripts.
As you can see from the screenshot, should a test fail it's crystal clear what it was trying to assert and gives the DDVT test executor an opportunity to understand the deployment intent and possibly fix the problem on the fly.
If you have followed the CD book you will know that a great deployment process is unlikely to fail - it is exercised too often - and to be honest, if you have a fully automated system then I'm sure you will have cracked this 'verification' problem already; deployment automation solves the majority of issues found at this stage of the software lifecycle.
So what value does the DDVT approach offer then?
Well, for those mere mortals still striving for automated continuous delivery perfection, this approach will certainly help tighten up your deployment process, and I think there are clear benefits in approaching deployment script development this way, such as:
- Declarative, descriptive DDVT created up front rather than cobbled together at the end of development
- DDVT effectively create a deployment 'spec'
- Provide a 'done' indicator as well as a 'quality' meter; you know you are 'done' implementing the deployment script when all the tests go green, and you know the 'quality' of a deployment from the number of test failures
- You can run these tests whenever you wish to sanity check the software install/platform or as a first pass in a troubleshooting run.
In practical terms, if you are to adopt a DDVT approach then you will need to develop some infrastructure code/library to assist with those assert statements - eg, "Assert.That(DDVHelpers.SqlServer("server01").Db("Fred").Table("Bob").Exists(), ...)".
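To make that concrete, here is a rough sketch of what such a fluent helper could look like. The class and method names simply follow the example call above; the ADO.NET lookup inside Exists() is my own assumption of one possible implementation, not the actual DeploymentWang code:

using System.Data.SqlClient;

// Rough sketch only - the fluent shape follows the example assert above,
// the implementation details are assumptions.
public static class DDVHelpers
{
    public static SqlServerTarget SqlServer(string server)
    {
        return new SqlServerTarget(server);
    }
}

public class SqlServerTarget
{
    private readonly string _server;
    public SqlServerTarget(string server) { _server = server; }

    public DatabaseTarget Db(string database) { return new DatabaseTarget(_server, database); }
}

public class DatabaseTarget
{
    private readonly string _server;
    private readonly string _database;

    public DatabaseTarget(string server, string database)
    {
        _server = server;
        _database = database;
    }

    public TableTarget Table(string table) { return new TableTarget(_server, _database, table); }
}

public class TableTarget
{
    private readonly string _server;
    private readonly string _database;
    private readonly string _table;

    public TableTarget(string server, string database, string table)
    {
        _server = server;
        _database = database;
        _table = table;
    }

    // True if the table exists in the target database (integrated security assumed)
    public bool Exists()
    {
        var connectionString = string.Format(
            "Data Source={0};Initial Catalog={1};Integrated Security=True", _server, _database);

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @table", connection))
        {
            command.Parameters.AddWithValue("@table", _table);
            connection.Open();
            return (int)command.ExecuteScalar() > 0;
        }
    }
}

The payoff is that the assert inside each test reads almost exactly like the BDD-style test name above it.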
I have the bare bones of such a library that I currently use to power my SqlServer database DDVT and I am also developing windows service, filesystem and msmq DDV helpers. I will make this code available via my OSS project "DeploymentWang" - no promises about when though!
Would a Declarative Deployment Verification Testing approach improve your development/deployment process?
Comments
If the error messaging that comes out of a failed test is explicit enough, surely the output of a set of system tests will identify whether or not the deployment worked?
It should be possible to bubble up an exception that says "Feature 'X' failed because table 'Bob' doesn't exist."
The system tests I have seen to date are pretty poor in terms of feedback - they also make massive assumptions about the state of the system, the infrastructure and the software.
I think it all rather depends on how mature/automated your deployment process really is. Just approaching deployment script development test first has benefits, and those benefits grow the more "agricultural" your deployment is. Having a deployment test called "UserBobNotInDbReaderRole" fail is certainly going to give the deployment crew the chance to fix this on the spot.
The other major benefit of this approach is that it enables a separation between the what and the how - the devs can state the what, and the deployment/ops/infrastructure team can implement the how using whatever specialist/niche tools they like.