This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Continuous Delivery (CD), the new darling of IT, is an approach that promises to dramatically improve the pace and quality of software delivery by creating a repeatable, reliable pipeline for taking software from concept to customer. The benefits are faster time-to-market, increased quality and improved responsiveness. However, doing CD properly hinges on the maturity of your testing.
A new version of any application should be rigorously tested to ensure it meets all desired system qualities. It is important that all relevant aspects -- whether functionality, security, performance or compliance -- are verified by the application delivery pipeline. If in doubt, test and test again.
Many companies are starting initiatives to build delivery pipelines, automate environment creation and app deployment, and so on. That's all very well, but the most important goal of a pipeline is to allow you to make an informed (and, at some stage, perhaps automated) Go/No-go decision.
What do you have to do to get to that point? Simple: test, test and test again! Few things dampen an executive sponsor's enthusiasm for Continuous Delivery more quickly than a couple of embarrassing post-release outages.
Accurate tests will help you streamline development, enabling you to determine when you've done just enough to implement new features. At the same time, testing will help you to manage risks by ensuring your CD initiative doesn't go off the rails before it's had time to prove its worth.
The more testing you do, the better you will be able to determine whether the new deliverable is better than what's currently running.
The status quo in many companies is there are no automated tests and only limited manual ones. Such a situation severely limits the throughput capacity and predictability of your delivery pipeline. The other extreme is lots of tests using many different tools. How do you aggregate this information to allow you to make a confident Go/No-go decision?
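One way to make that Go/No-go question concrete is to normalize the output of each testing tool into a common shape and apply explicit gating rules. The sketch below is a minimal illustration, not a reference implementation; the tool names, the 1% tolerance for non-critical failures, and the `ToolResult` structure are all assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    tool: str       # e.g. "unit", "security-scan", "perf" (hypothetical names)
    passed: int
    failed: int
    critical: bool  # does any failure here block the release outright?

def go_no_go(results):
    """Aggregate per-tool results into a single release decision."""
    # Any failure in a critical tool blocks the release immediately.
    for r in results:
        if r.critical and r.failed > 0:
            return ("NO-GO", f"critical failures in {r.tool}")
    total = sum(r.passed + r.failed for r in results)
    failed = sum(r.failed for r in results)
    # Non-critical failures: tolerate up to a 1% failure rate (assumed policy).
    if total and failed / total > 0.01:
        return ("NO-GO", f"failure rate {failed}/{total} above threshold")
    return ("GO", "all gates passed")
```

The thresholds would of course be tuned to your own risk appetite; the point is that the decision logic is written down and repeatable rather than argued over at release time.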
In an environment with mostly manual tests, failures are hard to reproduce. This is not a new problem, but it gets worse as your Continuous Delivery initiative ramps up and you run more tests.
At the other extreme, with too many tools and tests, the pipeline often simply takes too long and costs too much to run. Plus, there is the signal-to-noise issue: how much does one failure stand out in a crowd of thousands of less meaningful tests that always pass? What is the right amount of testing for the current context and what is the desired risk/cost/speed tradeoff?
Managing Tests in a CD Organization
You also need to ensure your tests measure all relevant aspects of risk and quality for your applications. You will probably need multiple tools to achieve this.
Your first set of tests should verify the most important aspects of the system. Make sure key use cases still work when a new release goes out. Capture as much information as you need when you run your tests. If you have enough information about what was going on in the app to determine what the problem is, you will spend less time trying to reproduce test failures.
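The "capture as much information as you need" advice can be baked into the test harness itself, so that every failure arrives with the context needed to diagnose it. Here is a minimal sketch: the `context_fn` collector and the fields it returns are hypothetical placeholders for whatever application state matters in your environment.

```python
import datetime
import traceback

def run_with_context(test_fn, context_fn):
    """Run a test; on failure, record app/system context alongside the
    stack trace so the failure can be diagnosed without reproducing it."""
    try:
        test_fn()
        return {"status": "pass"}
    except Exception:
        return {
            "status": "fail",
            "time": datetime.datetime.utcnow().isoformat(),
            "traceback": traceback.format_exc(),
            # Snapshot of app state at failure time (hypothetical collector:
            # e.g. build version, config, open connections, recent log lines).
            "context": context_fn(),
        }
```

A real harness would write these records to your pipeline's reporting store; the key idea is that diagnostic capture happens at failure time, not hours later during a reproduction attempt.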
Then build out test coverage based on the desired risk and quality information -- not necessarily by "test category." More tests are not always better. Clearly, there is a tradeoff between getting more information and increasing runtime cost, maintenance cost, complexity of the overall picture, and so on.
Choose tests according to the context. For example, run more tests in the areas of the system that are being changed in the current pipeline run. You also want to actively manage your test set. Strive to maximize the signal-to-noise ratio of the test output -- a test that never fails can be as wasteful as a test that never succeeds.
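Running more tests in the areas being changed requires some mapping from source areas to the suites that cover them. A simple sketch, assuming a hand-maintained coverage map (the area paths and suite names below are illustrative):

```python
# Hypothetical mapping from source areas to the test suites that cover them.
COVERAGE_MAP = {
    "billing/": ["test_invoicing", "test_payments"],
    "auth/":    ["test_login", "test_sessions"],
}
SMOKE_TESTS = ["test_health_check"]  # always run, regardless of the change

def select_tests(changed_files):
    """Pick the smoke tests plus every suite covering a changed area."""
    selected = list(SMOKE_TESTS)
    for area, suites in COVERAGE_MAP.items():
        if any(f.startswith(area) for f in changed_files):
            selected.extend(suites)
    return selected
```

In practice the changed-file list would come from version control (e.g. the diff between the last green build and the current commit), and the coverage map might be generated from coverage tooling rather than maintained by hand.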
What Do I Need To Do Now?
When you are getting started, the first thing to do is critically examine existing tests to see how long they take to run and whether they are compatible with the targets you have set for your delivery pipeline.
Then, determine the most critical use cases for your application and ask the following: What is the likelihood of breaking this use case? How bad is it if this use case no longer works? How quickly can we fix it if it breaks? Do we have tests to cover this use case? Manual or automated? How many of them do I need to run to be confident enough that the use case still works well?
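Those questions can be turned into a rough, comparable score for deciding where to invest in tests first. The formula and scales below are one illustrative assumption, not an established method: likelihood and impact on a 1-5 scale, with slow fixes raising the stakes.

```python
def risk_score(likelihood, impact, time_to_fix_hours):
    """Rough priority score for test investment: higher means the use
    case deserves test coverage sooner (assumed 1-5 scales)."""
    return likelihood * impact * (1 + time_to_fix_hours / 24)

# Hypothetical use cases scored with the formula above.
use_cases = {
    "checkout":     risk_score(likelihood=3, impact=5, time_to_fix_hours=12),
    "profile-edit": risk_score(likelihood=2, impact=2, time_to_fix_hours=2),
}
# Invest in tests for the highest-scoring use cases first.
ranked = sorted(use_cases, key=use_cases.get, reverse=True)
```

The exact weights matter far less than having the team answer the questions explicitly and agree on the resulting order.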
If you are missing tests, budget the time and money to add them to your pipeline initiative. You may also decide to invest in better mechanisms to fix things quickly. Choose tests based on use cases rather than test categories -- and ensure that you capture information about the systems under test as the tests run. Also, it's smart to ensure your tests are linked to the use cases and parts of the system they cover -- otherwise, it's hard to choose the most appropriate test set.
Continuous Delivery is all the rage, and its benefits are easy to grasp. But doing it properly hinges on the maturity of your testing: every new version should be rigorously verified by the pipeline across functionality, security, performance and compliance. If in doubt, test and test again.
Phillips is an evangelist and thought leader in the DevOps, Cloud and Continuous Delivery space. He sits on the management team at XebiaLabs and drives product direction, positioning and planning.