Following our principle of Continuous Skill Enhancement here at Synyx, I recently read the book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble (of ThoughtWorks) and David Farley (of LMAX).
The book consists of three distinct parts.
Part one provides a high-level overview of the basics of software delivery. The authors touch on topics such as configuration management, continuous integration and software testing, describing what each is good for and what the challenges are when implementing it. While these chapters help you understand the terminology used throughout the book, they don’t (and cannot) describe each of the topics in great detail – there are other books for that. But of course you’re already familiar with these topics, so it’s just a little refresher.
Part two is dedicated to the central concept described in Continuous Delivery: the deployment pipeline. The idea is to receive immediate feedback on errors and regressions as early in the development lifecycle of a project as possible and to provide a working application to the users as early and often as possible.
This means that every commit by a developer triggers a run of the deployment pipeline. It starts by building the artifact (obviously), proceeds to the first test stage running unit tests, and from there continues to the integration test phase. If all tests pass, the artifact continues through the later stages of the deployment pipeline, e.g. a smoke test or non-functional test stage (think security and performance tests) and a UAT (user acceptance testing) stage. Finally the artifact ends up in the staging environment, and from there it should require only the click of a button to deploy it to production. Of course the authors describe each step in great detail and have some anecdotes from their projects to lighten up the text.
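The book doesn’t tie the pipeline to any particular tool, so here is a toy sketch of the flow described above. The stage names and the pass/fail “gates” are my own illustration, not from the book; the point is simply that one artifact moves through the stages in order and the first failure stops its promotion.

```python
# Toy deployment pipeline: one artifact, a fixed sequence of stages,
# and the first failing stage stops promotion. Stage names are illustrative.

STAGES = ["commit (build + unit tests)", "integration tests",
          "smoke / non-functional tests", "UAT", "staging"]

def run_pipeline(artifact, gates):
    """Promote `artifact` through STAGES. `gates` maps a stage name to a
    callable returning True (pass) or False (fail); stages without a gate
    pass by default. Returns the list of stages the artifact passed."""
    passed = []
    for stage in STAGES:
        if not gates.get(stage, lambda a: True)(artifact):
            print(f"{artifact} stopped at: {stage}")
            return passed
        passed.append(stage)
    print(f"{artifact} is ready for one-click deployment to production")
    return passed
```

For example, `run_pipeline("app-1.0.jar", {"integration tests": lambda a: False})` gets the artifact through the commit stage and then stops, which is exactly the kind of fast feedback the pipeline is meant to give.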
The central theme of part three is managing the different parts of the delivery ecosystem. The authors discuss the pros and cons of physical servers, virtualized servers and cloud computing, and introduce the reader to automatic machine provisioning, configuration management with Puppet, and monitoring your systems by collecting logs and performance data. They talk about managing test data: how to version it and how to get a basic stock of data for running integration tests in the first place. One chapter is dedicated to the challenges of managing components and dependencies, in which the authors discuss different strategies for versioning the components of your application; it even includes a short introduction to Apache Maven. In the following chapter the authors introduce different revision control systems such as Subversion and Git, as well as commercial alternatives like BitKeeper and ClearCase, and weigh their respective advantages and disadvantages against the free alternatives. They go on to describe several advanced branching and integration strategies, each with its own strengths and weaknesses in different situations.
The last chapter briefly covers rather non-technical questions, such as risk management across the project lifecycle and how compliance and auditing are handled in a project using continuous delivery.
The concepts detailed in Continuous Delivery are not new per se, but it’s the first book I’ve read that really brings them together in one coherent narrative. In fact, most of the concepts will seem obvious once you’ve read and grokked them – but somehow nobody ever thought them through in such depth before.
Some of the distilled concepts are:
- Build binaries exactly once, store them in your artifact repository and promote them through the complete deployment pipeline.
- Only promote builds into staging or production that pass all unit and acceptance tests.
- The development, testing, UAT and staging environments should be as similar as possible to the production environment.
- Automate everything: builds, configuration, tests. Human interaction is prone to error; try to avoid it wherever possible.
- Use version control for everything, including the configuration of underlying operating systems and infrastructure such as networking equipment.
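The first of these principles – build once, then promote the very same binary – can be sketched in a few lines. The `ArtifactRepo` class and its API below are entirely hypothetical (no real repository manager works exactly like this); the sketch only shows the idea that promotion re-tags existing bytes instead of rebuilding them per environment.

```python
import hashlib

# Hypothetical artifact repository: binaries are stored once, keyed by
# checksum, and "promotion" just points an environment tag at that same
# checksum. No environment ever gets a rebuilt (and thus different) binary.

class ArtifactRepo:
    def __init__(self):
        self.blobs = {}   # checksum -> artifact bytes (stored exactly once)
        self.tags = {}    # environment name -> checksum

    def store(self, data: bytes) -> str:
        """Store a freshly built artifact and return its checksum."""
        checksum = hashlib.sha256(data).hexdigest()
        self.blobs[checksum] = data
        return checksum

    def promote(self, checksum: str, environment: str) -> None:
        """Promote an existing artifact; promotion never rebuilds."""
        if checksum not in self.blobs:
            raise ValueError("unknown artifact; never rebuild for promotion")
        self.tags[environment] = checksum

repo = ArtifactRepo()
build = repo.store(b"the one and only binary")
for env in ["test", "uat", "staging", "production"]:
    repo.promote(build, env)   # every environment gets byte-identical bits
```

Whatever reaches production is then provably the same artifact that passed every earlier stage, which is the whole point of the principle.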
Continuous Delivery has rightfully received much praise around the Internet, especially in the recently popularized DevOps movement. In 2011 the authors also won a Jolt Excellence Award in the category The Best Books.
One thing I didn’t like about the book is the way online sources are referenced in the text. Whenever the authors reference a website they provide an alphanumeric shortcode like the ones you know from URL shorteners such as TinyURL. In fact, that’s exactly what they are: the shortcodes can be resolved via Bit.ly or, as a fallback, directly on the book’s supporting website.
This often interrupts the flow of reading. A more traditional style, e.g. placing the shortcodes in footnotes, would have been preferable in the printed version of the book. I also missed a list of all referenced online sources, either at the end of each chapter or in a separate appendix. Fortunately, this is really the only criticism I have of Continuous Delivery.
In conclusion, I can really recommend Continuous Delivery to anyone involved in developing and delivering software. It will provide some new points of view on your work and give you new ideas about how to improve your current processes. I, for one, am looking forward to applying the principles outlined in this book to some of our projects.
If you’re hooked now you might want to read the sample chapter from Continuous Delivery: Chapter 5 – Anatomy of the Deployment Pipeline.
Oh, and by the way: We’re hiring!