I recently heard from someone in the industry that around 30-40% of teams are using test-driven development, but only 5-10% of teams are using continuous integration servers. I'm not sure how reliable those numbers are, but that's not important; what is important is the general picture: even the people who seem to understand the benefits of test-driven development don't seem to understand the benefits of continuous integration. I found that very puzzling, especially because I see continuous integration (CI) as a natural progression of test-driven development (TDD). They are, after all, part of the same trend in agile software development towards faster and faster feedback.

Oh, Waterfall...
The trend goes all the way back to that dusty fossil of software development theory, the waterfall model, with its fixed and rigid phases: Analysis, Design, Coding, Integration, Testing (and Deployment and Maintenance, which I won't bother with here). The big problem with waterfall is that once something is planned at the beginning, it is set in stone, and changing it becomes very expensive. Many projects would fail either by running over time or budget, or by the death of a thousand cuts... the dreaded Feature Creep. Clearly there needed to be a more flexible way to get the right plans up front so that there would be no need for changes during development.

Spiralling Towards Agility
This is the motivation for the spiral model, with its repeated iterations of little waterfalls: Analysis, Design, Coding, Integration, and Testing, repeated until done. Of course, the spiral model's flexibility was hampered by too much reliance on planning and documentation, which led to the rise of agile methodologies that repeat the loop in ever smaller, more frequent iterations. Depending on the agile methodology you look at, a one-month iteration may be treated as a mini-waterfall, or the progression may be taken even further, down to the level of a week or even a day, so that on any given day a programmer might do a bit of Analysis, a bit of Design, a bit of Coding, a bit of Integration, and a bit of Testing.
What is the meaning of this trend? The obvious answer is that it is a progression toward greater and greater flexibility. However, proponents of test-driven development see flexibility as a side effect of a more important trend: faster and faster feedback cycles. By seeing where things are going off-track earlier, you can make corrections to get things back on-track earlier. That is the source of the flexibility; when a change or a new requirement comes up, fast feedback allows you to adjust to the new circumstances sooner. So faster feedback leads to greater flexibility.

A New Style
TDD takes this to the next level, breaking the programming task into even smaller iterations and putting the primary feedback mechanism, testing, at the forefront. In fact, it reverses the whole process, putting testing at the beginning of each iteration rather than at the end. The goal of TDD is to use the feedback from unit tests to actually drive development. While this may seem counter-intuitive, once you realize that the goal is faster feedback, it makes sense: the sooner you have the test running, the sooner you are getting feedback from that test.
If for no other reason, programmers should understand TDD because it opens up a new style of programming. Instead of starting by thinking about how to design for the task, the programmer thinks about what it would take for the task to be complete. The tests-first mindset immediately requires breaking the task into much smaller sub-tasks, because it is nearly impossible to write a high-level test from scratch. In fact, the only practical way to proceed is to start with very simple tests and build up from there. You need to ask: "What is the simplest test that will actually test something useful?" In essence, the mindset shifts from task-oriented to feedback-oriented.
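To make this concrete, here is a minimal sketch of what "the simplest test that actually tests something useful" might look like, together with the code that makes it pass. The PriceCalculator class and its numbers are invented for illustration; in practice JUnit would run the test, but a plain main() keeps the sketch self-contained and runnable:

```java
// One TDD micro-cycle, sketched with a hypothetical PriceCalculator.
public class TddCycleSketch {

    // Test first: this method was written before PriceCalculator existed,
    // so the first compile/run fails -- and that failure is the feedback.
    static void testTotalOfThreeItems() {
        PriceCalculator calc = new PriceCalculator();
        if (calc.totalInCents(250, 3) != 750) {
            throw new AssertionError("expected 750 cents for 3 items at 250");
        }
    }

    // Code: the simplest implementation that makes the test pass.
    static class PriceCalculator {
        int totalInCents(int unitPriceInCents, int quantity) {
            return unitPriceInCents * quantity;
        }
    }

    public static void main(String[] args) {
        testTotalOfThreeItems();            // compile and run: red first, then green
        System.out.println("test passed");
        // Next: refactor while the test stays green, then decide which
        // test to write next -- discounts? taxes? invalid quantities?
    }
}
```

The point is not the arithmetic; it is that the test dictated the shape of the code, not the other way around.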
When you think in this mindset, you can actually reverse the normal programming process: Analysis, Design, Coding, Integration, Testing (ADCIT) becomes Testing, Integration, Coding, Design, Analysis (TICDA). But in order for this analogy to make sense, you need to think at a much lower level. The actual process goes something like this: write a small test for some behaviour, compile and run the test to ensure that it fails (you can think of compiling as 'integration' at this low level of coding), write some code that will make the test pass, refactor the code until its design is as simple and clean as possible, then analyze the information you learned to decide which test to write next.

Test-o-matic Tools
The reason this reversal of the process is interesting is that it can change the way you look at programming, and potentially make you much more productive, but it is only possible with the right experience and the right tools. Every phase has tools that help automate it, and it is only by sufficiently automating these phases that TDD becomes economical. Testing is done with a testing framework such as JUnit; compilation/integration is usually handled by the IDE or a build script in Ant; coding is automated somewhat by editor features such as code completion; low-level design is aided by automated refactoring in the IDE; and analysis can even be automated somewhat with a static code analysis tool (either built into the IDE or separate). With all this tool support, TDD can be a very productive style of programming.

Next Up, Continuous Integration
Like testing, integration has followed a similar trend toward faster and faster feedback. In the early days, there was one big integration phase near the end of the project, as different components were basically forced together. This was called Big Bang Integration, and like a big bang, it often exploded: integration efforts frequently failed miserably, pushing development past its deadlines. People started to integrate more often, eventually adopting weekly or even daily builds; the concept of the daily build is promoted by several agile methodologies.
Continuous Integration is the next logical step beyond daily builds. With CI, each programmer takes responsibility for making sure that their contribution does not break the build. Martin Fowler describes it in detail in his article on the subject, but the basic idea of Continuous Integration is that once you finish a task, including writing tests for that task, you:
- Ensure that the project (including all the latest changes from the repository) builds properly on your development machine.
- Then, after fixing any issues that arise, you check in your code and do another clean build on an integration machine to ensure that you properly submitted everything to the repository.
- After you fix any problems from this, you are done. Your change has been included in the repository and everything in the repository is integrated and builds correctly.
When every programmer follows this same procedure each time they commit a contribution to the repository -- often resulting in building dozens of times per day -- this is continuous integration.
Martin Fowler writes:

"If you have continuous integration, it removes one of the biggest barriers to frequent deployment. Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle. This helps break down the barriers between customers and development - barriers which I believe are the biggest barriers to successful software development."

The Feedback Connection
Again, as with testing, a little automation of this procedure can start to change our understanding of integration. With the help of a continuous integration server, the final step of performing a second build on an integration machine can be automated: whenever a programmer commits code to the repository, the CI server detects the event and automatically starts a build, without requiring the programmer to trigger it manually. If the build succeeds, no problem; the code is in the repository and everything is fine. But if the build fails, a report detailing the problem is generated and sent out. That report is essentially a feedback mechanism.
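Schematically, the heart of a CI server is a simple watch-build-notify loop. The sketch below is only an illustration of that idea, not TeamCity's (or any real server's) implementation; the interface and method names are invented:

```java
// A schematic CI-server loop: watch the repository, build on change, report failures.
public class CiLoopSketch {

    interface Repository { String latestRevision(); }
    interface BuildRunner { boolean build(String revision); }  // true = build passed
    interface Notifier { void reportBrokenBuild(String revision); }

    // One pass of the loop: if a new revision has appeared, build it,
    // and send the feedback report only when the build fails.
    static String runOnce(Repository repo, BuildRunner runner, Notifier notifier,
                          String lastBuiltRevision) {
        String head = repo.latestRevision();
        if (!head.equals(lastBuiltRevision)) {
            if (!runner.build(head)) {
                notifier.reportBrokenBuild(head);  // the feedback mechanism
            }
            return head;  // remember what we built, pass or fail
        }
        return lastBuiltRevision;
    }

    public static void main(String[] args) {
        // Toy wiring: revision "r2" is new and its build fails, so a report goes out.
        runOnce(() -> "r2",
                rev -> false,
                rev -> System.out.println("build broken at revision " + rev),
                "r1");
    }
}
```

Everything else a CI server offers (web dashboards, IDE notifications, build history) is layered on top of this loop.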
With this insight, we can see a deeper similarity between testing and integration. Not only do they follow a similar trend; integration is an extension of testing at the team level. When a unit test fails, you are getting feedback on the quality of your own code. When a build fails, you are getting feedback on the quality of the whole team's code. So unit tests are feedback on programmer-level code quality, and builds are feedback on team-level code quality. 'Feedback on quality' is one definition of testing; therefore, by this analogy, integration is team-level testing. A successful integration build means that your contribution is compatible with the other contributions in the repository. An unsuccessful integration build means you need to fix something.
This analogy puts things in a new light. For instance, just as TDD is a progression beyond basic unit testing, CI is a progression beyond the practice of daily builds. The analogy is useful because it suggests that the factors involved in TDD and CI are similar in nature. For example, if more automation opens up a different style of programming with TDD, then more automation may open up a different style of collaboration with CI. Likewise, if TDD is more productive for you, then CI will probably be more productive for you as well, since both are methods of speeding up feedback on code quality.

Well, by that analogy...
If the motivations for adopting TDD are the same motivations for adopting CI, then why aren't more people using CI? I think the problem is that the available tools do not fully support the ideal continuous integration scenario. When all we had was JUnit and a clunky IDE, unit testing wasn't widely adopted either, and there was no sign of TDD yet. Now we have IDEs with automated refactorings, advanced code-completion and code-generation, static code analysis with quick-fixes, and built-in support for testing frameworks (like JUnit and TestNG) in both the IDE and in other tools like Ant. All of these combined make productive TDD possible. Perhaps CI is in a similar early stage with its first major tool, the CI server, but not much else in the way of automation. Maybe someday soon we won't call it simply 'Continuous Integration', but we'll have a more advanced term for it like 'Integration Driven Development', or combine it with TDD and call it 'Feedback Driven Development'.
What would this ideal version of continuous integration be like? The dream is that there is always a complete, up-to-date, working version of the entire software product. Whenever a good change is submitted to the repository, it is automatically incorporated into the final system -- no programmer interaction should be required beyond committing the code. Whenever a bad change is submitted, the programmer responsible is immediately notified, and every effort is made to help them locate and fix the source of the problem. Furthermore, the effect on the other programmers on the team should be minimized as much as possible. Ideally, the code in the repository would never be broken.

Broken Build Syndrome
Today, the situation is not so rosy. Most CI servers do little more than provide a report (email, IM, web, etc.) when the build fails. They may also provide a web interface for tracking the status of builds, which is good, but not yet the level of tool support I'm imagining. Don't get me wrong: a CI server delivers far more than even a daily-build practice, but it doesn't go quite far enough to reach that 'next level', the way TDD is the next level beyond basic unit testing.
One of the biggest problems with today's CI servers is what happens when things go wrong. I call it broken build syndrome. Somebody breaks the build, it hasn't been fixed yet, and it holds many other people back, since they can no longer rely on the repository's codebase. Builds typically take a long time to run, so on a medium or large team broken build syndrome will happen even if you have a CI server, because of this time lag. Sure, the CI server sends out a report detailing the problems with the build, but is it easy to figure out from the report what needs to be done to fix it? Not always. And before you can find the problem, someone else has checked in their own bad code. They get a report too, but they think you broke the build, so they don't realize they also need to fix something. As a result, the build remains broken much longer than necessary, and confusion and frustration spread through the whole team. This is just one of the many ways broken build syndrome can play out.
Should we blame the people involved? Not really. People make mistakes, so our systems should adapt to the people, rather than expecting people to act perfectly. If we draw our inspiration from TDD, we see that the way to solve this problem is to provide better tool support: automating the tedious, error-prone steps and providing better visibility for the problems that do slip through the cracks.

TeamCity
This is actually one of the primary motivations behind TeamCity -- to bring Continuous Integration to the 'next level', with features aimed at minimizing the problems and time wasted by broken build syndrome. Some of the features we already have are:
- Remote Run and Delayed Commit - which perform a complete build without submitting code to the repository, so if the build breaks nobody else will be affected. With Delayed Commit, if the build is successful, the code will be committed to the repository automatically.
- A Distributed Build Grid architecture - allowing many machines to run builds simultaneously, making continuous integration more continuous, without slowing down machines that are in use by your team members.
- A rich web interface for monitoring and administering builds - which provides detailed (but well-organized) information about all builds.
- A team-oriented (rather than tool-oriented) perspective on builds - providing useful metaphors like 'taking responsibility' for a broken build, so that everyone on the team can instantly know whether or not someone is fixing the build.
- Tight IDE integration - allowing such things as jumping directly from an error-report to the exact line in your IDE where the test failed, as well as many other features. (Currently supports IntelliJ IDEA and Visual Studio 2005. Eclipse support is expected by the end of March 2007, and NetBeans support is on the way soon after.)
- Server-side code analysis and code coverage - allowing time-consuming analysis to be performed regularly, with all team members able to benefit by viewing the results online or by fetching them right into their IDEs.
Even though there is a lot of room for improvement in tool support for continuous integration, the benefits of CI over the more common practice of daily (or weekly) builds are very large. If you have already adopted test-driven development, then you really have no excuse: CI is a natural extension of TDD, and the reasons you chose to adopt TDD are the same reasons to adopt CI.
If you haven’t taken a look at TeamCity, check it out here. Or test drive TeamCity online, without a download, here.