Brief History Of Continuous Integration

Code Development Through the Ages

For the last few years, continuous integration (CI) has been so much a part of my everyday life that I rarely think about how big a jump it was to get there. So let me take this opportunity, since I’m already talking about this topic in the TFS webinar, to take a trip down memory lane.

In the beginning, all developers worked on their machines, with their own sources, and everything was fine. Until the integration phase.

Assuming the team worked with a source control system (yes, even SourceSafe counted as one), developer A would get the latest sources from developer B, and the code wouldn’t compile. It took hours, even days, to resolve, with lots of frustration along the way. The integration part hurt. In a large team, it hurt a lot.

That wasn’t all, though. Even when we managed to compile the application, we noticed that it did not always behave the same. The term “works on my machine” was born, because not all developers had the same tools, or the same versions, installed. Even if all configuration files were under source control (and they weren’t), artifacts would not build the same every time, regardless of labels in the source control system.

So we looked for a tool or process that would solve both the painful integration part and the “works on my machine” part.

Developers started to use automatic build tools. Usually, the build process was not just a click in the IDE – it was a lengthy process, so automation saved lots of time. Plus, at the end of the build process, the tool would tell us whether the build succeeded or not. The feedback that used to be gathered manually became automatic.

So if we could have an automatic build process, why use our own machines for it? We moved the automatic build to a separate machine. It had all the code and dependencies needed for a fully automated build. This added some objectivity to the build process: no more “works on my machine”. Now we had “builds on the server”.

The integration part was not resolved, though – when the build failed, we still needed to put our heads together to make it pass. Integration was still painful.

This time, it wasn’t a tool that helped, but an understanding of the process. The answer was not to integrate big batches of code, but rather to work in small batches. The smaller the change, the fewer problems we’d see. But that went against the nature of developers: we like to work until our feature is complete, not check in code every few minutes and start integrating it every time.

Changing behavior is easier when we have supporting tools. Our automatic build server to the rescue!

We programmed our build engine to automatically collect newly checked-in code and run the build. If it succeeded, great. If not, we could retract the code changes, since they were not that big. With everything automated, and no cost to stopping our work and walking over to the build machine, minimizing the time between check-ins became easier.

Now we had the ability to push small code changes and get feedback from the build server. And we still wanted more.

When we had “works on my machine”, we knew that our code not only built, it also worked (sort of). We knew that because we debugged certain scenarios, or ran the application through them. It gave us confidence, and we wanted that same confidence from our server.

Yes, we could run the debugger, or automate scenarios on the build machine. But automated tests – unit tests, integration tests, and user acceptance tests – are better. The build machine can do that too! It was no longer a build machine: it became a continuous integration server.

What we call CI tools do all of that: get the latest code changes, build the code, run the tests, and give us feedback on the result. Automated CI FTW!
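To make that loop concrete, here is a minimal sketch in Python of a single CI cycle. The repository directory and the build and test commands are hypothetical placeholders, not the configuration of any particular CI tool.

```python
import subprocess

# Hypothetical values; a real CI server reads these from its job configuration.
REPO_DIR = "/builds/my-app"        # assumed local clone of the repository
BUILD_CMD = ["dotnet", "build"]    # placeholder build step
TEST_CMD = ["dotnet", "test"]      # placeholder test step

def run(cmd):
    """Run a command in the repository directory; return True on success."""
    return subprocess.run(cmd, cwd=REPO_DIR).returncode == 0

def ci_cycle():
    # 1. Get the latest code changes.
    if not run(["git", "pull"]):
        return "failed to fetch sources"
    # 2. Build the code.
    if not run(BUILD_CMD):
        return "build failed"
    # 3. Run the tests.
    if not run(TEST_CMD):
        return "tests failed"
    # 4. Report the result back to the team.
    return "build and tests passed"

if __name__ == "__main__":
    print(ci_cycle())
```

A real server wraps this cycle in a trigger (a check-in hook or a polling loop) and sends the result to the team instead of printing it, but the steps are the same.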

Today we’re actually taking it further, with continuous delivery: not only do we know our code works, we can automatically deploy it to our customers.

Our tools are evolving and lowering the bar, so people new to CI will not be scared off by the effort needed to set up a system – which can be done in less than 45 minutes.

Don’t believe me? Click here to watch the webinar.