Pristine Jenkins

What do you get when you combine a whale and a butler? A CI server that's a lot easier to manage.

It's something that I started doing about a year ago, and I must say I'm loving it! My CI box has never been so clean and clutter-free.

What am I talking about?

I'm talking about running a Docker Engine on your Jenkins [1] box. That's it! Do not install anything else on that machine. What this enables is an utterly simple CI box that can run practically any piece of software required by your projects.

There is no more "jumping on the CI box" and installing this or upgrading that. There is no more resetting the shared MySQL database used by every test suite that runs on that box. Every project, and in fact every individual build, is responsible for setting up and tearing down all of its own resources.

Here is the entire setup:

  • An Ubuntu box (or whatever you desire)
  • Jenkins
  • Java (so we can run Jenkins)
  • Docker
  • Python (so we can run Docker Compose)

That's it!

Benefits

There are many benefits, but let me go over just a few.

No need to install any additional software in CI

This is perhaps my favorite advantage this setup affords. The last time I had to install something on Jenkins was when I set up Jenkins. From then on, we've added dozens of jobs to it, each requiring various languages and dependent services, without ever having to go back in there to install any of those languages or services.

Every piece of software that is required by any job on our CI is run inside a Docker container. Whether it's Java 7 or 8, Node 0.x or 7.x, MySQL or Memcached—if you can get it in a Docker container, then you can run it in CI.

Getting Chrome running alongside a Selenium server? Easy. Spinning up Memcached because some tests require it? No problem. Everything runs in Docker, and Jenkins stays clean.
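As a sketch, "spinning up Memcached" amounts to something like this (the container name, image tag, and helper names are all illustrative, not prescriptive):

```shell
# A throwaway Memcached, created for one test run and discarded after.
# Nothing gets installed on the CI box itself.
start_memcached() { docker run -d --rm --name ci-memcached memcached:1-alpine; }
stop_memcached()  { docker stop ci-memcached; }
# A build would call start_memcached, run its tests, then stop_memcached;
# --rm ensures the container is removed as soon as it stops.
```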

No need to upgrade software in CI

This is actually tied to the previous benefit, but it warrants its own discussion.

If and when you upgrade the base software used in your project, whether that's going from Ruby 2.3 to Ruby 2.4 or moving from Memcached to Redis, the only thing involved in such an upgrade is modifying the base Docker images that you use. There is no more RVM, no more linking and unlinking binaries, and no more battling with environmental issues that prevent you from upgrading software.

The task now becomes a matter of upgrading the Docker images, making sure that all components still play nice with each other, and you're done.
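To make that concrete, here is a minimal sketch of what such an upgrade can look like in a hypothetical Ruby app's Dockerfile (the file layout and commands are assumptions, not a prescription):

```dockerfile
# Hypothetical Dockerfile for a Ruby app.
# Upgrading Ruby 2.3 -> 2.4 is a one-line change to the base image:
FROM ruby:2.4

WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
CMD ["bin/test"]
```

Rebuild, confirm the suite still passes, and the upgrade is done. There are no binaries to link or unlink on the CI box.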

Beautiful!

About the only things that you'll ever have to upgrade on the CI box are the CI server itself and the Docker Engine.

No setup required to run tests locally

Since all of your software dependencies are now explicit in your Dockerfiles and docker-compose.yml files, anyone who has Docker installed on their machine can run the entire test suite, without any additional setup or configuration, simply by running the same Docker or Docker Compose command that your CI server would.
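For example, a project's CI job might boil down to a few Docker Compose commands like the sketch below (the service name `app` and the `bin/test` entry point are assumptions about your particular setup):

```shell
# A sketch of the same commands a CI job would run - anyone with Docker
# can run these locally. "app" and "bin/test" are hypothetical names.
run_build() {
  docker-compose build app                # build the app's image
  docker-compose run --rm app bin/test    # run the suite in a throwaway container
  docker-compose down -v                  # tear down services and volumes
}
# In CI, or on a laptop: run_build
```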

Can you think of a time when you were able to pull down a project that you had never worked on before and, without going through any setup whatsoever, run its test suite immediately?

Builds share fewer things

Prior to this Docker-based setup, practically every build box that I've worked on had, at minimum, a shared MySQL (or Postgres) server, a couple of versions of Ruby and/or Node.js, and something (RVM, a Jenkins plugin, or both) to select the appropriate version of the language for each build.

If you correctly Dockerize the environment needed for builds (i.e., run both the database and the app in dedicated, linked Docker containers), then the chances that something "works on my machine" but fails in CI or any other environment are significantly reduced!

Environmental inconsistencies are not eliminated entirely, but because every environment dependency is explicitly stated in a docker-compose.yml file or a Dockerfile and runs completely isolated from all other projects, tests, and applications, there is much less room for uncontrolled variables to sneak in. Everything is spun up just for your build, according to your explicit specifications, and cleaned up afterward.
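As a sketch, a project's docker-compose.yml might declare its whole environment like this (the service names, image tags, and environment variables here are illustrative, not prescriptive):

```yaml
# Illustrative docker-compose.yml: every dependency the build needs,
# declared explicitly and spun up fresh for each run.
version: '2'
services:
  app:
    build: .
    depends_on:
      - db
    environment:
      DATABASE_HOST: db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
```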

This is a BIG win.

The only things that your builds now share are the Docker Engine, physical machine resources (like CPU, memory, and network bandwidth), and perhaps external dependencies that you couldn't Dockerize for one reason or another (like a third-party API, for example).

Let me say this again: the fact that individual builds/commits/branches can alter the entire environment, without affecting anyone else's builds, is a big deal. This is super useful for experimenting with major changes such as version upgrades, database schema changes, and swapping in/out big dependencies.

Docker makes this very easy.
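One hedged sketch of how builds stay out of each other's way: give each build a unique Docker Compose project name. Jenkins exposes a BUILD_TAG environment variable that works well for this (the helper names below are made up):

```shell
# Give every build its own namespace of containers and networks by
# passing a unique project name to Docker Compose. BUILD_TAG is set by
# Jenkins; the fallback is only for running this outside of CI.
PROJECT="${BUILD_TAG:-local-$$}"

compose_up()   { docker-compose -p "$PROJECT" up -d; }
compose_down() { docker-compose -p "$PROJECT" down -v; }
# Two branches building concurrently get entirely separate environments,
# because their container and network names never collide.
```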

Drawbacks

It wouldn't be fair if I only told you about how green the grass is on the other side without telling you how much it rains over there.

So, here are a few things that you have to consider before buying your plane tickets.

Certain tasks become more complex

If, for example, your CI is constantly using the AWS CLI to talk to AWS, then running the AWS CLI out of a Docker container brings on some complexity. It's no longer a matter of running aws s3 blah blah blah; instead, you have to spin up a Docker container, mount your credentials as volumes in the container, make sure that you actually trust the Docker image you're using (so that your credentials aren't shipped off to who-knows-where), and a few other things.
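As a sketch, wrapping the containerized CLI in a small helper keeps jobs readable (the amazon/aws-cli image is one option; substitute an image you actually trust, and note the helper name is made up):

```shell
# Run the AWS CLI from a container instead of installing it on the box.
# Credentials are mounted read-only; vet the image you use here, since
# it can see everything you mount into it.
aws_cli() {
  docker run --rm -v "$HOME/.aws:/root/.aws:ro" amazon/aws-cli "$@"
}
# What used to be `aws s3 ls` now becomes: aws_cli s3 ls
```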

Same thing for SSH. If you have Jenkins jobs that routinely run Ansible playbooks to manage remote servers, then you have to make sure you mount your keys correctly, get tunneling working (if applicable), and a few other things that may be specific to your situation.
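A similar sketch for the Ansible case, with the image name purely hypothetical and the key mount read-only:

```shell
# Run a playbook from a container; SSH keys and the playbook directory
# are mounted in. "my-ansible-image" is a placeholder, not a real image.
run_playbook() {
  docker run --rm \
    -v "$HOME/.ssh:/root/.ssh:ro" \
    -v "$PWD:/work" \
    -w /work \
    my-ansible-image ansible-playbook "$@"
}
# e.g. run_playbook site.yml -i inventories/production
```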

The point is: getting some things to work in Docker is harder than just getting them to work on the host machine itself. It's not always the case, but it does come up.

You may consider simply installing these tools directly on the CI box and making them available to all jobs.

Everybody on the project has to agree to build using Docker

In some organizations or teams, this may be a show-stopper. Even though you may be loving this idea, you may find some pushback against it. If not everyone on the project agrees to run the build in Docker, then you're going to have a bad time.

Relax. Inhale. Exhale.

Remember, running your build in Docker doesn't mean you have to run Docker in production. Maybe one day, but you certainly don't have to start there.

Learning curve

As with any new technology, if you've never used Docker before, then you're probably not going to learn everything you need and be up and running today. It could happen, but it's unlikely.

The same goes for your teammates. If they haven't used Docker before, they'll face a learning curve of their own before they're up and running with this. Knowledge is generally free, but acquiring it takes time.

Now, the good news is, getting familiar with Docker is fairly straightforward. There is really not a whole lot that you need to know, and there is a lot of community support out there.

That's It!

Like I said, I've been working with such setups for about a year now, and I have never looked back. The bar for getting a job into CI has been lowered considerably.

I hope you give this a try!


[1] I mention Jenkins here because we use Jenkins at the moment, but this applies to any CI server on which you're able to install Docker.

Dariusz Pasciak, Software Craftsman

Dariusz Pasciak is a former 8th Light employee.
