
Microservices at Bench: Benchception

By Pavel Rodionov

At Bench we really love the microservices approach to application architecture because it allows you to build your own stack and run lightning-fast iterations. When we implemented our first microservice one year ago, we had a lot of questions:

  1. How do we deploy it, and how can we automate it?
  2. How can we make it scalable from the very beginning?
  3. What’s the best way to run integration tests against it?
  4. What’s the best way to replicate a production environment locally so we can work on it?

Back then we didn’t know the answers, but we had a rough idea and we saw the power of container technologies. After six months of experimentation, we had proven out a few concepts and were ready to launch Project Benchception.

Benchception is a project in two parts. The first part is practical: a set of scripts that pulls all of Bench’s microservices from Git, builds the Docker containers, and runs them in a Vagrant VM. The second part is a language-agnostic set of recommendations for each new microservice. We’ve decided to rely on well-known technologies: Docker, Vagrant, and AWS Elastic Beanstalk.

For now, the project is very Bench-specific, but we’re looking forward to open sourcing some reusable parts to make other people’s lives easier.

Goals

Before starting any actual work, we spent a lot of time talking with our product/dev/ops team members to understand what the main blockers in the development process were. As a result of these planning sessions, Benchception was built with these goals in mind:

  1. Any team member should be able to run the whole system locally.
  2. Any microservice should be deployable and auto-scalable in the cloud.
  3. Any team member should be able to spin up a cloud environment that includes all microservices, for example for demos or integration testing.
  4. Any developer should be able to modify any service and see results immediately.

We took a lot of inspiration from the 12 Factor App document.

Docker

We use a very simple yet powerful approach: every microservice we build is dockerized from the start. The convention is to have a Dockerfile plus a .docker folder for Docker-specific scripts and configuration. For example (Dockerfile):

FROM dockerfile/java:openjdk-7-jdk
ADD .docker /app/docker
ADD target/scala/reporting-service.jar /app/app.jar
EXPOSE 80
CMD /bin/bash /app/docker/run.sh
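
The run.sh referenced by the CMD instruction lives in the .docker folder and is service-specific. A minimal sketch of such an entry point, where JAVA_OPTS and SERVICE_PORT are hypothetical configuration variables:

#!/bin/bash
# .docker/run.sh: a minimal sketch of a container entry point.
# JAVA_OPTS and SERVICE_PORT are hypothetical names; a real service
# would read its configuration from the environment (12 Factor style).
set -e
exec java ${JAVA_OPTS:--Xmx512m} \
  -Dhttp.port="${SERVICE_PORT:-80}" \
  -jar /app/app.jar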

Vagrant

For local development, we use Vagrant with its Docker provider. This lets us spin up all the microservices on a single machine and choose which services we’re interested in.

We keep the Vagrant project in a Git repository. When somebody checks out the project, they see the following directory structure:

.
├── mongodb
│   ├── data
│   └── Dockerfile
├── mysql
│   ├── sql
│   ├── Dockerfile
│   └── prepare.sh
├── nginx
│   ├── Dockerfile
│   └── nginx.conf
├── service1
│   ├── git
│   ├── check.sh
│   ├── download_binary.sh
│   └── prepare.sh
├── service2
│   ├── git
│   ├── check.sh
│   ├── download_binary.sh
│   └── prepare.sh
├── service3
│   ├── git
│   ├── check.sh
│   ├── download_binary.sh
│   └── prepare.sh
├── Vagrantfile
├── init.sh
└── up.sh

Running the microservices is easy; it can be done with a single command:

vagrant up --provider=docker --no-parallel service1 service2 service3 nginx mongodb mysql

We try to reuse existing public Docker images as much as possible. We also recommend tagging images, since pinning the same base image version across multiple containers speeds up the bootstrapping process. The time required to run all of the services for the first time varies and usually depends on network bandwidth, but after that it works lightning fast thanks to Docker’s filesystem layer caching.
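
For instance, pinning base images and tagging service images might look like this (the bench/service1 name and versions are hypothetical):

# Pin base images to an exact tag in every Dockerfile so that
# services share the same cached layers, e.g.:
#   FROM dockerfile/java:openjdk-7-jdk
# Tag service images explicitly when building:
docker build -t bench/service1:1.0.0 .
# Retag instead of rebuilding when promoting a version:
docker tag bench/service1:1.0.0 bench/service1:latest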

You might’ve noticed that every service folder has a git subfolder. This is because we use Vagrant not only to run Docker containers, but also to change source code on the fly and see the results immediately, which gives us a frontend-friendly development workflow.

The Vagrantfile looks like this: first you define a proxy VM that hosts Docker, and then all of the other containers:

Vagrant.configure("2") do |config|

 config.vm.define "proxy" do |proxy|
    proxy.vm.box = "phusion/ubuntu-14.04-amd64"
    proxy.vm.network "private_network", ip: "10.11.12.13"
    proxy.vm.provision "docker"

    proxy.vm.provider "virtualbox" do |v|
                v.memory = 3072
                v.cpus = 2
    end
    proxy.vm.synced_folder "service1/", "/app/service1"
  end
  …
  config.vm.define "service1" do |service1|
    service1.vm.provider "docker" do |d|
      d.name = "service1"
      d.build_dir = "./service1/.docker/"
      d.ports = ["8876:8876"]
      d.vagrant_machine = "proxy"
      d.vagrant_vagrantfile = "./Vagrantfile"
    end
  end

As you can see, we don’t use Docker linking. We found it really hard to maintain the combination of Vagrant port forwarding and Docker linking, so we decided to use a dedicated network interface instead: every container publishes its ports on the proxy VM’s private IP, and services reach each other through that address. The only thing you have to be careful of is port clashes.
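
For example, a service’s dependencies can be passed in as environment variables using the Docker provider’s env option; a minimal sketch, where MONGO_URL is a hypothetical variable that the service’s run.sh would read:

  config.vm.define "service1" do |service1|
    service1.vm.provider "docker" do |d|
      # MONGO_URL is a hypothetical variable read by the service;
      # 10.11.12.13 is the proxy VM's private IP from above.
      d.env = { "MONGO_URL" => "mongodb://10.11.12.13:27017/service1" }
    end
  end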

Elastic Beanstalk

To run our dockerized microservices in production, we use Elastic Beanstalk. The process is very simple: just take a Docker container and deploy it. You can also define auto-scaling rules. The important prerequisite is that your service must be stateless. One thing to keep in mind is that Elastic Beanstalk runs only one Docker container per EC2 instance, so if you need more efficient resource consumption it’s better to look at a CoreOS/etcd combo or wait for Amazon ECS to be released.
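
For a single-container Docker environment, Elastic Beanstalk is driven by a Dockerrun.aws.json file that names the image to run. A minimal sketch, assuming the image has been pushed to a registry under the hypothetical name bench/reporting-service:

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "bench/reporting-service:1.0.0",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "80" }
  ]
}

Deploying then comes down to uploading this file as a new application version and letting Beanstalk roll it out to the instances.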

Continuous Deployment

Dockerizing microservices helped us quickly spin up new environments. Right now, we create a Digital Ocean instance with Docker installed for every environment and then start all of the containers together. This gives us a clean environment every time, driven by Jenkins for continuous integration. We use these environments for multiple purposes: integration testing, feature validation, and demoing Bench to potential clients.
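
At its core, bootstrapping one of these environments is just a handful of docker run commands on a fresh host; a rough sketch, with hypothetical image names, tags, and ports:

# On a fresh Docker host (e.g. a new Digital Ocean instance),
# start the backing stores first, then the services:
docker run -d --name mongodb bench/mongodb:1.0.0
docker run -d --name mysql bench/mysql:1.0.0
docker run -d --name service1 -p 8876:8876 bench/service1:1.0.0
docker run -d --name nginx -p 80:80 bench/nginx:1.0.0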

Conclusion

Moving to microservices has helped us build and release software more efficiently and reliably. Project Benchception is ongoing.

We are planning to add a few more things in the future:

  1. Automation using Ansible. We want to write a set of Ansible playbooks for the major development stages, from development to production deployment.
  2. CLI. We want to build a CLI layer to manage the development workflow: write code, test locally, push the changes and run integration tests, deploy to production, connect to a live Docker container on production. The implementation might be a mix-in on top of existing CLIs (AWS CLI, git, vagrant, docker, etc.) with some Benchception-specific extensions playing nicely together.
  3. A generator for rapid service creation. Basically, this would be an automated dockerization of our microservice stack, but we want to make it language-agnostic.
  4. Migration to CoreOS. This is something we might consider at large scale, when we need microservice discovery and better resource utilization.
  5. Open source practices and frameworks. Our goal is to contribute to the community, and we’ve already taken the first step in that direction by open sourcing our microservice stack.
  6. Publishing Docker containers and reusing them more efficiently instead of building them from scratch every time. Something like quay.io might be useful for this.

This article is part two of our microservices series; if you enjoyed it, you can check out part one here. More articles are coming soon, so stay tuned!