
Microservices at Bench: Intro

By Yaroslav Tkachenko


Microservices architecture is becoming very popular, but it’s not a new concept; many companies already use it successfully, and Bench is one of them. Bench’s app started as a classic JEE monolith written with Spring and Java, and we’ve been evolving it by implementing new features as microservices. This post is the first in a series on microservices, and it outlines the technologies we use to build them. The second post will go into more detail about how we deploy and maintain our infrastructure.

Bench Service


We’ve created a giter8 template for our typical microservice app: bench-service.g8. It’s very simple to use, so I encourage you to try it.

Install g8:

brew update && brew install giter8

And then just run:

g8 BenchLabs/bench-service.g8

You’ll be prompted for the name and other details of your new service.

When it’s ready, you can run it with:

sbt run

And then you’ll be able to see a working application on the port you specified (for example, http://localhost:8876/api/endpoint2).


As you can see, a typical Bench Service is written in Scala. We truly believe in the Reactive Manifesto, and Akka is a perfect foundation for reactive applications. We use the Actor model to implement services (in the SOA sense). If you haven’t seen Scala or Akka before, this little code snippet from the Akka documentation will help:

import akka.actor._

case class Greeting(who: String)

class GreetingActor extends Actor with ActorLogging {
  def receive = {
    case Greeting(who) ⇒ log.info("Hello " + who)
  }
}

val system = ActorSystem("MySystem")
val greeter = system.actorOf(Props[GreetingActor], name = "greeter")
greeter ! Greeting("Charlie Parker")

Spray is used as the HTTP layer for implementing RESTful APIs. Every microservice usually exposes a RESTful API that serves as the main interface for other services to interact with. The Spray routing DSL is concise and powerful. Here is an example of a complete spray-routing application:

import akka.actor.ActorSystem
import spray.routing.SimpleRoutingApp

object Main extends App with SimpleRoutingApp {
  implicit val system = ActorSystem("my-system")

  startServer(interface = "localhost", port = 8080) {
    path("hello") {
      get {
        complete {
          <h1>Say hello to spray</h1>
        }
      }
    }
  }
}

Apache Camel is used as a middleware layer between Akka actors and almost anything you can think of: queue servers, third-party APIs, cloud providers and even social networks. Imagine you want to poll an S3 bucket for new files and, once a new file arrives, pipe it directly to your actor; it’s just one line of code in the Camel DSL, and your actor will receive all S3 files asynchronously:
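A minimal sketch with akka-camel’s Consumer trait (the bucket name and credentials here are placeholders, and the exact endpoint options depend on your Camel version):

```scala
import akka.camel.{CamelMessage, Consumer}

// Hypothetical consumer: every new object in the bucket arrives as a CamelMessage.
class S3FileConsumer extends Consumer {
  // "my-bucket" and the credentials are placeholders, not real values
  def endpointUri = "aws-s3://my-bucket?accessKey=KEY&secretKey=SECRET"

  def receive = {
    case msg: CamelMessage => println("New S3 object: " + msg.body)
  }
}
```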


Another example is ActiveMQ integration. Imagine you’re interested in receiving events from this messaging system. You just need to specify a topic or queue name, like this:
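For instance, with akka-camel it could look like this (the topic name `invoices` is a made-up example):

```scala
import akka.camel.{CamelMessage, Consumer}

// Hypothetical consumer subscribed to an ActiveMQ topic.
class InvoiceEventConsumer extends Consumer {
  def endpointUri = "activemq:topic:invoices" // topic name is an assumption

  def receive = {
    case msg: CamelMessage => println("Invoice event: " + msg.body)
  }
}
```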


The Akka team has written a really good article on how to integrate Akka and Camel, and Camel has a rich set of components that can fulfill your various integration needs.


Let’s see how all of these components can work together.

Example 1:

So, imagine we have a 3rd-party integration microservice and a web app (which could be another microservice or a monolith, it doesn’t matter). A very typical task is to create a new user in a different system right after they sign up in our web app.

The web app doesn’t know anything about other microservices, it just sends a message about the new user and other services can listen for it. In our case, the microservice reacts to this message by creating a new user in some 3rd party system.

Each microservice can also have its own storage for integration-specific user data. This technique is called Polyglot Persistence. The web app calls the microservice’s HTTP API to fetch data from that storage (to show a page with integration details, for example).

One feature of this design is very low coupling. The web app and the microservice have separate storage for user data and separate back-end code reacting to user creation. There is no explicit call to create a user in the microservice; it’s all done through a Publisher / Subscriber pattern using a message queue. The only remaining point of coupling is the front-end application. Of course, we could go even further and decouple that as well, but usually it’s not worth it, and it’s ok to have this type of coupling.
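To make the flow concrete, here is a sketch of what the publishing side could look like with akka-camel (the topic name and message shape are assumptions, not our actual code):

```scala
import akka.actor.{ActorSystem, Props}
import akka.camel.{Oneway, Producer}

case class UserCreated(id: String, email: String)

// Hypothetical fire-and-forget publisher: the web app sends the event
// to a topic and doesn't wait for a reply; subscribers react on their own.
class UserEventPublisher extends Producer with Oneway {
  def endpointUri = "activemq:topic:users" // topic name is an assumption
}

val system = ActorSystem("web-app")
val publisher = system.actorOf(Props[UserEventPublisher], name = "user-events")
publisher ! UserCreated("42", "user@example.com")
```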

Example 2:

This example is more complicated – it shows how invoice creation can be processed in a microservice architecture.

First, the webhook is received by the Billing Service. It saves the required information and sends a message about the newly created invoice. Again, it doesn’t know anything about the web app or other services.

We have two services reacting to this message. Our Mail Service uses the incoming message to send an email to the client about the new invoice. Our Activity Stream Service can use a CQRS (Command Query Responsibility Segregation) pattern to keep a history of all events and notifications and provide a digest through an API endpoint. So all incoming messages will be Commands and the API endpoint runs Queries. The web app can use WebSockets or HTTP polling to have near real-time updates.
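The command/query split itself doesn’t depend on any framework; as a plain-Scala illustration (all names here are hypothetical), commands append to an event log while queries only read from it:

```scala
// Hypothetical sketch of the CQRS idea behind the Activity Stream Service.
sealed trait Event
case class InvoiceCreated(id: String, amount: BigDecimal) extends Event

class ActivityStream {
  private var events = List.empty[Event] // write side: append-only log

  // Command side: incoming messages mutate state and return nothing
  def handle(event: Event): Unit =
    events = event :: events

  // Query side: the API endpoint reads state and never mutates it
  def digest(limit: Int): List[Event] =
    events.take(limit)
}

val stream = new ActivityStream
stream.handle(InvoiceCreated("inv-1", BigDecimal(100)))
stream.handle(InvoiceCreated("inv-2", BigDecimal(250)))
println(stream.digest(10).length) // prints 2
```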

Both examples show that an HTTP API plus a message queue can be enough to build genuinely distributed, complex architectures.

By the way…

One thing to take note of at this point: both HTTP APIs and messaging queues are language-agnostic. You can have a Python app talking to Scala, Erlang and Node.js microservices… Or any other language you want! Use the right tools for the right job.

Going Full Stack

As you can see in the examples above, we use a direct connection between front-end code and microservices APIs. What’s wrong with that? Nothing! But it’s not typical.

Often, we have to put an aggregation layer between them: the front-end code calls the web app’s API, which calls a microservice’s API, and so on. It’s a longer path, but sometimes it’s required – for example, if you need to combine data from multiple microservices into one reply.
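A sketch of such an aggregation step using Scala futures (the two service calls are stubbed in-memory here; all names are hypothetical):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Stubs standing in for HTTP calls to two different microservices.
def fetchProfile(userId: String): Future[String] =
  Future(s"profile-of-$userId")
def fetchInvoices(userId: String): Future[List[String]] =
  Future(List("inv-1", "inv-2"))

// The aggregation layer starts both calls, then combines the replies.
def combinedView(userId: String): Future[(String, List[String])] = {
  val profileF  = fetchProfile(userId)  // both futures are started eagerly,
  val invoicesF = fetchInvoices(userId) // so the calls run concurrently
  for {
    profile  <- profileF
    invoices <- invoicesF
  } yield (profile, invoices)
}

println(Await.result(combinedView("42"), 5.seconds))
```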

It’s perfectly ok to host your web app at domain.com and your microservice API at service1.domain.com/api, and use CORS to do AJAX calls between them. Here is an example of a production-ready implementation of CORS support in Spray:

package co.bench.api.spray

import spray.http.HttpHeaders._
import spray.http.HttpMethods._
import spray.http.{HttpOrigin, SomeOrigins, StatusCodes}
import spray.routing.{Directives, Route}

trait CORSSupport extends Directives {

  private val CORSHeaders = List(
    `Access-Control-Allow-Methods`(GET, POST, PUT, DELETE, OPTIONS),
    `Access-Control-Allow-Headers`("Origin, X-Requested-With, Content-Type, Accept, Accept-Encoding, Accept-Language, Host, Referer, User-Agent")
  )

  def respondWithCORS(origin: String)(routes: => Route) = {
    val originHeader = `Access-Control-Allow-Origin`(SomeOrigins(Seq(HttpOrigin(origin))))

    respondWithHeaders(originHeader :: CORSHeaders) {
      routes ~ options { complete(StatusCodes.OK) }
    }
  }
}

If you have any issues with CORS (like supporting IE9 and below), you can just put something like Nginx in front as a proxy and serve everything under the same origin instead (for example, domain.com/api/service1 proxied to the microservice).

To be continued...

Are you ready to implement your first microservice? If you’re excited and ready to start, let me warn you: you need really good DevOps tooling to actually create and support microservices. Deployments, monitoring, versioning, common libraries, and QA – all of these become more complicated than with a monolith.

To find out how to make it as easy as possible, wait for the next article, where we’ll share our experience maintaining our microservices architecture!