How to keep your microservices straight

Here is a post about… microservices and Docker, what else? In this post I want to illustrate the problems I have recently had at work dealing, on my local environment, with all the microservices that form part of our infrastructure.

There is a lot of literature about microservices and the challenges of adapting Production environments to handle them. However, it is not that common to find information about the struggles developers face when working with microservices on their local environments.


The “localhost” problem

Back in the old days, you would spend several weeks or months developing a monolithic application. The local environment was configured to run that application and it was rarely necessary to change any setting. In this sense, the advent of microservices has been a game changer. As different microservices are developed with different technologies, the local environment needs to be configured differently for each service. So if you have to jump from one service to another frequently, you will also have to keep changing the local configuration.

I recently went through that experience myself when I was asked to fix a bug in an “old” microservice (and by “old” I mean a microservice developed a few months ago). Since that service went into Production, I had been involved in the development of some other microservices. So, understandably, when I switched context to work on fixing that bug, I felt a bit lost. All these questions came to my mind:

Is this a Java or a Scala application? How is this service run? What is the deployable artefact? What dependencies does it have: a database, a message broker, other services, storage on AWS S3? What are the credentials to connect to all these systems? What environment variables need to be set? In one sentence, how can I get this service to run on my local environment so that I can try to reproduce the bug and fix it?

Local environments get messy over time, settings change, software is upgraded and, as a result, an old service may not be ready to run after a few weeks of neglect. I am sure this is a familiar experience to all those who need to switch context frequently among different services.

Furthermore, the way an application runs locally is, more often than not, different from the way it is executed on Production. On the local environment, applications are run from the IDE or with a build tool like Maven or sbt, whereas on Production an executable artefact (such as a jar file) is deployed. This difference in the way the application is run can have important consequences.

Another potential source of issues comes from using “localhost” as the hostname. As convenient as it is, having different services talking to each other over “localhost” hides the complexities of “over the network” communication. In fact, I have seen a few newbies wondering why they could not reach an application after deploying it configured to listen on “localhost”.
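
To make that last point concrete, here is a minimal sketch (the jar name and the --http.host/--http.port flags are placeholders for whatever your framework uses): an application bound to the loopback interface is only reachable from the machine it runs on.

# hypothetical example: binding to 127.0.0.1 makes the app invisible to other machines
java -jar service.jar --http.host=127.0.0.1 --http.port=8080
curl http://localhost:8080/health      # works from the same machine
curl http://<server-ip>:8080/health    # fails from any other machine: connection refused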


The “Docker” solution

To get around the “localhost” problem, I have found Docker to be quite helpful.

The first advantage of “dockerizing” a microservice, and this may not seem obvious, is that having a service configured to run on Docker constitutes excellent documentation of the service itself. In many cases, I end up running the service directly on my laptop but, if in doubt about how to do it, I just need to take a look at the Dockerfile or the Docker Compose script.

The second advantage is the capability to spin up the Docker container and, voilà, the service with all its dependencies is ready to use.
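
For reference, with the Compose file shown later in this post, a typical session boils down to a couple of commands:

# build the images and start every service plus its dependencies in the background
docker-compose up -d --build

# follow the logs of a single service, e.g. 'tickets'
docker-compose logs -f tickets

# tear everything down when finished
docker-compose down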

I normally use a mixed approach, running the service directly on my laptop and all its dependencies (databases, message broker, etc.) in Docker containers. This way, and here comes the third advantage, I can rapidly recreate databases at will and modify them without fear of breaking anything.
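
With the Compose file shown below, that mixed approach looks roughly like this (the sbt invocation stands in for however the service under development is actually run):

# start only the dependencies of the 'tickets' service in Docker
docker-compose up -d mysql_tickets activemq

# then run the service itself directly on the laptop, e.g. with sbt
cd tickets && sbt run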

Moreover, and this is the fourth advantage, “dockerizing” a service makes it easy to mirror the Production environment accurately. For instance, on one occasion I came across a database-related bug that I failed to reproduce on my local environment. The database was MySQL and, after spending a lot of time getting my local database into a state similar to Production’s, the bug remained elusive. Eventually, I managed to find the cause of the problem: my local MySQL installation was a later version than the one on Production and, as a result, the default value of the system variable explicit_defaults_for_timestamp was different. Once I noticed this, I set up a MySQL server in a Docker container with the same configuration as Production and the bug came to the surface. With the peace of mind that comes from being able to replicate the bug, I could fix it quickly.
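
The fix boiled down to something along these lines (the tag is an assumption; the point is pinning the same MySQL version as Production and mounting the Production-like configuration):

# run MySQL with the same version and configuration as Production
docker run -d --name mysql_prod_like \
  -p 3306:3306 \
  -e MYSQL_ROOT_PASSWORD=admin \
  -v "$PWD/volumes/mysql_conf":/etc/mysql/conf.d \
  mysql:5.6   # hypothetical tag: use whatever version Production actually runs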

Another real example on this topic is a bug I came across when using Swagger. The issue was that the “Model/Model Schema” section of the Swagger UI was not displayed on Staging whereas it was on my local environment. My initial investigation assumed that it was an environment-related issue. However, the real problem turned out to be a clash among some JSON dependencies. This issue did not reveal itself on the local environment because on my laptop I normally run my applications from the IDE or with sbt, whereas on Staging the application runs as a fat jar created by sbt-assembly. This plugin is great, but it can cause hellish situations as it extracts the content of all the jars and puts it into a single one, thereby removing the namespace protection provided by the individual jar names. The way I was able to find the issue was by running the application inside a Docker container, exactly as it would run on Staging and Production. Obviously, I could have reproduced it locally by running the application from the fat jar, but to do that I would have needed to know the cause of the problem I was investigating!
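
In hindsight, the issue could also have been reproduced without Docker by running the service exactly as Staging does, that is, from the fat jar (the artefact path below follows the sbt-assembly defaults and is only illustrative):

# build the fat jar with sbt-assembly and run it the way Staging/Production do
sbt assembly
java -jar target/scala-2.12/search-assembly-1.0.1.jar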

An example

Now let’s take a look at an example. Let’s imagine a fictitious airline, EasyRide. EasyRide’s systems comprise the following services:

  • search, which allows customers to search for flights
  • checkout, for customers to pay for their flights
  • tickets, for customers to generate and print out their tickets
  • exchanges, for customers to make changes to their flights

Let’s suppose that:

  • exchanges depends on tickets
  • checkout and tickets make use of a relational database (MySQL)
  • tickets sends notifications to a message broker (ActiveMQ) and stores the tickets on AWS S3
  • search uses a key-value store (Redis) to cache the search results
  • all 4 services connect to different third-party applications

With all this in mind, the Docker Compose script would look like this:

version: '2'
services:
  redis:
    image: redis:latest
    ports:
      - "6379"
    volumes:
      - ./volumes/redis:/data

  mysql_checkout:
    build: ./mysql
    ports:
      - "3306:3306"
    volumes:
      - ./volumes/mysql_checkout:/var/lib/mysql
      - ./volumes/mysql_conf:/etc/mysql/conf.d
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=checkout
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=admin

  mysql_tickets:
    build: ./mysql
    ports:
      - "3307:3306"
    volumes:
      - ./volumes/mysql_tickets:/var/lib/mysql
      - ./volumes/mysql_conf:/etc/mysql/conf.d
    environment:
      - MYSQL_ROOT_PASSWORD=admin
      - MYSQL_DATABASE=tickets
      - MYSQL_USER=admin
      - MYSQL_PASSWORD=admin

  activemq:
    image: rmohr/activemq:5.13.1
    ports:
      - "61616:61616"
      - "8161:8161"
    volumes:
      - ./volumes/activemq:/var/activemq/data

  search:
    build: ./search
    ports:
      - "8081:8080"
      - "9010:9010"
      - "5005:5005"
    links:
      - redis
    environment:
      THIRDPARTY_HOST1: http://10.200.10.1:9002
      REDIS_HOST: redis
      REDIS_PORT: 6379
      DOCKER_IP: 192.168.99.100
      RMI_PORT: 9010
      DEBUG_PORT: 5005
      APP_ARTIFACT: search-assembly-1.0.1.jar

  checkout:
    build: ./checkout
    ports:
      - "8082:8080"
      - "9011:9010"
      - "5006:5005"
    links:
      - mysql_checkout
    environment:
      THIRDPARTY_HOST2: http://10.200.10.1:9002
      MYSQL_HOST: mysql_checkout
      MYSQL_USER: admin
      DOCKER_IP: 192.168.99.100
      RMI_PORT: 9010
      DEBUG_PORT: 5005
      APP_ARTIFACT: checkout-assembly-1.0.0.jar

  tickets:
    build: ./tickets
    ports:
      - "8083:8080"
      - "9012:9010"
      - "5007:5005"
    links:
      - mysql_tickets
      - activemq
    environment:
      THIRDPARTY_HOST3: http://10.200.10.1:9002
      DOCKER_IP: 192.168.99.100
      RMI_PORT: 9010
      DEBUG_PORT: 5005
      AWS_ACCESS_KEY: ABCD49DRF02MD5JDK
      AWS_SECRET_KEY: ed7JKKmmK4DNjj32kDJH
      S3_REGION: eu-west-1
      MYSQL_HOST: mysql_tickets
      MYSQL_USER: admin
      ACTIVEMQ_PRIMARY_HOST: activemq
      APP_ARTIFACT: tickets.jar

  exchanges:
    build: ./exchanges
    ports:
      - "8084:8080"
      - "9013:9010"
      - "5008:5005"
    links:
      - tickets
    environment:
      THIRDPARTY_HOST4: http://10.200.10.1:9002
      TICKET_HOST: tickets
      TICKET_PORT: 8080
      DOCKER_IP: 192.168.99.100
      RMI_PORT: 9010
      DEBUG_PORT: 5005


The services ‘redis’ and ‘activemq’ run straight from an existing image, whereas the rest are built from a Dockerfile.

The MySQL databases are represented by two different services, ‘mysql_checkout’ and ‘mysql_tickets’. This is the recommended approach, as opposed to having both services, ‘checkout’ and ‘tickets’, share the same database.

The volumes of these four services, ‘redis’, ‘activemq’, ‘mysql_checkout’ and ‘mysql_tickets’, are mapped to local folders so that any data stored in them is persisted even after stopping or removing the Docker containers. Therefore, if for any reason you need to recreate one of those services, the newly generated service will be provisioned with the data existing in the local folder. Speaking of provisioning the databases, the schema of the MySQL databases is created with scripts managed by Liquibase. It is also worth noting that, if we want to expose both MySQL databases to the outside world, they must be published on different ports. In the above script, ‘mysql_checkout’ and ‘mysql_tickets’ are exposed on ports 3306 and 3307 respectively.
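
For instance, reaching each database from the host, and provisioning its schema, looks roughly like this (the Liquibase changelog path is a placeholder, and the exact Liquibase invocation depends on how it is installed and where the JDBC driver lives):

# connect to each MySQL instance through its published host port
mysql -h 127.0.0.1 -P 3306 -u admin -padmin checkout
mysql -h 127.0.0.1 -P 3307 -u admin -padmin tickets

# apply the schema scripts with Liquibase (hypothetical changelog location)
liquibase --url=jdbc:mysql://127.0.0.1:3307/tickets \
          --username=admin --password=admin \
          --changeLogFile=db/changelog.xml update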

Similarly, the 4 services, ‘search’, ‘checkout’, ‘tickets’ and ‘exchanges’, are exposed on different ports, ranging from 8081 to 8084.

The Dockerfile used to create MySQL services is:

FROM mysql:latest
RUN deluser mysql
RUN useradd mysql

So basically, it is just the original image with the re-creation of the ‘mysql’ user. This was necessary to get around an issue I had on my Mac.

I also map my local folder “./volumes/mysql_conf” to the container’s folder “/etc/mysql/conf.d” in order to configure the MySQL server with the same settings as Production (as discussed previously with regard to the property explicit_defaults_for_timestamp).
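
As a sketch, that shared folder just needs to contain a .cnf file with the settings to pin down (the value below is an assumption; it should match whatever Production uses):

# drop a custom config file into the folder mounted at /etc/mysql/conf.d
cat > ./volumes/mysql_conf/custom.cnf <<'EOF'
[mysqld]
# match Production's behaviour for TIMESTAMP columns
explicit_defaults_for_timestamp = 0
EOF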

The Dockerfile of the Java services is:

FROM openjdk:8-jdk
# APP_ARTIFACT is needed at build time by the COPY instruction below, so it is
# declared as a build argument (assumption: it is supplied via the 'args' key
# under 'build' in the Compose file, in addition to the runtime environment
# variable consumed by entrypoint.sh)
ARG APP_ARTIFACT
WORKDIR /app
COPY $APP_ARTIFACT .
COPY entrypoint.sh .
RUN chmod +x ./entrypoint.sh
EXPOSE 8080 9010 5005
ENTRYPOINT ["./entrypoint.sh"]

and the content of ‘entrypoint.sh’:

#!/bin/bash

# expose JMX over RMI (no auth/SSL: local development only), enable remote
# debugging via JDWP, and run the deployable artefact
java \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=$RMI_PORT \
-Dcom.sun.management.jmxremote.local.only=false \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=$DOCKER_IP \
-Dcom.sun.management.jmxremote.rmi.port=$RMI_PORT \
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=$DEBUG_PORT \
-jar $APP_ARTIFACT

The above snippet shows the java command with the flags to enable remote JMX over RMI and remote debugging using JDWP. This allows tools like JConsole or VisualVM to connect to the service running on Docker, and the IDE to debug said service. The property java.rmi.server.hostname must be set to the externally accessible IP address of my Docker virtual machine (if not explicitly set, the RMI server will expose the Docker-assigned internal IP address).
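
For example, for the ‘search’ service (whose host and container ports happen to match), attaching the tools looks like this:

# JMX: point JConsole at the Docker VM's IP and the published RMI port
jconsole 192.168.99.100:9010

# remote debugging: create a "Remote JVM Debug" run configuration in the IDE
# targeting host 192.168.99.100 and port 5005 (the JDWP port published for 'search')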

The Docker Compose file also contains a few environment variables of the type

THIRDPARTY_HOST<n>

These variables represent external dependencies on third-party services. In the script, all of them have the same value, http://10.200.10.1:9002. This URL corresponds to a local network interface that my WireMock server listens on. I do not want to rely on the availability of external services to run my own services, which is why I have a local server standing in for them. This local server is configured to serve different types of responses and allows me to simulate many different scenarios. By the way, I could have set it up in a Docker container as well, but I prefer to run it directly on my laptop so that I can make changes quickly. In order for the services running in Docker containers to be able to hit my WireMock server, I need to assign an IP alias to my Mac with the command

sudo ifconfig lo0 alias 10.200.10.1/24
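
For completeness, this is roughly how the stand-in server is started on that interface (the standalone jar version and the stub folder are assumptions):

# start WireMock on the aliased interface so the Docker containers can reach it
java -jar wiremock-standalone-2.27.2.jar \
  --bind-address 10.200.10.1 \
  --port 9002 \
  --root-dir ./wiremock   # hypothetical folder holding the stub mappings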


Conclusion

I hope this post is of help to all those struggling to keep their microservices straight on their local environment. Using Docker this way, even if only as a way to document how to run a microservice, is very helpful and makes switching context between services a much easier experience. Also, being able to recreate databases, message brokers, etc. at will and to mirror the Production configuration are undeniable advantages.
