Service discovery.

Service discovery allows you to specify dependencies between two or more services without hardcoding their addresses and ports. Let's say we have two services:

  • scores-backend, which provides a REST API on port 80 inside the container (exposed as 32868 outside the container)
  • scores-frontend, which displays data provided by the scores-backend API.
$ armada list
Name             Address              ID            Status   Tags
armada           192.168.3.141:49153  ae148a2d3a1a  passing  -
scores-backend   192.168.3.91:32868   e93f803bbea8  passing  ['env:dev']
scores-frontend  192.168.3.91:32870   7f50c2d2f285  passing  ['env:dev']

Without service discovery.

In this case, our scores-frontend service would have to know on which address and port it can find scores-backend. We could hardcode it, but that is not good practice. What happens when the backend service's port changes? You could always run the service on the same port, but what happens if that port is already used by another service? You would have to change your configuration. As the number of services grows, you have to remember which service runs on which port and keep it well documented.
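
For illustration only (a hypothetical snippet, not Armada code, using the address from the armada list output above), the hardcoded approach would look something like this and would have to be updated by hand whenever scores-backend moves to a different ship or port:

import requests

# Hardcoded address of scores-backend: breaks whenever the backend's
# ship or external port changes. The /scores endpoint is illustrative.
SCORES_BACKEND_URL = 'http://192.168.3.91:32868'
scores = requests.get(SCORES_BACKEND_URL + '/scores').json()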

With service discovery.

Armada offers a solution to this problem. In our scores-frontend code, we simply add one entry to our supervisor configuration.

scores_frontend.conf
[program:require_scores_backend]
command=microservice require 2000 scores-backend

microservice require finds an armada service named scores-backend with the 'dev' env, then maps its address to localhost:2000 inside the container. From now on, we can deploy scores-backend on any ship within the armada cluster and always use localhost:2000 for API requests.
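
A frontend request could then look like this (a minimal sketch; the requests library and the /scores endpoint are only illustrative, not part of Armada):

import requests

# The backend is reachable at a fixed local address regardless of which
# ship it is actually deployed on.
response = requests.get('http://localhost:2000/scores')
scores = response.json()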

Configurable requirements.

In our scores-frontend code, we can place a service_discovery.json file containing the configuration in the config path.

service_discovery.json
{ "scores-backend":{"port":"2000"} }

This has the same effect as the previous example. In the configuration file you can also specify multiple instances of the required service with different "env" or "app_id" values, with a config like this:

service_discovery.json
{ "scores-backend": [ {"port":"2000", "app_id": "app_1"}, {"port":"2001", "app_id": "app_2"} ] }

Hint.

  • If we run our frontend with the dev/legacy/john env, microservice require will first try to find a service with the matching dev/legacy/john env. Should that fail, it will try to find a service with the dev/legacy env and finally with the dev env. Note that it will not match a service without a specified env (see the sketch after this list).
  • In the python package armada we offer an easy way to get the address of a required service inside your container:
    from armada import service_discovery
    address = service_discovery.get_address('scores-backend')
    This method returns the address of the specified microservice; you can also specify the required 'env' and 'app_id'.
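
The env matching described in the first hint can be pictured as a simple prefix fallback. Below is a minimal sketch of that behaviour (not Armada's actual code); it assumes the cluster's services are available as a list of dicts:

def env_candidates(env):
    # For 'dev/legacy/john' yields 'dev/legacy/john', 'dev/legacy', 'dev'.
    parts = env.split('/')
    for i in range(len(parts), 0, -1):
        yield '/'.join(parts[:i])

def find_matching_services(services, name, env):
    # services: e.g. [{'name': 'scores-backend', 'env': 'dev', 'address': '192.168.3.91:32868'}, ...]
    for candidate_env in env_candidates(env):
        matches = [s for s in services
                   if s['name'] == name and s.get('env') == candidate_env]
        if matches:
            return matches
    # A service without a specified env is deliberately never matched.
    return []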

Load balancing.

  • If there are two or more scores-backend services with matching envs, microservice require will automatically balance the load between them (see the sketch below).
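
A hedged sketch of that behaviour (Armada's actual balancing strategy may differ; random choice and the second backend entry are used here only to illustrate spreading requests across matching instances):

import random

def pick_instance(matching_services):
    # Spread requests across all instances that matched the required name and env.
    return random.choice(matching_services)

backends = [
    {'name': 'scores-backend', 'env': 'dev', 'address': '192.168.3.91:32868'},
    {'name': 'scores-backend', 'env': 'dev', 'address': '192.168.3.92:31007'},
]
print(pick_instance(backends)['address'])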