Service configuration.

Development and production environments for the services can be very different. There may be differences in database addresses, domains used, verbosity of service output, etc. Presumably you keep those differences for your services in some kind of configuration file or files. Keeping configuration separate from code is a well-known pattern, and we strongly recommend following it.

Legacy services.

A legacy service in this context means a service that is not yet suited to run on Armada.

If the service that you want to armadize chooses its configuration based on environment variables, you can set them at the moment of running the service:

$ armada run coffee-counter -e "DATABASE_HOST=mysql.initech.com" -e "DATABASE_USER=peter" -e "DATABASE_PASSWORD=trustno1"
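For illustration, here is a minimal sketch of how such a service might pick up those variables in Python. The variable names match the command above; the fallback defaults are just placeholders:

import os

# Configuration injected by `armada run ... -e "NAME=value"`.
DATABASE_HOST = os.environ.get('DATABASE_HOST', 'localhost')
DATABASE_USER = os.environ.get('DATABASE_USER', 'root')
DATABASE_PASSWORD = os.environ.get('DATABASE_PASSWORD', '')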

If the service relies on a configuration file or files being placed under some specific path, you can "insert" the file into the service container like this:

$ armada run coffee-counter -v /etc/coffee-counter/users.txt

That file (it can also be a whole directory) will be shared by the ship with the service running inside the container.
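The service can then read the mounted file from its usual path inside the container. A minimal sketch, assuming users.txt simply lists one user per line (the actual format is up to your service):

# Read the file mounted by `armada run ... -v /etc/coffee-counter/users.txt`.
with open('/etc/coffee-counter/users.txt') as f:
    users = [line.strip() for line in f if line.strip()]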

Hermes. The Armada solution for service configuration.

The previously mentioned methods can be used to quickly adapt your service to the Armada platform, but none of them is the preferred way.

Let's look at the preferred Armada way of doing things by trying to modify the coffee-counter service a little bit.
For example, we may want to set a limit on the number of coffees drunk in one go by each person. It can be as high as 5 coffees in the development environment (called dev), and as low as 2 in the production environment (called production).

The preferred Armada way in its simplest form consists of these steps:

1. Gather all your configuration data for all the environments in files under one directory, e.g. config.

config/
    dev/
    production/
    qa/
    ...

Under each environment directory you can place any number of files. Their format doesn't matter to Armada. We'll use the JSON format in our case:

coffee-counter$ mkdir -p config/dev config/production
coffee-counter$ echo '{ "coffee_limit": 5 }' > config/dev/config.json
coffee-counter$ echo '{ "coffee_limit": 2 }' > config/production/config.json

2. Provide a way for the configuration data to be sent to the ship where the service will be run. It must be placed under the directory /etc/opt/SERVICE_NAME-config/, where SERVICE_NAME is the name of your service image, e.g. coffee-counter.
This may seem like the hard part, but in fact it can easily be achieved by running the Armada service called courier. Its purpose is exactly this: to move configuration files from one place (a git repository, a directory on some server, etc.) to Armada ships. Whether the ships are local or remote, courier can handle it.
More details about courier can be found here.

For the purpose of our guide we can simply link the configuration directory that we've just created to the right destination path:

coffee-counter$ ln -s `pwd`/config /etc/opt/coffee-counter-config

We're using `pwd` to get our current path, because we want to use an absolute path for the symbolic link.

3. Read the configuration in your code and adjust its behaviour depending on the configuration data.

coffee-counter$ vim src/coffee-counter.py
coffee-counter.py
...
import hermes
...

class Drink(object):
    def POST(self, user, count):
        config = hermes.get_config('config.json')
        coffee_limit = config.get('coffee_limit', 1)
        if int(count) > coffee_limit:
            return "You can't drink that much! The limit is {0}.\n".format(coffee_limit)
        coffees[user] += int(count)
        return "{0}'s coffee count is now {1}.\n".format(user, coffees[user])
...

The added lines are the hermes import and the coffee-limit check. What happens here?
We are using the function get_config() from the hermes library. It looks for the specified file in the configuration directory and returns its contents. If the file has a .json extension (which is the case here), it also parses the data for us.
The hermes library is part of Armada, and the simplest way to use it is to base your service image on microservice_python instead of microservice:

coffee-counter$ vim Dockerfile
Dockerfile
- FROM dockyard.armada.sh/microservice
+ FROM dockyard.armada.sh/microservice_python
- RUN apt-get install -y python python-dev python-pip
- RUN pip install -U web.py

Similar libraries and base images may already be available for the language/framework of your choice. If there is none, it's quite easy to write your own get_config() function once you know how hermes works.

The idea behind hermes is that when Armada runs your service with the environment specified by --env, it looks for the service configuration data under the directory /etc/opt/SERVICE_NAME-config/ENVIRONMENT on the ship.
If such a directory exists, Armada mounts it inside the service container and sets an environment variable named CONFIG_PATH that points to it. So all the get_config() function has to do is read the CONFIG_PATH environment variable, check if the requested file exists there, and return its contents.

In reality this requires only slightly more work, as CONFIG_PATH may consist of a series of paths separated by colons. Your function should search for the configuration file in each of them, in that order. It is analogous to how the shell searches the PATH environment variable when executing programs.
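If you do need to roll your own get_config(), a minimal sketch could look like this. It is a simplified stand-in for the real hermes library, covering only the CONFIG_PATH behaviour described above:

import json
import os

def get_config(filename):
    # CONFIG_PATH may hold several directories separated by colons;
    # search them in order, like the shell searches PATH.
    for directory in os.environ.get('CONFIG_PATH', '').split(':'):
        path = os.path.join(directory, filename)
        if not os.path.isfile(path):
            continue
        with open(path) as f:
            # Parse .json files, return raw contents otherwise.
            return json.load(f) if filename.endswith('.json') else f.read()
    return None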
More details about hermes can be found here.

4. Build and run the service.

Let's build the modified service.

coffee-counter$ armada build coffee-counter
Pulling repository dockyard.armada.sh/microservice_python
5612e0d94ce7: Download complete
511136ea3c5a: Download complete
...(skipped)...
Removing intermediate container b4ccee7cb473
Successfully built 61deaeef72b8

Now we can check if switching the environment really works:

$ armada run coffee-counter --env production -d local
Running microservice coffee-counter from dockyard: (alias: local) locally...
Service is running in container ca9ab262e9b2 available at addresses:
  192.168.3.168:49241 (80/tcp)
$ curl -X POST 192.168.3.168:49241/drink/Peter/4
You can't drink that much! The limit is 2.
$ armada run coffee-counter --env dev -d local
Running microservice coffee-counter from dockyard: (alias: local) locally...
Service is running in container 6d826bf3eb92 available at addresses:
  192.168.3.168:49243 (80/tcp)
$ curl -X POST 192.168.3.168:49243/drink/Peter/4
Peter's coffee count is now 4.

Seems legit!

Feel free to experiment further. For example, you can try adding persistent storage to the service, so that the coffee count data is not lost after a service restart. You could use a database whose address and credentials are stored in a separate configuration file and differ among environments.
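For example, you could keep a hypothetical database.json per environment and read it with the same hermes call. The file name and keys below are only an illustration:

coffee-counter$ echo '{ "db_host": "mysql.initech.com", "db_user": "peter" }' > config/production/database.json

Then, in the service code:

db_config = hermes.get_config('database.json')
# db_config['db_host'] and db_config['db_user'] can then be passed to
# whatever database client the service uses.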

Additional info about hermes.

As we've shown, Armada offers a convenient convention for managing configuration data for your services. When writing new services, following this pattern can bring many benefits. Separation of configuration from code usually means code that is easier to understand and maintain. Changes to the configuration can be deployed independently of deploying the code. Running services in new environments is also much easier. Keeping configuration data in one place means we can get rid of mysterious configuration files placed only on production servers, etc.

If you just want to quickly adapt your existing service to the Armada platform, you can try sticking with the intermediate solutions described here. Keep in mind that using the MICROSERVICE_ENV environment variable is recommended, as it's understood by all Armada components, such as magellan and the armada command itself.
Also, don't forget to consider taking advantage of the possibilities that Armada brings. It may prove to be well worth the initial work.
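As a quick illustration, a service can branch on that variable directly. This is a minimal sketch, assuming the value passed to --env ends up in MICROSERVICE_ENV inside the container; the verbosity tweak is just an example:

import logging
import os

# The environment the service was run with, e.g. "dev" or "production".
env = os.environ.get('MICROSERVICE_ENV', 'dev')
logging.basicConfig(level=logging.WARNING if env == 'production' else logging.DEBUG)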