Over the years I have watched more and more people embrace the 12 Factor App manifesto and start applying the principles described there. This has produced applications that are significantly easier to deploy and manage. However, practical examples of applying these 12 factors were still a rare sight on the web.
12 Fractured Apps and Docker

While working with Docker, the benefits of the 12 Factor App (12FA) became much more tangible to me. For example, 12FA recommends writing logs to standard output and treating them as a single event stream. Have you ever used the docker logs command? That is 12FA in action!

12FA also recommends using environment variables to configure the application. Docker makes this trivial by letting you set environment variables programmatically when creating containers.

Docker and the 12 Factor App are a killer combination that offers a glimpse of how applications will be designed and deployed in the future.

Docker also partly simplifies moving legacy applications into containers. I say "partly" because in the end people tend to treat Docker containers like lightweight VMs, which is how 2 GB container images built on top of a full Linux distribution come into being.

Unfortunately, the legacy applications you may be working with right now have many shortcomings, especially around the startup process. Many applications, even modern ones, have too many dependencies and cannot start cleanly. An application that needs access to an external database usually initiates the connection during startup. But if that database is unavailable, even temporarily, many applications simply fail to start. If you are lucky, you get an error message with enough detail to help you debug the problem.

Many applications packed into Docker carry small defects of this kind. They are more like microcracks: the application keeps working, but they can unleash fire and brimstone when you have to operate it.

This kind of application behavior forces people into complex deployment processes and drives the adoption of tools like Puppet or Ansible. Configuration management tools help solve various problems, such as an unavailable database: they start the database the application depends on before starting the application itself. That is like putting a band-aid on a laceration. The application should simply retry the database connection, with some kind of classification of the errors returned and, of course, logging of those errors. In that case there are only two outcomes: either the database comes back online, or your company simply goes bankrupt.

Another problem for applications moved to Docker is configuration files. Many applications, even modern ones, still rely on configuration files stored locally on disk. The most commonly applied solution is to build additional container images that bake the configuration files into the image.

Do not do it.


If you choose that path, you eventually end up with an endless number of container images named something like:

  • application-v2–prod-01022015
  • application-v2-dev-02272015

Soon you will be looking for tools just to manage that many images.

Moving to Docker has given people the false impression that they no longer need configuration management of any kind. I am inclined to agree: there is no need for Puppet, Chef, or Ansible when building images. But there is still a need to manage configuration settings at runtime.

Similar logic is used to justify abandoning init systems in favor of the docker run command.

To compensate for the lack of configuration management tools and a robust init system, Docker users turn to shell scripts that paper over application shortcomings in bootstrapping and the startup process.

Once you move everything to Docker and refuse to use any tool that lacks a Docker logo, you paint yourself into a corner.

The application


Let's now turn to an example application that demonstrates several common tasks a typical application performs at startup. The example does the following during startup:

  • Loads configuration settings from a JSON-encoded config file
  • Checks access to a working directory
  • Establishes a connection to an external MySQL database

package main

import (
    "database/sql"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net"
    "os"

    _ "github.com/go-sql-driver/mysql"
)

var (
    config Config
    db     *sql.DB
)

type Config struct {
    DataDir string `json:"datadir"`

    // Database settings.
    Host     string `json:"host"`
    Port     string `json:"port"`
    Username string `json:"username"`
    Password string `json:"password"`
    Database string `json:"database"`
}

func main() {
    log.Println("Starting application...")
    // Load configuration settings.
    data, err := ioutil.ReadFile("/etc/config.json")
    if err != nil {
        log.Fatal(err)
    }
    if err := json.Unmarshal(data, &config); err != nil {
        log.Fatal(err)
    }

    // Use working directory.
    _, err = os.Stat(config.DataDir)
    if err != nil {
        log.Fatal(err)
    }
    // Connect to database.
    hostPort := net.JoinHostPort(config.Host, config.Port)
    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?timeout=30s",
        config.Username, config.Password, hostPort, config.Database)

    db, err = sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }

    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
}

The complete source code is available on GitHub.

As you can see, there is nothing special here, but if you look closely you will notice that this application only starts under specific conditions. If the configuration file or the working directory is missing, or the database is unavailable during startup, the application will not start.

Let's deploy the example application with Docker and examine it.

Build the application binary using go build:

$ GOOS=linux go build -o app 

Next, define the container image using the following Dockerfile:

FROM scratch
MAINTAINER Kelsey Hightower <kelsey.hightower@gmail.com>
COPY app /app
ENTRYPOINT ["/app"]

All I do here is copy the application binary into place. This Dockerfile uses the scratch base image, which produces the minimal Docker container image suitable for deploying our application.

Build the image using the docker build command:

$ docker build -t app:v1 .

Finally, create a container from the app:v1 image using the docker run command:

$ docker run --rm app:v1
2015/12/13 04:00:34 Starting application...
2015/12/13 04:00:34 open /etc/config.json: no such file or directory

Let the pain begin! Practically right at startup I hit the first problem: the application does not start because the /etc/config.json configuration file is missing. I can fix that by mounting a configuration file at runtime:

$ docker run --rm \
  -v /etc/config.json:/etc/config.json \
  app:v1
2015/12/13 07:36:27 Starting application...
2015/12/13 07:36:27 stat /var/lib/data: no such file or directory

Another error! This time the application fails to start because the /var/lib/data directory does not exist. I can easily work around the missing directory by mounting a host directory into the container:

$ docker run --rm \
  -v /etc/config.json:/etc/config.json \
  -v /var/lib/data:/var/lib/data \
  app:v1
2015/12/13 07:44:18 Starting application...
2015/12/13 07:44:48 dial tcp 203.0.113.10:3306: i/o timeout

We are making progress, but I forgot to configure database access for this instance.

This is the point where some people start reaching for configuration management tools to guarantee that all these dependencies are in place before the application starts. While that works, it is to some degree overkill and often the wrong approach to solving application-level problems.

I can hear the quiet screams of the "systems administrator" hipsters, eagerly waiting to suggest a custom Docker entrypoint to solve our bootstrapping problems.


The custom entrypoint to the rescue


One way to solve our startup problems is to write a shell script and use it as the Docker entrypoint instead of the actual application. Here is a short list of things we can do with a shell script as the entrypoint:

  • Generate the required /etc/config.json configuration file
  • Create the required /var/lib/data directory
  • Test the database connection until it becomes available


The following shell script handles the first two items, adding the ability to use environment variables alongside the /etc/config.json configuration file and creating the missing /var/lib/data directory during startup. As its final step, the script execs the example application, preserving the application's default startup behavior.

#!/bin/sh
set -e
datadir=${APP_DATADIR:="/var/lib/data"}
host=${APP_HOST:="127.0.0.1"}
port=${APP_PORT:="3306"}
username=${APP_USERNAME:=""}
password=${APP_PASSWORD:=""}
database=${APP_DATABASE:=""}
cat <<EOF > /etc/config.json
{
  "datadir": "${datadir}",
  "host": "${host}",
  "port": "${port}",
  "username": "${username}",
  "password": "${password}",
  "database": "${database}"
}
EOF
mkdir -p ${APP_DATADIR}
exec "/app"

Now the image can be rebuilt using the following Dockerfile:

FROM alpine:3.1
MAINTAINER Kelsey Hightower <kelsey.hightower@gmail.com>
COPY app /app
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

Notice that the custom shell script is copied into the Docker image and used as the entrypoint instead of the application binary.

Build the app:v2 image using the docker build command:

$ docker build -t app:v2 .

Now run the following command:

$ docker run --rm \
  -e "APP_DATADIR=/var/lib/data" \
  -e "APP_HOST=203.0.113.10" \
  -e "APP_PORT=3306" \
  -e "APP_USERNAME=user" \
  -e "APP_PASSWORD=password" \
  -e "APP_DATABASE=test" \
  app:v2
2015/12/13 04:44:29 Starting application...

The custom entrypoint works. Using only environment variables, we can configure and start our application.

But why are we doing this?

Why do we need such a convoluted wrapper script? Some will say it is much easier to write this functionality in shell than to implement it in the application. But it is not just about maintaining shell scripts. Did you notice another difference between the v1 and v2 Dockerfiles?

FROM alpine:3.1

The v2 Dockerfile uses the alpine base image to provide a scripting environment, but it doubles the size of our Docker image:

$ docker images
REPOSITORY  TAG  IMAGE ID      CREATED      VIRTUAL SIZE
app         v2   1b47f1fbc7dd  2 hours ago  10.99 MB
app         v1   42273e8664d5  2 hours ago  5.952 MB

Another drawback of this approach is that you cannot ship a configuration file with the image. We could keep scripting and add support for both a configuration file and environment variables, but all of it will simply break the moment the wrapper script falls out of sync with the application. There is, however, another way to solve this problem.

Good old programming to the rescue


Yes, good old programming. Every task performed by the Docker entrypoint shell script can be handled directly by the application.

Don't get me wrong: entrypoint scripts are fine for applications you don't control. But when you rely on entrypoint scripts for your own applications, you add another layer of complexity to the deployment process for no good reason.

Configuration files should be optional


I believe there has been no reason to require configuration files since the late '90s. I suggest loading a configuration file if it exists and falling back to default values otherwise. The following code fragment does exactly that.

// Load configuration settings.
data, err := ioutil.ReadFile("/etc/config.json")
// Fallback to default values.
switch {
case os.IsNotExist(err):
    log.Println("Config file missing using defaults")
    config = Config{
        DataDir:  "/var/lib/data",
        Host:     "127.0.0.1",
        Port:     "3306",
        Database: "test",
    }
case err == nil:
    if err := json.Unmarshal(data, &config); err != nil {
        log.Fatal(err)
    }
default:
    log.Println(err)
}

Using environment variables for configuration

This is one of the simplest things you can do directly in your application. The following code fragment uses environment variables to override configuration settings.

log.Println("Overriding configuration from env vars.")
if os.Getenv("APP_DATADIR") != "" {
    config.DataDir = os.Getenv("APP_DATADIR")
}
if os.Getenv("APP_HOST") != "" {
    config.Host = os.Getenv("APP_HOST")
}
if os.Getenv("APP_PORT") != "" {
    config.Port = os.Getenv("APP_PORT")
}
if os.Getenv("APP_USERNAME") != "" {
    config.Username = os.Getenv("APP_USERNAME")
}
if os.Getenv("APP_PASSWORD") != "" {
    config.Password = os.Getenv("APP_PASSWORD")
}
if os.Getenv("APP_DATABASE") != "" {
    config.Database = os.Getenv("APP_DATABASE")
}

Managing the application's working directory

Instead of pushing responsibility for creating and checking directories onto external tools or an entrypoint script, your application should manage them directly. If something fails for any reason, don't forget to log the error with details:

// Use working directory.
_, err = os.Stat(config.DataDir)
if os.IsNotExist(err) {
    log.Println("Creating missing data directory", config.DataDir)
    err = os.MkdirAll(config.DataDir, 0755)
}
if err != nil {
    log.Fatal(err)
}

Remove the need to start services in a specific order

Drop the requirement to deploy your application in a specific order. I have seen many deployment guides insisting that the application must be started only after the database, otherwise nothing will work.

We can get rid of that requirement like this:

$ docker run --rm \
  -e "APP_DATADIR=/var/lib/data" \
  -e "APP_HOST=203.0.113.10" \
  -e "APP_PORT=3306" \
  -e "APP_USERNAME=user" \
  -e "APP_PASSWORD=password" \
  -e "APP_DATABASE=test" \
  app:v3
2015/12/13 05:36:10 Starting application...
2015/12/13 05:36:10 Config file missing using defaults
2015/12/13 05:36:10 Overriding configuration from env vars.
2015/12/13 05:36:10 Creating missing data directory /var/lib/data
2015/12/13 05:36:10 Connecting to database at 203.0.113.10:3306
2015/12/13 05:36:40 dial tcp 203.0.113.10:3306: i/o timeout
2015/12/13 05:37:11 dial tcp 203.0.113.10:3306: i/o timeout

Notice in the output above that I cannot connect to the destination database at 203.0.113.10.

Run the following command to grant access to the MySQL database:

$ gcloud sql instances patch mysql \
  --authorized-networks "203.0.113.20/32"

Now the application can connect to the database and complete its startup process.

2015/12/13 05:37:43 dial tcp 203.0.113.10:3306: i/o timeout
2015/12/13 05:37:46 Application started successfully.

The code that makes this work looks roughly like this:

// Connect to database.
hostPort := net.JoinHostPort(config.Host, config.Port)
log.Println("Connecting to database at", hostPort)
dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?timeout=30s",
    config.Username, config.Password, hostPort, config.Database)
db, err = sql.Open("mysql", dsn)
if err != nil {
    log.Println(err)
}
var dbError error
maxAttempts := 20
for attempts := 1; attempts <= maxAttempts; attempts++ {
    dbError = db.Ping()
    if dbError == nil {
        break
    }
    log.Println(dbError)
    time.Sleep(time.Duration(attempts) * time.Second)
}
if dbError != nil {
    log.Fatal(dbError)
}

There is nothing special here. I simply retry the database connection, increasing the delay between attempts.

Great: we now have a startup process that ends with a friendly log message confirming the application started correctly.

log.Println("Application started successfully.")

Believe me, your system administrator will thank you.


This article is a translation of the original post at habrahabr.ru/post/273983/