Developers Club geek daily blog

We continue our story about Big Data development methodologies at MegaFon (the first part of the article is here). Every day brings us new tasks that require new solutions, so our development practices are constantly evolving as well.

How we work now

Continuous Delivery

The practical embodiment of Kanban is delivering results with the fastest possible feedback loop. The concept of Continuous Delivery (CD), which can be visualized as a pipeline, meets this requirement.

Is "Big Data" boring?

The development methodology we adopted implies very short iterations. For this reason the CD pipeline is automated as much as possible: all three testing stages run without human involvement on the continuous integration server. It picks up all changes made during the Development stage, applies them, runs the tests, and then reports whether testing passed. The final stage, Deployment, is also automatic for developers' test environments; rolling out to user-facing environments requires a human command.
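The stage sequence described above can be sketched as a minimal pipeline runner. This is a hypothetical illustration of the fail-fast idea, not MegaFon's actual CI code; the stage names and checks are our assumptions:

```python
# Minimal sketch of a CD pipeline: automated test stages run in order,
# and any failure stops the pipeline before deployment is unlocked.

def unit_tests(build):
    return all(t() for t in build.get("unit", []))

def integration_tests(build):
    return all(t() for t in build.get("integration", []))

def acceptance_tests(build):
    return all(t() for t in build.get("acceptance", []))

PIPELINE = [unit_tests, integration_tests, acceptance_tests]

def run_pipeline(build):
    """Run all automated test stages; return True if deployment may proceed."""
    for stage in PIPELINE:
        if not stage(build):
            return False        # fail fast: deployment stays blocked
    return True                 # a human may now trigger the production deploy

build = {"unit": [lambda: True], "integration": [lambda: True],
         "acceptance": [lambda: True]}
print(run_pipeline(build))      # prints True: all stages passed
```

The point of the structure is that the human decision appears only after every automated gate, mirroring the pipeline described in the text.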

If we are talking about a single application (say, a simple website), the CD process poses no particular difficulty. But platform development changes the picture: a platform can host many different applications. Testing all changes is even harder when building a data-processing platform: obtaining accurate results would require loading a couple of dozen terabytes of data. That significantly lengthens the continuous integration cycle, so the work has to be split into smaller tasks and testing performed on small data volumes.

Delivery objects

What the CD process delivers:

• Regular (batch) processes running on Hadoop (ETL).
• Real-time analytical services.
• Interfaces for consumers of the analytics results.

A typical business requirement covers all three delivery sets: we need to set up both regular and online (real-time) processes, and also provide access to the results by creating interfaces. The variety of interfaces significantly complicates development, and all of them require mandatory testing as part of the CD process.

Non-stop integration

Testability of the product is one of the main requirements of the CD methodology. To achieve it, we built tools that automatically test delivery objects on the developer's machine, on the continuous integration server, and in the acceptance-testing environment. For example, for processes developed in Apache Pig we wrote a Maven plugin that lets Pig scripts be tested locally as if they were running on a large cluster. This saves a great deal of time.

We also developed our own installer. It is implemented as a DSL based on Groovy and makes it simple and readable to specify where each delivery object should be sent. All information about the available environments (test, preproduction, and production) is stored in a configuration service we built ourselves. This service acts as an intermediary between the installer and the environments.
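The real installer is a Groovy DSL; a rough Python analogue of the idea, where a configuration service stores the environments and the installer resolves each delivery object to concrete hosts, might look like this (all names here are hypothetical):

```python
# Hypothetical sketch: the configuration service knows the environments,
# the deploy plan says which artifact goes where, and the installer
# resolves that into a concrete list of target hosts.

ENVIRONMENTS = {                        # what the configuration service stores
    "test":          {"hosts": ["test-01", "test-02"]},
    "preproduction": {"hosts": ["preprod-01"]},
    "production":    {"hosts": ["prod-%02d" % i for i in range(1, 11)]},
}

DEPLOY_PLAN = {                         # which artifact goes to which environment
    "etl-jobs":         ["test", "preproduction", "production"],
    "realtime-service": ["test", "production"],
}

def resolve_targets(artifact):
    """Return the concrete hosts an artifact must be shipped to."""
    hosts = []
    for env in DEPLOY_PLAN.get(artifact, []):
        hosts.extend(ENVIRONMENTS[env]["hosts"])
    return hosts

print(resolve_targets("realtime-service"))   # test hosts plus all prod hosts
```

Keeping the environment inventory out of the deploy plan is what makes the intermediary useful: environments can be re-provisioned without touching any deployment description.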

After the delivery objects are deployed, automated acceptance testing is performed. During this process all possible user actions are simulated: mouse movements, link navigation in the interfaces and on web pages. In other words, the system's correctness is checked from the user's point of view. In effect, the business requirements are unambiguously captured in the form of acceptance tests. Delivery objects also undergo automated load testing, whose purpose is to confirm that performance requirements are met. We set aside a dedicated environment for this.
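In practice, "confirming performance requirements" usually means checking that some percentile of response times stays under a threshold. A generic sketch of such a check; the percentile chosen, the threshold, and the fake workload are our assumptions, not MegaFon's actual criteria:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100.0 * len(ordered))) - 1)
    return ordered[k]

def load_test_passes(latencies, p=95, threshold=1.0):
    """Pass if the p-th percentile latency is below the threshold."""
    return percentile(latencies, p) < threshold

# Fake measurements: most requests are fast, a few slow outliers.
latencies = [0.2] * 95 + [1.5] * 5
print(load_test_passes(latencies))          # 95th percentile is 0.2 s -> True
print(load_test_passes(latencies, p=99))    # 99th percentile is 1.5 s -> False
```

Percentiles rather than averages are the usual choice because a handful of slow outliers can hide behind a healthy-looking mean.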
The next stage is static analysis of code quality, covering style and typical coding errors. The code must be correct from the compiler's point of view and free of logical errors, bad names, and other stylistic flaws. All developers keep an eye on code quality, but in our field applying such analysis to delivery objects is not a standard step.
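To give the flavor of such checks, here is a toy linter that flags one-letter variable names and overly long lines. It is purely illustrative; a real pipeline would invoke an actual analysis tool rather than hand-rolled regexes:

```python
import re

def lint(source):
    """Toy static checks: flag one-letter variable names and long lines.

    Illustration only; a real CD stage would run a proper linter and
    fail the build when it reports problems.
    """
    problems = []
    for n, line in enumerate(source.splitlines(), start=1):
        if len(line) > 120:
            problems.append((n, "line longer than 120 characters"))
        # naive: also matches comparisons like 'a == b', fine for a toy
        for name in re.findall(r"\b([a-z])\s*=", line):
            problems.append((n, "one-letter variable name: %s" % name))
    return problems

code = "x = 1\ntotal_count = x + 2\n"
print(lint(code))   # flags the one-letter name on line 1
```

The useful part is not the checks themselves but where they sit: running them on every delivery object makes style an automated gate rather than a matter of reviewer vigilance.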


After testing completes successfully, the deployment stage begins. Sending delivery objects into production is managed automatically, without human involvement. Our server park consists of more than 200 machines, and server configurations are managed with Puppet. It is enough to physically rack-mount a server and tell the management system which environment it joins and what role it plays; everything else happens automatically: all settings are downloaded, software is installed, the server joins the cluster, and the components matching its role are started. This approach lets us attach and detach servers by the dozen rather than one at a time.
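Conceptually, this role-driven provisioning maps a (host, environment, role) triple to the set of components to install and start; the real system expresses this in Puppet manifests. A simplified model with hypothetical role and component names:

```python
# Simplified model of role-based provisioning: given an environment and
# a role, derive what a freshly racked server should download and start.
# Role and component names are illustrative, not real Puppet manifests.

ROLES = {
    "hadoop-worker": ["hdfs-datanode", "yarn-nodemanager"],
    "monitoring":    ["ganglia-gmond", "nagios-nrpe"],
}

def provision(hostname, environment, role):
    """Return the provisioning plan for a newly racked server."""
    return {
        "host": hostname,
        "environment": environment,
        "install": ROLES[role],
        "join_cluster": role == "hadoop-worker",
    }

plan = provision("worker-201", "production", "hadoop-worker")
print(plan["install"], plan["join_cluster"])
```

Because the plan is fully derived from two declared facts (environment and role), adding ten servers is just ten declarations, which is exactly what makes scaling "by the dozen" practical.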

We use a simple environment configuration:

• Worker nodes in the form of ordinary bare-metal machines.
• A cloud of virtual machines for various utility tasks that do not require much capacity: for example, metadata management, the artifact repository, and monitoring.

Thanks to this approach, utility tasks do not consume the capacity of the physical servers, and the supporting services are reliably protected from failures: virtual machines are restarted automatically. At the same time, the worker nodes are not single points of failure and can be replaced or reconfigured without serious consequences. One often hears about platforms whose clusters are built with an emphasis on a cloud ecosystem. But using a cloud to solve "heavy" analytical tasks over large data volumes is less cost-effective. The scheme we use for building environments saves on infrastructure costs, because an ordinary, non-virtualized machine is more efficient at I/O against local disks.

Each environment consists of a portion of the virtual machine cloud and a number of worker nodes. In particular, we have on-demand test environments on virtual machines, spun up temporarily to solve specific tasks. These machines can even be created on a developer's local workstation. For unattended deployment of virtual machines we use Vagrant.

In addition to developers' test environments, we maintain three important environments:

• The acceptance-testing environment — UAT.
• The load-testing environment — Performance.
• The production environment — Production.

Switching worker nodes from one environment to another takes a few hours. It is a simple process requiring minimal human intervention.

For monitoring we use the distributed Ganglia system, logs are aggregated in Elastic, and Nagios handles alerting. All information is output to a video wall of large TVs driven by Raspberry Pi microcomputers, each responsible for a separate fragment of the overall image. An effective and very affordable solution: the panel shows the overall visual picture of the environments' status and of the continuous delivery process. One glance is enough to see exactly how development is progressing and how the services in production are doing.


The volume of data we process exceeds 500,000 messages per second. To 50,000 of them the system reacts in under a second. The accumulated database for analysis occupies about 5 petabytes, and in the long term it will grow to 10 petabytes.

Each server sends on average 50 metrics per minute to the monitoring system. The number of indicators for which acceptable ranges are monitored and alerting is performed exceeds 1,600.
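A quick sanity check on the monitoring volume implied by those figures, using only the numbers from the text (200+ servers, 50 metrics per minute each; 200 is taken as a lower bound):

```python
servers = 200            # "more than 200 machines" (lower bound from the text)
metrics_per_min = 50     # average metrics each server reports per minute

per_minute = servers * metrics_per_min
per_day = per_minute * 60 * 24
print(per_minute, per_day)   # 10000 metrics/minute, 14.4 million/day
```

So even the lower bound means tens of millions of data points a day flowing into Ganglia, which is why the 1,600 alert indicators are a curated subset rather than everything collected.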

Using the analysis results

Big Data is an invaluable source of information: by turning figures into knowledge, we can develop new products for subscribers, improve existing ones, and react quickly to changes in the situation or, for example, in user behavior models.

Here are some examples from a rather long list of Big Data applications:

• Geospatial analysis of network load distribution: where, how, and why network load grows.
• Behavioral analysis of devices on the cellular network.
• Dynamics of the appearance of new devices.

In particular, we use Big Data analysis results in radio planning and in upgrading our own network infrastructure. We have also created a number of services providing various real-time analytics.

For the open market, in November 2013 we launched (for the first time in the Russian telecommunications market!) a geospatial service for analyzing city flows, including pedestrians and public transport. There are only a handful of such commercial services in the world that are not based on GPS.


Let us say a little about our team separately. Besides the R&D team, which is directly engaged in developing services, we have a DevOps group responsible for keeping all solutions operational.

On an equal footing with the customers, they participate in setting tasks and propose improvements for each service. They also impose quality and functionality requirements on what is developed, and take part in testing and acceptance. You can read a bit more about this group here.


Our Moscow office has been written about more than once in various publications (for example, here), but it should be noted that the developers, DevOps engineers, and analysts work at a distance from one another: in Moscow, Nizhny Novgorod, and Yekaterinburg. To keep the process from suffering, we use a lot of paid and free tools that make life considerably easier.

Slack helps us a great deal with communication, both within the project team and with contractors. It may be a fashionable hipster trend these days, but as a communication tool it is very, very good. We have also moved to an internal GitLab and integrated all processes with Jira and Confluence. A single development standard is in place across all offices: uniform rules for writing up tasks and a uniform approach to providing employees with equipment and everything else needed for work.

Over time, more and more tasks appear before us, so our team keeps growing with new professionals able to contribute in the most diverse areas. Working with Big Data at a large telecom operator is an interesting, ambitious challenge. And we look to the future with optimism: a lot of interesting work lies ahead.


Subsidiaries of Russian Railways have received for trial use a test version of a service developed by MegaFon for analyzing passenger traffic. It is a tool that helps determine the size and detailed characteristics of the transportation market. Commercial launch of the project is expected in 2016.

The service MegaFon offers gives Russian Railways the ability to manage passenger flow: to motivate people to buy tickets for a particular route, and to analyze drops or growth in sales and the occupancy of cars. Analysis of the collected information allows quick adjustments: varying ticket prices (for example, making them attractive at certain times of day or during the "low" season) and optimizing the commuter-train schedule (adding extra trains at rush hours and, conversely, withdrawing trains that sell poorly at certain hours).

For example, MegaFon's service analyzed the Moscow — Volgograd — Moscow route over this year's May holidays: demand grew by 6.8% compared with the same period last year. At the same time, the service showed that the loss of regular customers on the Moscow — Volgograd route over the past year amounted to 8.3%.

MegaFon estimates that transport companies in Russia spend more than 1.2 billion rubles a year on similar research. At the same time, the companies can collect only part of the data available to them, while the mobile operator's service makes it possible to see the whole market picture, thanks to which a carrier can increase its share of the overall passenger transportation market by 1.5–2%. And that is billions of rubles.

This article is a translation of the original post at
