Hi, Habr! I'm an intern at the EMC development center in St. Petersburg, and I want to give students a couple of pieces of advice about building a future career, as well as to talk about the tasks I work on at the company. This year I received the Bright Internship Award as the best intern at the Center for one of my solutions, and I'm interested in getting feedback on the results I've achieved. This article may be of interest to anyone involved in performance testing of systems.


A bit of a digression (or how I stopped being afraid and got an internship)


My acquaintance with EMC began with a trip to Bit-Byte, an IT exhibition for young specialists in information technology. I walked past every corporate stand, filled in dozens of rather dubious forms, and collected a few business cards. When you want to start gaining experience in your field and getting paid for it as soon as possible, you don't think much about where exactly to start your career.

Many companies claim that the students who come to them are well informed about the company's place in the IT market, its products, its clients, and so on. But, based on my own experience, I can say that this is not the case. 90% of students simply send their résumé to every open position at several companies without even tailoring the text to a specific vacancy. I was no exception: when I got home, I sent my résumé to all the email addresses on the business cards I had collected. It always seemed to me that this is just how things work: hundreds of people send applications every month, dozens get invited to interviews, and only a handful are hired. That pushes you to cover as many companies as possible in your mailing to improve your chances of being taken on. The next day I was already solving various test assignments, and within a week I had been invited to several interviews.

It so happened that after my first visit to EMC I was turned down. That made me take the second interview more seriously. The questions asked at the interviews are beyond the scope of this article; I will only say that the two meetings were completely different. At one of them I was given algorithmic problems to solve, while the other consisted of questions on general IT knowledge.

When I was hired at EMC, I was finishing the third year of my bachelor's degree. I understood perfectly well that this was exactly the right time to try my hand at real projects, in a team of professionals from whom I could learn a lot. So it happened that EMC became my entry point into the world of industrial development, and a year and a half later I can safely say that it is an excellent place to start.

In an internship program a lot depends on the mentor you are assigned to: it is the mentor who gives you your first tasks, offers hints, and oversees how you carry out your objectives, how you choose methods to solve them, and so on. It is a huge plus if this person also conducts your interview. I think many experienced developers look for an intern who is close to them in spirit and in other personal qualities, in addition to professional knowledge and skills. Such a partnership gave me huge potential for future accomplishments. Now I can say with confidence that I was very lucky with my mentor.

I don't remember any particular first impressions of working on a team in a large corporation. I was well aware of the conditions employees work in at companies of this level: Habr is full of stories from developers who have moved to Google, Yahoo, Amazon and other corporations, with all the details and colors of life across the Atlantic. I was simply happy that I had the opportunity to work in my field. Yes, the atmosphere here is very different from the one inside university walls; there was a pleasant feeling of novelty, and of not quite knowing what was going on.

An introduction to performance evaluation


My work at the company is related to performance optimization of EMC PowerPath. In short, it is software that runs on servers in a SAN and optimizes the use of paths in FC, iSCSI and FCoE storage networks to provide predictable, scalable and consistent access to data. It makes efficient use of all data transfer channels and removes the need for several separate path-management solutions across heterogeneous operating systems.

Optimization always touches many aspects of software development: you have to understand the product's functionality well, be able to test it, and automate the actions you perform. At first it was quite hard to get into the logic of the product, and adapting to the existing code took quite a long time. I had to quickly pick up Perl, the profiling tools of different operating systems, shell scripting, and the standard Windows and Unix utilities.

In this particular case, testing consists of dynamic verification of the program's behavior on a finite set of tests, with the program treated as a black box. The tests are drawn from the typical actions of the application domain and check that the system behaves as expected. When developing a software suite, you very often run into one of two problems: either the quality of the product falls below the minimum requirements, or the cost of testing exceeds all reasonable limits. To reduce the cost of validating the program's behavior in different environments, the process has to be automated as much as possible.

I received the EMC Bright Internship Award for building an automated testing framework for EMC PowerPath and the multipathing facilities built into operating systems. This approach to performance testing made it possible to improve the product's performance considerably by identifying bottlenecks whose elimination brought a substantial gain (in particular, more than 30% on the AIX platform). The essence is as follows: the throughput achieved with the basic OS facilities is measured and compared with the results for the EMC product, deployed with different physical and software components of the storage subsystem. This is done without extra time expenditure, in an automated and unified way. What was needed was a framework that would be highly effective and universal, in the sense that it could be used to automate testing on any of the target operating systems (AIX, ESXi, RHEL, Windows Server), and that would not require significant effort to apply. Below I describe how I built it.

What was done


To start with, let's define the data-generation software (I/O benchmarks): these are applications that run synthetic tests of the disk and network subsystems, on both standalone and clustered systems. They can be used as a basic tool for lab research and troubleshooting, and they can easily be configured, via test templates, to reproduce the load (simulate the behavior) of many popular applications.

To test the product's load-balancing mechanism from a performance standpoint, I use three different data-generation tools: Iometer, Iorate and Vdbench. Their characteristics are compared in the table below:

Name    | Language | Open source | Supported OS                       | Random data generation
Iometer | C++      | Yes         | Windows, Linux, Solaris, NetWare   | No
Iorate  | C        | Yes         | AIX, HP-UX, Solaris, zLinux, Linux | No
Vdbench | Java     | Yes         | All major platforms                | Yes


One of the key characteristics is the ability to generate random (non-repeating) data blocks, because EMC produces storage arrays that support deduplication. Such storage systems keep only references to repeated blocks (of, say, 4 KB), which is much faster than writing the whole block. Because of this, a load made up of repeating data does not give an adequate picture of performance on some arrays, and in those cases Vdbench is used.

To start a data generator, it must be given a configuration file describing all the parameters of the upcoming run. For testing the performance of the storage array and of the load balancing across data paths I use two types of tests:
  • UIO (Uniformed I/O): tests with a fixed block size, each based on a single pattern.
  • DBSIM (DataBase SIMulation): tests that mix several patterns in a given ratio to emulate the behavior of a real business system, for example an OLTP workload. With this approach, blocks of different sizes are issued in turn.

A pattern is a template used to describe all the parameters of a data block. Each test can consist of one or several patterns combined in a given percentage ratio. A pattern has the following parameters (a sketch of such a configuration follows the list):
  • Block size;
  • Read/write percentage;
  • Percentage of random vs. sequential I/O, i.e. the disk access model used when reading or writing a block. With sequential access, reads and writes are much faster, because no random-access positioning is performed.
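
To make this concrete, here is a minimal sketch of what such a parameter file might look like for Vdbench. It is purely illustrative: the device path, workload names, mix ratios and durations are assumptions, not the configuration actually used in the lab.

    * Storage definition: one raw LUN (the device path here is an assumption)
    sd=sd1,lun=/dev/sdb,threads=64

    * UIO-style workload: a single pattern, 4 KB, 100% sequential reads
    wd=uio_4k_seq_read,sd=sd1,xfersize=4k,rdpct=100,seekpct=0

    * DBSIM-style workload: two patterns mixed 70/30 to emulate an OLTP-like load
    wd=db_8k_rand,sd=sd1,xfersize=8k,rdpct=70,seekpct=100,skew=70
    wd=db_64k_seq,sd=sd1,xfersize=64k,rdpct=0,seekpct=0,skew=30

    * Run definitions: warm-up and run time in seconds, 10-second reporting interval, unthrottled I/O
    rd=uio_run,wd=uio_4k_seq_read,iorate=max,warmup=60,elapsed=300,interval=10
    rd=dbsim_run,wd=db_*,iorate=max,warmup=60,elapsed=300,interval=10

Iometer and Iorate use their own configuration formats, but conceptually the same pattern and run parameters have to be expressed for each of them.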

Random access time is the average time the drive needs to position its read/write head over an arbitrary area of the magnetic disk. It is only relevant for devices based on magnetic recording.

To illustrate the explanation above, a sample list of UIO performance tests is given in the following table (and, after it, a sketch of how they might be encoded in the automation scripts):

Test name           | Block size, bytes | % read | % random
4K_Seq_Read_Only    | 4096              | 100    | 0
64K_Rand_Write_Only | 65536             | 0      | 100
256K_Rand_Read_50   | 262144            | 50     | 100
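
In the automation scripts such a test list can be kept as a plain data structure that the rest of the tooling iterates over. Below is a small, purely illustrative Perl sketch; the field names are mine, not taken from the real scripts.

    use strict;
    use warnings;

    # Hypothetical representation of the UIO test table above; field names are illustrative.
    my @uio_tests = (
        { name => '4K_Seq_Read_Only',    block_size => 4096,   read_pct => 100, random_pct => 0   },
        { name => '64K_Rand_Write_Only', block_size => 65536,  read_pct => 0,   random_pct => 100 },
        { name => '256K_Rand_Read_50',   block_size => 262144, read_pct => 50,  random_pct => 100 },
    );

    # Each definition is later turned into a generator-specific configuration file.
    printf "%-20s %8d bytes, %3d%% read, %3d%% random\n",
        @{$_}{qw(name block_size read_pct random_pct)} for @uio_tests;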


Running the data generator with a configuration file (or several) produces a quantitative result for each type of test.

Besides its patterns, each test has several main characteristics, some of which are mandatory:
  • Run time: the period during which all measured results contribute to the final result. Usually measured in seconds.
  • Warm-up time: the warm-up period whose results are not included in the final result. Measured in seconds.
  • Pause time: idle time between tests. Measured in seconds.
  • IO rate: the maximum allowed data transfer rate, measured in read/write operations per second (IOPS); unlimited by default.

The values of these parameters are written into the configuration file and do not change during a run.
Each of the three programs (Iometer, Iorate, Vdbench) produces one or more output files that contain, besides the information useful for further analysis, a variety of secondary data about the run. It is therefore necessary to write scripts that parse these files, extracting the data and transforming it into the required unified format.
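
As an illustration, a parser for one of these output files could look roughly like the sketch below. Each generator has its own output format, so a real harness needs one such routine per tool; the whitespace-separated "test name, IOPS" layout and the file name assumed here are hypothetical.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Reduce a generator's raw output to a unified "test name -> IOPS" map.
    # The "name iops" column layout assumed here is hypothetical.
    sub parse_results {
        my ($file) = @_;
        my %iops_by_test;
        open my $fh, '<', $file or die "Cannot open $file: $!";
        while (my $line = <$fh>) {
            next if $line =~ /^\s*(#|$)/;              # skip comments and blank lines
            my ($test, $iops) = (split ' ', $line)[0, 1];
            $iops_by_test{$test} = $iops;
        }
        close $fh;
        return \%iops_by_test;
    }

    # Emit the unified results as CSV for the downstream comparison scripts.
    my $results = parse_results('generator_output.txt');    # hypothetical file name
    printf "%s,%s\n", $_, $results->{$_} for sort keys %$results;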

To automate the production of a clear picture of the current state of the load-balancing software, the test results obtained from different operating systems and different data generators have to be unified. This requires special scripts that can run on absolutely every platform supported by a given generator.

Besides producing the resulting file, these scripts have to fully manage the testing process, log the operations performed and the current system configuration, be able to change the load-balancing policies, and check that devices are present and accessible.
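
A sketch of how such a driver script might switch the PowerPath load-balancing policy and record what it did is shown below. powermt and its "set policy" / "display dev=all" commands are the real PowerPath CLI; everything else (the log file name, the chosen policy, the snapshot file) is an assumption made for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(strftime);

    # Append a timestamped record of every action to the run log (hypothetical file name).
    sub log_action {
        my ($msg) = @_;
        open my $log, '>>', 'run.log' or die "Cannot open run.log: $!";
        my $stamp = strftime('%Y-%m-%d %H:%M:%S', localtime);
        print {$log} "$stamp $msg\n";
        close $log;
    }

    # Switch the PowerPath load-balancing policy and snapshot the device/path state.
    sub set_policy {
        my ($policy) = @_;                               # e.g. 'ad' is PowerPath's adaptive policy
        log_action("setting PowerPath policy to '$policy'");
        system("powermt set policy=$policy dev=all") == 0
            or die "powermt set policy failed";
        # Keep the current device/path configuration alongside the results.
        system('powermt display dev=all > paths_snapshot.txt');
    }

    set_policy('ad');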

Lab configuration for the research


Let's look in more detail at how the lab for this research is organized:

Standard system configuration

Host side:
Except for AIX, which runs on proprietary IBM hosts, physical Dell PowerEdge R710 machines are used:
Memory: 11898 MB RAM
Processor: Intel® Xeon® CPU E5530 @ 2.40GHz
CPU: 2394.172 MHz
Cache Size: 8192 KB
with 2 physical sockets, 8 physical cores, plus 8 hyper-threaded logical cores.

Other host settings, depending on the test case:
OS: ESXi, RHEL, Windows Server, AIX
Number of cores: 4/8/16
Logical Units: 4/8/16
Logical Unit size: 5 GB (small disks to maximize IOPS)
Threads per LU: 64 for 4 LUs, 32 for 8 LUs, 8 for 16 LUs, 4 for >16 LUs

Storage side:
EMC XtremIO, 1 X-Brick configuration
EMC Symmetrix VMAX
EMC VNX

SAN:
A dedicated FC switch with 8 Gb Fibre Channel ports

[Diagram: standard fabric configuration in terms of the number of FC adapters]

Choke configuration

[Diagram: choke configuration, two hosts sharing the SAN, loaded path in red, free path in green]

The choke configuration is used to simulate bottlenecks in the SAN. This particular illustration shows Windows Server, although in practice the OS can be any of the supported ones. Two hosts are used: one with PowerPath installed (and later native MPIO), on which the performance measurements are taken; the other generates load. Each server is connected to the storage array through the SAN; the loaded path is marked in red, the free path in green.

PowerPath's adaptive policy estimates how heavily each path is loaded and redistributes the load onto free paths. A configuration like this reproduces conditions that can easily arise in a real customer environment (a path fails, the fabric is configured poorly, several servers share one SAN, and so on), conditions under which native multipathing ends up running at the speed of the slowest path, while PowerPath should show significantly higher IOPS.

The results are presented as charts built with the Gnuplot program. It has its own command set, can work in command-line mode and execute scripts read from files, and can both display charts and save them to files in various graphic formats (batch mode).
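
For reference, a minimal Gnuplot script in this spirit might look like the following; the data file layout (two columns: run label and deviation in percent) and the output file name are assumptions, and, as described further on, the real scripts are generated by Perl rather than written by hand.

    # Minimal histogram of per-run deviation from the mean (illustrative only).
    # Expects a data file with two columns: run label and deviation in percent.
    set terminal png size 800,600
    set output 'deviation.png'
    set style data histogram
    set style fill solid 0.8 border -1
    set ylabel 'Deviation from mean, %'
    set yrange [0:10]
    plot 'deviation.dat' using 2:xtic(1) title 'PowerPath vs NMP runs'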

[Diagram: the full flow of automated testing, from configuration through to the results that are then evaluated]

Testing configuration


Let's look at the test configuration in more detail. An important parameter when launching the test scenarios is the number of runs. When testing PowerPath and NMP, the measured performance of an individual run often deviates from the average, and this deviation is expressed as a percentage. In this work the tolerance was set at 5% or less. If the deviation from the mean across all runs exceeds that level, we cannot evaluate throughput, and such tests require further analysis and repeated runs.

As an example, the result of a real DBSIM test run is shown below. The histograms were built with Gnuplot, which was fed a specially generated script; all of the chart-generation functionality was written in Perl. I cannot go into the implementation details of this part of the automation because of trade secrets.

[Histogram: deviation of each run's throughput from the mean, in %, for each PowerPath/NMP version]

The y axis shows how far the throughput of a particular run deviates from the mean, in percent. The x axis shows which version of PowerPath or NMP (Native Multipathing, the load-balancing facility built into the OS) the results belong to. The chart clearly shows that in some tests there are runs that deviate by more than 5%, and others that come close to that value. Such runs require further investigation.

The deviation is, first of all, related to the physical and software implementation of the specific storage array. It can also be caused by the large number of threads that manage sending data to and retrieving data from disk. This phenomenon is by no means seen on every array, and it is practically independent of the I/O generator. Besides the number of runs, the objectivity of the final results is affected by the following parameters:
  • Run time;
  • Warm-up time;
  • Pause time.

A great deal of analytical work was done to ensure the quality and relevance of the test results. For configurations that showed a wide spread in the resulting data, tests with progressively longer run times were launched, and throughput values were written to a log every 10 seconds. Statistical analysis of these logs makes it possible to choose sensible time intervals and numbers of runs for specific tests in a given configuration. These intervals can be summed to obtain intervals of any duration. Over such sums the relative standard deviation (RSD, also known as the coefficient of variation) is computed; it characterizes how uniform the data are, which makes it a valuable statistic. The value of the coefficient of variation shows how strongly the values in a set vary: the larger it is, the greater the spread around the mean, the less homogeneous the set, and the less representative the average. The coefficient of variation (1) is the ratio of the standard deviation (2) to the arithmetic mean (4), expressed as a percentage; the standard deviation, in turn, is the square root of the variance (3). N is the number of independent runs.
(1) c_v = \frac{\sigma}{\bar{x}} \cdot 100\%
(2) \sigma = \sqrt{D}
(3) D = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2
(4) \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i

Unlike the deviation from the mean that we looked at when building the histograms, RSD most accurately represents the spread of a quantity relative to its own magnitude (a relative rather than an absolute indicator). We now choose a time interval for a specific test and consider it acceptable if the RSD over all such intervals is at most 5%.
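
A minimal Perl sketch of this check might look as follows; the input is simply a list of throughput sums per interval, and the 5% threshold is the one stated above (the sample numbers are made up).

    use strict;
    use warnings;
    use List::Util qw(sum);

    # Relative standard deviation (coefficient of variation) of a list of values, in percent.
    sub rsd {
        my @x    = @_;
        my $n    = scalar @x;
        my $mean = sum(@x) / $n;
        my $var  = sum(map { ($_ - $mean) ** 2 } @x) / $n;   # population variance
        return sqrt($var) / $mean * 100;
    }

    # Accept the chosen time interval only if the spread stays within the 5% threshold.
    my @throughput_sums = (10250, 10510, 10180, 10390);      # made-up sample values
    my $rsd = rsd(@throughput_sums);
    printf "RSD = %.2f%% -> %s\n", $rsd, $rsd <= 5 ? 'acceptable' : 'needs further analysis';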


After these calculations and the re-allocation of time intervals, we rerun the tests and rebuild the charts with the same Perl script. The deviation from the mean drops considerably, and the test results now reflect the real behavior of the software far more accurately.

As a result


Let's sum up the work that has been done:
  • This implementation of automated performance testing of load-balancing software delivers the final results as quickly as possible, and they clearly show the product's current performance relative to the solutions shipped by OS vendors.
  • Product developers working on particular platforms can quantify how much performance has improved or degraded relative to previous versions.
  • For configurations that showed a spread in the final results, analytical work was carried out that now yields significantly more stable results.

My own conclusion is that fast, transparent and unified automated testing of a product under development lets it evolve significantly faster. In particular, thanks to this approach we were able to improve EMC PowerPath performance on AIX by more than 30%.

Parting words for students


My advice is not to put off starting a career in your field, because professional experience is valued above all else. By graduation it is desirable to have, in addition to scientific publications, general knowledge and a GPA, experience with the practical, real-world application of that knowledge. Then you will be valued as a specialist with actual skills, capable of taking part in real projects and generating profit.

This article is a translation of the original post at habrahabr.ru/post/254599/