Why and what for?
Unfortunately, in the early days of our website, quality was guarded mostly by manual testing alone. Given our short release cycles, regression tests had to be run often, and the large number of locales made things worse: every check was multiplied by the number of locales. The cost of manual testing was high, and the benefit of automating the website tests was obvious. It was time to get to work.
What do we want?
So we started drawing up requirements for the automated testing system. We had noticed that precious time was being spent processing fairly simple defects, and we wanted to minimize the number of defects filed in the bug tracker without letting such errors slip out of sight. We therefore decided to hand the work with such defects over to the autotest reporting system. The reporting had to be convenient, clear, informative and pleasant, so that any member of our friendly web team could look at a report, quickly understand what was going on, and fix the defect. Accordingly, everyone plays their part: developers fix the website, testers fix the tests. As a result we get a "green traffic light" and a GO to production.
Historically, there was one more requirement: the development language had to be Python. Omitting the details of our other wishes, we ended up with the following list of requirements for the future automated testing system:
- Convenient, clear, informative reporting;
- Parallel test execution;
- Test parametrization;
- Inheritance of test data across locales;
- Testing in an environment as close as possible to the user's;
- Minimization of false failures;
- Convenient maintenance and delivery of the test environment;
- A minimum of in-house development and customization;
- Open source.
The torment of choice
First of all, we looked at Robot Framework (RF). We installed it and started experimenting. It has plenty of advantages, of course, but this post is not about them. The main contraindication for us was how RF handles parametrized tests and reports their results out of the box. Note that the vast majority of the tests we had in mind were parametrized. In an RF report, if one case of a parametrized test fails, the whole test is marked as failed. RF's out-of-the-box reporting was also inconvenient for most of our team: a lot of time went into analyzing the results of trial autotests. And the supposedly "simple" keyword-based way of writing tests turned out, in a couple of trial tests, to be harder than it looked. Meanwhile, thanks in part to Habr, we noticed Allure from Yandex.
We took a look, everyone liked it, and everyone wanted to work with this system, which, it should be noted, is very important. We studied the tool and made sure the project is actively developed and has many ready-made adapters for popular test frameworks, including for Python. Unfortunately (or perhaps fortunately), it turned out that for Python there is an adapter only for the pytest framework. We looked, read and tried it out, and it turned out to be a great piece of software. Pytest can do a lot: it is easy to use, extends its functionality through ready-made and custom plugins, has a big community, and its test parametrization is exactly what we needed! We launched it, wrote a couple of sample tests, and everything worked beautifully. Next came parallel execution of the tests; here the choice of solutions is small: essentially only pytest-xdist. We installed it, launched it and…
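A quick aside on what that parametrization looks like in pytest: each tuple below becomes its own test case with its own entry in the report, so one failing locale does not mark the others as failed, which is exactly the behavior RF could not give us. This is a minimal illustrative example, not taken from our real suite:

```python
import pytest

# Each (locale, expected_title) pair is collected and reported as a
# separate test case, e.g. test_homepage_title[en-us-Home].
@pytest.mark.parametrize("locale, expected_title", [
    ("en-us", "Home"),
    ("de-de", "Startseite"),
    ("fr-fr", "Accueil"),
])
def test_homepage_title(locale, expected_title):
    # A real test would open the page with Selenium and read its title;
    # here we only illustrate the shape of a parametrized test.
    assert locale and expected_title
```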
It turned out that pytest-xdist conflicts with the Allure Pytest Adaptor. We started digging: the problem is well known, has been discussed for a long time, and has not been solved. We also failed to find a ready-made solution for inheriting test data across locales.
To get around these problems, we decided to write a tool (a wrapper) in Python. The wrapper prepares the test data with inheritance taken into account, passes it to the tests together with the test-environment data (the browser, for example), and launches the tests in a given number of threads. After the tests finish, it merges the reports produced in the different threads into one and publishes the final data on a website.
We decided to implement the parallelization quite simply: each individual test is invoked in parallel through the command line. With this approach we had to implement passing the test data into the test ourselves; thanks to pytest fixtures, that took only a few lines. Important! You also have to comment out a couple of lines in allure-python/allure/common.py that are responsible for deleting "old" files in the Allure adapter's report directory.
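A minimal sketch of those "few lines", assuming the wrapper serializes one test case's data as JSON into an environment variable before invoking a single test on the command line. The names here (`TEST_DATA`, `run_case`, `run_parallel`) are hypothetical, not the real ones from our tool:

```python
import json
import os
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

import pytest

def load_test_data():
    # The wrapper exports TEST_DATA (hypothetical name) before invoking
    # `pytest path/to/test.py::test_name` for a single test.
    return json.loads(os.environ.get("TEST_DATA", "{}"))

@pytest.fixture
def test_data():
    # The "few lines" in conftest.py that hand the wrapper's data to a test.
    return load_test_data()

def run_case(test_id, case):
    # One command-line pytest invocation per individual test.
    env = {**os.environ, "TEST_DATA": json.dumps(case)}
    return test_id, subprocess.call(
        [sys.executable, "-m", "pytest", test_id], env=env)

def run_parallel(cases, workers=4):
    # Launch the single-test invocations in a fixed number of threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda item: run_case(*item), cases.items()))
```

A test that declares the `test_data` fixture as an argument then receives exactly the dictionary the wrapper prepared for it.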
We decided to store the test data for parametrized tests in TSV files, and static test data in YAML. Which test and which locale a piece of test data belongs to is defined by the names and hierarchy of the directories it lives in. Inheritance starts from the main base locale, "en-us", and a locale may remove entries or add unique data of its own. Test data may also use the keywords 'skip' and 'comment' to cancel the run of a specific test case and state the reason. With this kind of inheritance, if the same data should be used for every localization of the website, the inheritance happens automatically without any additional parameters. By the way, we implemented inheritance for test configuration too (environment, timeouts, etc.), based not on localization but on a global configuration file from which the individual test configs inherit.
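The locale inheritance rule can be sketched like this. It is a simplified model with hypothetical names; the real tool also walks the directory hierarchy and parses the TSV/YAML files:

```python
BASE_LOCALE = "en-us"

def inherit(data_by_locale, locale):
    """Build a locale's test cases: start from the base locale's cases,
    let the locale override or add entries, then honor 'skip'."""
    merged = dict(data_by_locale.get(BASE_LOCALE, {}))
    merged.update(data_by_locale.get(locale, {}))
    # 'skip' cancels a specific case for this locale; 'comment' carries
    # the reason and is kept purely for humans reading the data files.
    return {name: case for name, case in merged.items()
            if not case.get("skip")}
```

A locale with no data of its own simply gets the full "en-us" set, which is the automatic inheritance described above.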
One more nuance
When we got the first reports, we started thinking about the most convenient way to display test results by locale. We decided the most convenient split for us was one copy of the Allure report per locale. And to aggregate the overall information across locales, we quickly wrote an unpretentious but nice wrapper.
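The aggregation step itself is conceptually tiny. A toy version, assuming each locale's report has already been reduced to a list of case statuses (the real wrapper reads these from the per-locale Allure results):

```python
def aggregate(statuses_by_locale):
    """Summarize per-locale results into one overview row per locale."""
    summary = {}
    for locale, statuses in statuses_by_locale.items():
        passed = sum(1 for s in statuses if s == "passed")
        summary[locale] = {"passed": passed,
                           "failed": len(statuses) - passed,
                           "total": len(statuses)}
    return summary
```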
The last thing that clouded our joy was occasional hangs of individual runs in the test environment. We forgot to mention that as the testing environment we decided to use classic Selenium (as the environment closest to the real user's). With a large number of checks in Selenium, failures are unavoidable, and so are everyone's "favourite" false failures, which greatly complicate continuous integration and the "green traffic light" on the way to production.
We thought about it and found a way out. We overcame the hangs by extending our wrapper: we added the ability to specify a maximum runtime for an individual test, and if it does not finish within the given time, we forcibly restart it. The false failures were dealt with by adding the rerunfailures plugin for pytest. This plugin automatically restarts all failed tests, and we set the number of attempts in the YAML configuration file, either per test or globally.
And finally, here it is, the happiness of a beginning automation engineer: a stable, convenient, working system. It is easy to maintain, lets us finish testing as fast as possible and without false failures, and provides very convenient reporting of test results.
Friends, from your feedback here on Habr we would like to understand how interesting our experience is. We are thinking of publishing the resulting ready-made solution as a Docker container.
This article is a translation of the original post at habrahabr.ru/post/271049/