Today machines can effortlessly "put two words together" (1, 2), but they are not yet able to reliably hold a dialogue on general topics. Tomorrow, however, you may be asking them to write a proper summary and to pick the best chess club near your home for your children. Want to understand in more detail how scientists from Facebook, Google and others are working in this direction? Come and listen to them.
A hackathon and winter science school on deep learning and question answering systems

From January 31 to February 5, the second international hackathon on deep learning and machine intelligence, DeepHack.Q&A, will take place at the Moscow Institute of Physics and Technology with scientific support from the Laboratory of Neural Systems and Deep Learning. As part of the hackathon's science school, leading world experts from Google Brain, Facebook AI Research, OpenAI, Skoltech, the University of Oxford, and the Courant Institute of Mathematical Sciences at NYU will give a series of lectures on deep learning and its application to natural language processing problems. Most of them will speak remotely, but Rob Fergus and Phil Blunsom are expected to attend in person.

About the topic. The hackathon is devoted to the Allen AI Science Challenge. Participants are required to develop a program that can learn to independently answer questions at the level of the 8th grade of an American school. For this purpose participants are given a training set (2,500 questions) and a validation set (8,132 questions) in CSV format, each question with 4 answer options. The correct answers are known for the training set. The validation set is used to measure the accuracy of your system's answers and, accordingly, to rank the submitted solutions by this criterion. The sets contain questions on the main school subjects: physics, biology, geography, etc.
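For illustration, a question set in this format might be loaded as follows. The column names here are an assumption for the sake of the example, not the exact headers of the competition files:

```python
import csv

def load_questions(path):
    """Read a question set from a CSV file.

    Assumed (hypothetical) columns: id, question,
    answerA..answerD, correctAnswer. The correctAnswer
    column is absent in the validation set.
    """
    rows = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows.append({
                "question": row["question"],
                "answers": [row[k] for k in ("answerA", "answerB", "answerC", "answerD")],
                "correct": row.get("correctAnswer"),  # may be None
            })
    return rows
```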

The problem statement looks so general that one might get the impression it is impossible to tackle without deep expertise in natural language processing. That, however, is not quite true. You can apply existing neural network methods in a few lines of code and get a result of 32% accuracy.

That is what the folks from team 5vision (winners of the summer hackathon, by the way) did, publishing their solution on the Kaggle forum and on GitHub. You will find the installation instructions there. If you suddenly feel an irresistible urge to use Linux but don't have it at hand, you can register free of charge on (or many other places) and run everything there. Now let's look in more detail at what this solution does.

It is based on one of the implementations of vector word representations (1, 2, 3, etc.), and one of the authors of this implementation, Tomas Mikolov, will give lectures at the hackathon.

Applying GloVe to a question works as follows:
  1. Preprocess the question:
    • discard everything except upper- and lowercase letters of the English alphabet and spaces;
    • discard so-called "stop words" (words that have practically no effect on the meaning of the sentence).
  2. Initialize the question's vector representation to zero: q = 0.
  3. Loop over all remaining words of the question, adding each word's vector representation to q.
  4. Compute the vector representations of all four answers in the same way.
  5. Select the answer whose vector is closest to q.

This implementation gives 32% accuracy on the validation set.
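The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the 5vision code: it assumes `embeddings` is a dict mapping words to NumPy vectors loaded from pre-trained GloVe files, and the stop-word list here is only an illustrative subset:

```python
import re
import numpy as np

# Illustrative subset of a stop-word list.
STOP_WORDS = {"the", "a", "an", "of", "is", "to", "in", "what", "which"}

def sentence_vector(text, embeddings, dim):
    """Sum the word vectors of all non-stop words in `text`."""
    # Keep only English letters and spaces, then lowercase.
    cleaned = re.sub(r"[^A-Za-z ]", " ", text).lower()
    vec = np.zeros(dim)
    for word in cleaned.split():
        if word not in STOP_WORDS and word in embeddings:
            vec += embeddings[word]
    return vec

def pick_answer(question, answers, embeddings, dim):
    """Return the index of the answer whose vector lies closest to the question's."""
    q = sentence_vector(question, embeddings, dim)
    dists = [np.linalg.norm(q - sentence_vector(a, embeddings, dim)) for a in answers]
    return int(np.argmin(dists))
```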

5vision also has another, more "classical" implementation, based on the TF-IDF measure. It works like this:
  1. Parse keywords for the main subjects from the website (for example, for the topic "Physics").
  2. Download documents from Wikipedia for the collected keywords.
  3. Compute the TF and IDF measures for this collection of documents and words.
  4. For each question, select the most relevant article from the collection.
  5. For each answer, compute its relevance to that article and select the answer with the maximum value.

This implementation gives 36% accuracy. To put these results in perspective, note that the current first place in the ranking answers 56% of the questions correctly.
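The pipeline can be sketched with a toy, pure-Python TF-IDF. This is an illustration under simplifying assumptions, not the actual solution: the real system builds its collection from downloaded Wikipedia articles, which is omitted here, and the tokenizer is deliberately naive:

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase and keep purely alphabetic tokens."""
    return [w for w in text.lower().split() if w.isalpha()]

def tfidf_weights(docs):
    """Return one {word: tf*idf} dict per document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))          # document frequency of each word
    n = len(docs)
    idf = {w: math.log(n / df[w]) for w in df}
    return [
        {w: cnt / len(toks) * idf[w] for w, cnt in Counter(toks).items()}
        for toks in tokenized
    ]

def relevance(text, weights):
    """Sum the document's TF-IDF weights over the words of `text`."""
    return sum(weights.get(w, 0.0) for w in tokenize(text))

def answer_question(question, answers, docs):
    """Pick the article most relevant to the question, then the answer
    most relevant to that article (steps 4-5 above)."""
    all_weights = tfidf_weights(docs)
    scores = [relevance(question, w) for w in all_weights]
    best_doc = all_weights[scores.index(max(scores))]
    ans_scores = [relevance(a, best_doc) for a in answers]
    return ans_scores.index(max(ans_scores))
```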

The competition has a number of peculiarities (a digest with an overview can be found on the competition forum): for example, the final solution must work without Internet access. In general, reading the competition forum can yield a lot of useful information.

The schedule of the event is the same as at the previous DeepHack.

Sunday: arrival of participants, organizational meeting, team formation, environment setup, an introductory lecture, and the first overnight computation run.

Monday through Friday: discussion of intermediate results, programming of new solutions devised within the teams, lectures, launching computations.

Saturday: summing up and awarding the winners.

Accommodation and computing resources will be provided to all participants, but bring your own laptop.

We plan to publish shortly an overview of neural network architectures for natural language processing, from which you can draw ideas for improving the current solutions.
How interested are you in the topic of natural language processing?


This article is a translation of the original post at
