Human: how many legs does a cat have?
Machine: four, i think.
Human: What do you think about messi?
Machine: he's a great player.
Human: where are you now?
Machine: i'm in the middle of nowhere.
(Dialogue from the paper A Neural Conversational Model; the cover image is from the movie Ex Machina.)
Research organizations and IT companies approach the problem of building self-learning question answering systems with different visions of where to start.
Facebook took the path of defining a list of 20 specific logical operations and generating an artificial set of tasks for them (the so-called bAbI tasks; a detailed description is available). In their view, these operations are necessary, but not sufficient, for building artificial intelligence. For example, the system must be able to answer questions positively or negatively, answer questions based on one or several known facts, count, handle uncertainty, and so on.
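To make the task format concrete, here is a minimal parser for a bAbI-style story. The sample mirrors the published plain-text format (numbered statements; question lines carry a tab-separated answer and supporting-fact numbers), though this particular story is written here for illustration:

```python
# Parse a bAbI-format story: numbered statements; question lines contain
# tab-separated question, answer, and supporting-fact line numbers.
sample = """1 Mary moved to the bathroom.
2 John went to the hallway.
3 Where is Mary?\tbathroom\t1"""

def parse_babi(text):
    facts, qa = [], []
    for line in text.splitlines():
        idx, content = line.split(" ", 1)
        if "\t" in content:                       # a question line
            question, answer, support = content.split("\t")
            qa.append((question, answer, [int(i) for i in support.split()]))
        else:                                     # a plain statement
            facts.append((int(idx), content))
    return facts, qa

facts, qa = parse_babi(sample)
print(qa[0])  # ('Where is Mary?', 'bathroom', [1])
```

The supporting-fact numbers are what makes the tasks useful for supervising memory access: they tell the model which statements it should have attended to.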
The original self-learning neural network architecture with memory, Memory Networks, and its end-to-end implementation were developed to solve this proposed set of tasks (code from the authors; there is also a TensorFlow implementation).
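The central mechanism of such an end-to-end memory network, a single "hop" of soft attention over memory, can be sketched in a few lines of numpy. The dimensions and random embeddings below are toy placeholders standing in for trained sentence and question embeddings, not the actual model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy single "hop": the query embedding u is matched against input memory
# embeddings m_i; the attention weights p produce a weighted sum over output
# embeddings c_i, which then updates the query state for the next hop.
rng = np.random.default_rng(0)
d, n_mem = 8, 5                  # embedding size, number of memory slots
m = rng.normal(size=(n_mem, d))  # input embeddings of the story sentences
c = rng.normal(size=(n_mem, d))  # output embeddings of the same sentences
u = rng.normal(size=d)           # embedded question

p = softmax(m @ u)               # soft attention over memories
o = p @ c                        # response vector read from memory
u_next = u + o                   # state passed to the next hop / answer layer
```

Stacking several such hops lets the model chain facts together, which is exactly what the multi-fact bAbI tasks require.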
Google, developing the Neural Turing Machine architecture, uses a more fundamental approach: a system that independently learns what information needs to be written to and read from memory, and when, in order to solve a task.
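One of the NTM's addressing mechanisms, content-based addressing, compares a key vector emitted by the controller against every memory cell by cosine similarity and sharpens the result into soft attention weights. A rough numpy sketch with toy data (the 128-cell memory size is illustrative, not a trained model):

```python
import numpy as np

def content_address(memory, key, beta):
    # Cosine similarity between the key and every memory row, sharpened by
    # the scalar beta and normalized into a soft attention distribution.
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sims)
    return w / w.sum()

rng = np.random.default_rng(1)
memory = rng.normal(size=(128, 20))            # 128 cells of width 20
key = memory[42] + 0.01 * rng.normal(size=20)  # noisy copy of cell 42
w = content_address(memory, key, beta=10.0)
read = w @ memory                              # soft read from memory
print(w.argmax())  # 42: addressing focuses on the matching cell
```

Because every step (addressing, reading, writing) is a smooth function, the whole read/write procedure can be trained end-to-end by gradient descent.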
However, the results of this approach are still less competitive on real tasks. The Neural Turing Machine solves sorting and memory-retrieval problems while operating on a small memory of 128 cells. Somewhat broader functionality is demonstrated by Neural Programmer.
The system is able to learn to execute basic logical and arithmetic operations over a table of data. The task is set up as follows: given a collection of data columns and a set of basic operations, the system independently learns the necessary sequence of actions (selecting data and applying operations to it) in order to obtain the required answer.
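The setting can be illustrated with a hand-written (non-learned) version: a table of columns, a small set of primitive operations, and a "program" as a sequence of (operation, column) steps. In Neural Programmer that sequence is what the model learns to select, softly and end-to-end; the table contents and operation names below are invented for the example:

```python
# Hand-written illustration of the Neural Programmer setting: the model's
# job would be to *learn* which (operation, column) sequence answers a
# question; here we simply execute a given program.
table = {"city": ["Paris", "Lyon", "Nice"],
         "population_k": [2148, 513, 342]}

ops = {
    "sum":   lambda col: sum(col),
    "count": lambda col: len(col),
    "max":   lambda col: max(col),
}

def run_program(program, table):
    # Execute each (operation, column) step; the learned model instead
    # composes such steps differentiably over several time steps.
    return [ops[op](table[col]) for op, col in program]

# "What is the total population?" maps to a one-step program:
print(run_program([("sum", "population_k")], table))  # [3003]
```

The hard part, glossed over here, is that the choice of operation and column must be made softly (as weighted mixtures) so the whole pipeline remains trainable by gradient descent.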
For its project of building a question answering system about the surrounding world (the ARISTO system), the Allen Institute for Artificial Intelligence uses an ontological approach, including the possibility of training the system through user interaction. The project is broken into three stages: solving tests for the 4th, 8th and 12th grades of American school. While the 4th grade was handled more or less successfully, the 8th grade proved difficult, and the institute decided to enlist the worldwide community of data scientists on Kaggle: The Allen AI Science Challenge.
Participants were given a training set (2,500 questions) and a test set (8,132 questions) of questions in text form, each with 4 answer options. The correct answers are known for the training set, but not for the test set. Because of its small size, the training set is intended not so much for training a system as for use during development: to assess the overall quality of a solution and how well it "covers" the main topics of physics, biology, geography and other 8th-grade subjects.
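With so little training data, a natural starting point is information retrieval: score each of the 4 answer options by how well it, together with the question, matches some reference text. A toy word-overlap version of this idea (the corpus snippet and question are invented for illustration, and real entries would use a proper search index rather than a single string):

```python
# Toy IR-style baseline for a 4-option science question: pick the option
# whose words, combined with the question, overlap most with a reference text.
corpus = ("photosynthesis is the process by which plants "
          "convert sunlight into chemical energy")

def best_option(question, options, corpus):
    corpus_words = set(corpus.lower().split())
    def score(opt):
        words = set((question + " " + opt).lower().split())
        return len(words & corpus_words)
    return max(options, key=score)

q = "By which process do plants convert sunlight into energy?"
opts = ["respiration", "photosynthesis", "fermentation", "digestion"]
print(best_option(q, opts, corpus))  # photosynthesis
```

The small labeled training set then serves exactly the role described above: checking how often such a scorer picks the right option across the different school subjects.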
The competition has a number of peculiarities (an overview digest from the competition forum is available): for example, the final solution has to work without Internet access, so the long-awaited Google Knowledge Graph API cannot be used.
The table below* gives a comparative overview of modern approaches to building question answering systems, prepared within the Memory and Q&A systems seminar of the Deep Learning Moscow group (the full version of the presentation, with links to sources, is available in the group).
* IR — information retrieval; KB — knowledge base; IE — information extraction; BiLSTM — bidirectional long short-term memory; NN — neural net; NTM — Neural Turing Machine; IGOR — the Memory Networks architecture: Input feature map, Generalization, Output feature map, Response.
The new hackathon combined with a science school, DeepHack.Q&A, was already announced earlier on Habr; there it will be possible to try out all of the above-mentioned classical and neural network question answering methods in action, and to ask their authors questions directly.
 Andrew Y. Ng et al. (2014), Deep Speech: Scaling up end-to-end speech recognition
 Bengio Y., Cho K., Bahdanau D. (2015), Neural Machine Translation by Jointly Learning to Align and Translate, International Conference on Learning Representations 2015
 Blunsom P., Grefenstette E., Kalchbrenner N. (2014), A Convolutional Neural Network for Modelling Sentences, The 52nd Annual Meeting of the Association for Computational Linguistics
 Kumar A. et al. (2015), Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
 T. Mikolov et al. (2015), A Roadmap towards Machine Intelligence
 Serban I.V. et al. (2015), A Survey of Available Corpora for Building Data-Driven Dialogue Systems
 Jason Weston et al. (2015), Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
 Jason Weston et al. (2015), Memory Networks
 Sainbayar Sukhbaatar et al. (2015), End-To-End Memory Networks
 Alex Graves et al. (2015), Neural Turing Machines
 Arvind Neelakantan et al. (2015), Neural Programmer: Inducing Latent Programs with Gradient Descent
 Clark P. et al. (2015), Automatic Construction of Inference-Supporting Knowledge Bases
 Hixon B. et al. (2015), Learning Knowledge Graphs for Question Answering through Conversational Dialog
This article is a translation of the original post at habrahabr.ru/post/274577/
If you have any questions regarding the material covered in the article above, please, contact the original author of the post.
If you have any complaints about this article or you want this article to be deleted, please, drop an email here: firstname.lastname@example.org.
We believe that the knowledge available at the most popular Russian IT blog, habrahabr.ru, should be accessible to everyone, even though it is poorly translated.
Shared knowledge makes the world better.