A small spoiler: at first it looked like magic to me, but then I spotted the dirty trick…
Today the Turing machine (TM for short) is the universal definition of the concept of an algorithm, and therefore the universal definition of a "problem solver". There are many other models of computation — lambda calculus, Markov algorithms, etc. — but all of them are mathematically equivalent to the TM, so while they are interesting in their own right, they change nothing significant in the theoretical world.
Strictly speaking, there are other models as well — the nondeterministic Turing machine, the quantum Turing machine. However, they are (so far) only abstract models, not implemented in practice.
Half a year ago, Science Advances published an interesting article with a model of computation that differs significantly from the TM and that may well be implementable in practice (in fact, the article describes how its authors solved the SSP problem on real hardware).
And yes, the most interesting thing about this model is that, according to its authors, it can solve (some) problems from the class of NP-complete problems in polynomial time and memory.
It is probably worth stipulating right away that this result does not mean a solution to the P vs NP problem. That problem is not stated as "solve some particular problem quickly" but as whether a nondeterministic Turing machine can be simulated on an ordinary Turing machine in polynomial time. Since this is an entirely different model of computation, one cannot speak of the classical complexity classes here.
I am skeptical about the possibility of building this machine in hardware at present (I will explain why below), but the model itself seemed interesting enough to analyze, and it may well find applications in other fields of science.
What is a computer today (more precisely, the most popular implementation of the TM — the von Neumann architecture)? An input/output interface, memory, and a CPU that is physically separated from them. The CPU contains both the module that controls the course of the computation and the blocks that actually carry the computation out.
The physical separation of the CPU means we have to spend a lot of time on data transfer. This is exactly why the different levels of cache memory were invented. The cache certainly makes life easier, but it does not solve all the problems of data transfer.
The proposed model of computation was inspired by the workings of the brain (a rather hackneyed phrase, but here it really fits). Its essence is that computation happens not in a separate device to which the data must be transferred, but directly in memory. The order of computation is controlled by an external device (the control unit).
This model of computation is called the Universal Memcomputing Machine (I did not try to translate the term; below I will use the abbreviation UMM).
In this article we will first recall how the TM is formally defined, then look at the definition of the UMM, see by example how to specify an algorithm for solving a problem on a UMM, and consider several of its properties, including the most important one — information overhead.
Formal description of the model
Universal Turing Machine (UTM)
I assume you all remember what a Turing machine is (otherwise there is no point in reading this article). Tape, head, all that. Let us only recall how it is defined formally.
The Turing machine is a tuple

$$TM = (Q, \Gamma, b, \Sigma, q_0, F, \delta)$$

where $Q$ is the set of possible states,
$\Gamma$ is the set of possible tape symbols,
$b \in \Gamma$ is the blank symbol,
$\Sigma \subseteq \Gamma \setminus \{b\}$ is the set of input symbols,
$q_0 \in Q$ is the start state,
$F \subseteq Q$ is the set of final states,
$\delta : (Q \setminus F) \times \Gamma \to Q \times \Gamma \times \{L, N, R\}$, where $L$, $N$, $R$ are respectively a shift to the left, no shift, and a shift to the right. That is, $\delta$ is our transition table.
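As an illustration (my own, not from the article), the tuple above maps directly onto code. Below is a minimal sketch in Python; the example machine and its transition table are made up for the demo — it flips every bit and halts at the first blank:

```python
# Minimal Turing machine simulator: delta maps (state, symbol) to
# (new state, written symbol, head shift in {-1, 0, +1}).
def run_tm(delta, tape, q0, finals, blank="_"):
    tape = dict(enumerate(tape))   # sparse tape; blank everywhere else
    q, head = q0, 0
    while q not in finals:
        sym = tape.get(head, blank)
        q, written, shift = delta[(q, sym)]
        tape[head] = written
        head += shift
    # read the visited part of the tape back as a string
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells).strip(blank)

# Example machine: flip 0 <-> 1, move right, halt on blank.
delta = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", 0),
}
print(run_tm(delta, "1011", "q0", {"halt"}))  # -> 0100
```

Note how the simulator is nothing but repeated lookups in the table $\delta$ — exactly the formal definition executed step by step.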
To start, let us define the memory cell of the UMM — the memprocessor.
A memprocessor is defined as the 4-tuple $(x, y, z, \sigma)$, where $x$ is the state of the memprocessor, $y$ is the vector of its internal variables, and $z$ is the vector of "external" variables, i.e., the variables connecting different memprocessors. In other words, if $z_1 = z_2$ for the vectors of external variables of two memprocessors, then those two memprocessors are connected. Also, if a memprocessor is not connected to anything, then $\sigma[x, y, \emptyset] = (x', y')$, that is, its evolution is defined only by its internal state.
And, finally, $\sigma[x, y, z] = (x', y')$ is the operator that produces the new state.
I want to stress that a memprocessor is not the kind of processor we usually picture in our heads. It is rather a memory cell equipped with a (programmable) function for computing its next state.
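To make the 4-tuple concrete, here is a toy sketch (again my own illustration, not the article's formalism): two memprocessors share an external variable `z`, so updating one changes the input the other sees on its next update. The update rule `sigma` here is an arbitrary made-up example:

```python
class Memprocessor:
    """Toy memprocessor: state x, internal variables y, shared external z."""
    def __init__(self, x, z):
        self.x = x          # state
        self.y = [0.0]      # internal variables (unused by this toy rule)
        self.z = z          # shared list: connects this cell to its neighbors

    def step(self):
        # sigma[x, y, z] -> (x', y'): made-up rule -- relax the state toward
        # the shared external value, then publish the new state back to z.
        self.x = 0.5 * (self.x + self.z[0])
        self.z[0] = self.x

# Two memprocessors connected through the same external variable.
z = [0.0]
a, b = Memprocessor(8.0, z), Memprocessor(0.0, z)
a.step()          # a moves toward z = 0 -> x = 4.0, publishes z = 4.0
b.step()          # b sees a's update through z, with no control-unit call
print(a.x, b.x)   # -> 4.0 2.0
```

The point of the toy is only the wiring: state flows between cells through `z`, not through a central processor.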
Universal Memcomputing Machine (UMM)
Now let us introduce the formal definition of the UMM. The UMM is a model of a computer formed from interconnected memprocessors (which, generally speaking, can be either digital or analog).
$$UMM = (M, \Delta, \mathcal{P}, S, \Sigma, p_0, s_0, F)$$

where $M$ is the set of possible states of a memprocessor,
$\mathcal{P}$ is the set of arrays of pointers to memprocessors (used in $\Delta$ to select the required memprocessors),
$S$ is the set of indexes $\alpha$ (the number of the function being applied),
$\Sigma \subseteq M$ is the initial state of the memprocessors,
$p_0 \in \mathcal{P}$ is the initial array of pointers,
$s_0$ is the initial index of the operator ($\alpha$),
$F \subseteq M$ is the set of final states,
and $\Delta = \{\delta_\alpha\}$ is the set of transition functions $\delta_\alpha : M^{m_\alpha} \times \mathcal{P} \to M^{m'_\alpha} \times \mathcal{P}^2 \times S$, where $m_\alpha < \infty$ is the number of memprocessors used as input by the function $\delta_\alpha$ and $m'_\alpha < \infty$ is the number of memprocessors used as its output.
By analogy with the Turing machine, as you may already have guessed, the $\delta_\alpha$ are transition functions, the analog of the state table. To see an example: let $p_\alpha$ be the pointers to memprocessors, $x(p_\alpha)$ the vector of states of those memprocessors, and $\beta$ the index of the next command; then

$$\delta_\alpha[x(p_\alpha)] = (x'(p'_\alpha), \beta, p_\beta).$$
Generally speaking, formalism aside, the main difference between the UMM and the TM is that in a UMM, by acting on one memory cell (that is, a memprocessor) you automatically act on its neighborhood as well, without additional calls from the control unit.
Let us note two properties of the UMM that follow directly from its definition.
- Property 1. Intrinsic parallelism (I could not settle on a good translation of this term, so I left it as is). Any function can be run on any set of memprocessors simultaneously. In a Turing machine this would require introducing additional tapes and heads.
- Property 2. Functional polymorphism. Unlike the Turing machine, a UMM can have many different operators $\delta_\alpha$.
Generally speaking, it is not that hard to modify the Turing machine so that it possesses these properties too, but the authors insist on them.
A few more remarks on the definition. A UMM, unlike a Turing machine, can have an infinite state space with a finite number of memprocessors (because they can be analog).
By the way, the UMM can be viewed as a generalization of neural networks.
Let's prove one theorem.
Theorem. The UMM is a universal machine (that is, a machine that can simulate the operation of any TM).
In other words, we need to show that the Turing machine is a special case of the UMM. (Whether the converse is true has not been proved; if the authors of the article are right, though, proving it would be an equivalent result.)
Take $\Delta = \{\delta\}$ in the definition of the UMM. Designate one of the memprocessors as $j_s$ and the others (a possibly infinite number of them) as $j$. Next, define the pointer $p = \{j_s, j\}$. We will use $x(j_s)$ as the notation for the state and $x(j)$ for the tape symbol ($x(j_s) \in Q$, $x(j) \in \Gamma$).
$\Delta$ consists of the single function $\delta$ (we drop the index $\alpha$, since there is only one function). The new state is determined by the transition table of the TM: $x'(j_s)$ will be the new state and $x'(j)$ the new tape symbol. The new pointer is $p' = \{j_s, j\}$ if the head does not move, $p' = \{j_s, j+1\}$ if we move the head to the right, and $p' = \{j_s, j-1\}$ if to the left. As a result, after writing the start state $q_0$ and the input symbols from $\Sigma$ into memory, this UMM simulates the universal Turing machine.
The theorem is proved.
Let us look at an example of how problems can be solved on a UMM (for now, just to get acquainted with the model). Take the subset sum problem (SSP).
Let a set $G = \{a_1, \dots, a_n\}$ and a number $s$ be given. Does a subset of $G$ exist whose sum of elements equals $s$?
Let the memprocessors in our UMM be arranged as a matrix (see the figure). Let us define three operations.
- The computation proper. Using activation lines, we can select the rows and the bounding columns in which the computation takes place. The computation itself consists in adding the value of the leftmost cell to the whole row.
- The data movement operation. The control unit selects two columns, and the values from the first are copied into the second. The control unit does not necessarily perform the copying itself; it merely activates the columns and the required rows.
- An operation similar to the previous one, except that it takes a single value and writes it into a column.
Combining these three operations, we can obtain the transition function.
On the first step of the algorithm we obtain the sums of all subsets of length 2, on the second step of subsets of length 3, and so on. As soon as we find the required number (it will appear in the left column), we have found the answer. Each step is performed in one function call, so the algorithm takes $O(n)$ steps.
Now let us count how many memprocessors we need to perform these operations. At iteration $k$ we need on the order of $\binom{n}{k}$ memprocessors. Estimating this expression by Stirling's formula gives roughly $2^n/\sqrt{n}$ at the widest level. The number of nodes grows exponentially.
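Emulating this level-by-level scheme sequentially (my own sketch; the real machine would compute each level in one parallel step) makes both the $O(n)$ step count and the exponential cell count visible:

```python
def ssp_levels(numbers, target):
    """Emulate the UMM scheme: level k holds the sums of all k-element
    subsets; each level is built from the previous one in 'one step'."""
    level = {(j,): numbers[j] for j in range(len(numbers))}
    cells_used = [len(level)]          # how many "memprocessors" per level
    for _ in range(len(numbers) - 1):
        if target in level.values():
            return True, cells_used
        nxt = {}
        for idx, total in level.items():
            # extend each subset by one element to the right of its last index
            for j in range(idx[-1] + 1, len(numbers)):
                nxt[idx + (j,)] = total + numbers[j]
        level = nxt
        cells_used.append(len(level))
    return target in level.values(), cells_used

found, cells = ssp_levels([2, 3, 5, 9], 10)
print(found)   # -> True: 2 + 3 + 5 = 10
print(cells)   # -> [4, 6, 4]: the binomial coefficients C(4,1), C(4,2), C(4,3)
```

The loop runs at most $n-1$ times (linear "time" if each level were one parallel call), while `cells` traces out the binomial coefficients — the exponential memory cost discussed above.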
I think it is now more or less clear what kind of object this is. Let us move on to the tastiest thing the UMM offers us, namely the third property — information overhead.
Exponential Information Overhead
Suppose we have $n$ memprocessors; denote the state of a selected memprocessor by $u$. The state of an individual memprocessor is contained in its internal variables: $u$ is a vector. Also, the external variables of each memprocessor can be split into two groups, "in" and "out" (the "out" of one memprocessor is connected to the "in" of another). In the figure, a blank circle is one component of the vector. Let us also assume that we have a device which, when connected to the required memprocessors, can read their states all at once.
Such a device, connected to several memprocessors, can read the states of all of them at once, so their global state is defined as $u_1 \diamond u_2$, where $\diamond$ is a commutative, associative operation. The operation $\diamond$ is itself built from commutative, associative operations on the "in" and "out" components, chosen so that composing the states of connected memprocessors again yields a valid state of the same form.
Now, having a set of integers $G = \{a_1, \dots, a_n\}$, we define a message as $m = a_{j_1} + \dots + a_{j_k}$, where the indexes are taken from the various subsets of $\{1, \dots, n\}$. The set of messages thus consists of $2^n - 1$ equally probable messages, and the amount of information according to Shannon equals $\log_2(2^n - 1)$.
Now, taking $n$ memprocessors, we set one nonzero component in each: memprocessor $j$ encodes $a_j$. Thus we have encoded all the elements on $n$ memprocessors. On the other hand, by connecting to the required memprocessors and reading out their global state (by the formulas above, it is exactly the sum of the selected elements that comes out), we can read any possible message $m$. In other words, $n$ memprocessors can encode (compress the information, if you like) about $2^n$ messages simultaneously.
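A quick sanity check of the counting argument (plain Python, nothing memcomputing about it): storing just $n$ numbers is enough to "address" all $2^n - 1$ nonempty subset sums, because each sum would be recoverable by one combined read:

```python
from itertools import combinations

G = [2, 3, 5, 9]                      # n memprocessors store n numbers
n = len(G)

# every nonempty subset corresponds to one readable "message": its sum
messages = [sum(c) for k in range(1, n + 1) for c in combinations(G, k)]
print(len(messages))                  # -> 15, i.e. 2**n - 1 messages

# the device reading memprocessors j1..jk would return exactly that one sum,
# so n stored values encode ~2**n messages simultaneously
assert len(messages) == 2**n - 1
```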
The algorithm for solving SSP using Exponential Information Overhead
Here I have to admit that I could not understand some details of this algorithm (the fact is that I am not strong in electrical engineering and signal processing, and the authors apparently decided not to spell everything out for such ignoramuses), but the general idea is as follows.
To start, they suggest looking at the function

$$g(x) = -1 + \prod_{j=1}^{n}\left(1 + e^{i 2\pi a_j x}\right).$$

If we expand the brackets, we get products over various sets of indexes (denote such a set by $K$), and each of them equals

$$e^{\,i 2\pi x \sum_{j \in K} a_j}.$$
In other words, our function contains information about the sums of all subsets. Now, if we regard the function $g$ as a signal source, each exponential makes its own contribution to the resulting signal — a contribution at frequency $\sum_{j \in K} a_j$.
Now all we need to do is apply the Fourier transform to this signal and look at which frequencies are present in it. If there is a component at frequency $s$, then a subset with sum $s$ exists.
If we were solving this problem on an ordinary computer, we could now apply the fast Fourier transform. Let us estimate the asymptotics.
To do so, let us estimate the number of points that must be sampled from the signal. By Kotelnikov's theorem (the Nyquist–Shannon sampling theorem), we need a number of points proportional to $f_{\max}$, a bound on the maximum possible frequency in the signal. In the article, the authors introduced an additional variable $p$ proportional to $f_{\max}$ and expressed the asymptotics in terms of it.
Thus, using the FFT we can solve the problem in $O(p \log p)$. It should be noted here that, as in the knapsack problem (and SSP is a special case of the knapsack problem), $p$ grows exponentially with the size of the input. For our problem one can also use Goertzel's algorithm, which gives $O(p)$. The method proposed by the authors allows us to get rid of $p$ in the asymptotics altogether, which gives linear time.
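For completeness, here is the single-frequency check via Goertzel's algorithm (my own sketch): instead of computing the full spectrum, it tests one bin $s$ in $O(p)$ operations, which is all the decision problem needs:

```python
import cmath

def goertzel_bin(samples, freq_bin):
    """Goertzel recurrence: magnitude of DFT bin `freq_bin` of `samples`
    in O(len(samples)) time, without computing the full transform."""
    n = len(samples)
    w = 2 * cmath.pi * freq_bin / n
    coeff = 2 * cmath.cos(w)
    s1 = s2 = 0j
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    # |s1 - e^{-iw} * s2| equals |X[freq_bin]| (up to a pure phase factor)
    return abs(s1 - cmath.exp(-1j * w) * s2)

# Signal g(x) sampled at N = 16 > f_max points, as in the FFT approach above.
a, N = [2, 3, 5], 16
g = []
for k in range(N):
    prod = 1 + 0j
    for aj in a:
        prod *= 1 + cmath.exp(2j * cmath.pi * aj * k / N)
    g.append(prod - 1)

print(round(goertzel_bin(g, 8) / N))   # -> 1: a subset summing to 8 exists
print(round(goertzel_bin(g, 4) / N))   # -> 0: none sums to 4
```

The two-term recurrence touches each sample once, so checking a single target sum costs one pass over the $p$ samples rather than an $O(p \log p)$ transform.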
Now, in my own words (for a more detailed treatment, refer to the original articles), here is how they achieved this.
We take analog memprocessors whose internal value is the value of some number from $G$. Addition and multiplication, respectively, are taken as the operators.
But that is in our model. In hardware, it turns out that each memprocessor is a signal generator with a frequency corresponding to a number from $G$, and the shared state of the memprocessors is simply the sum of the signals. It turns out that these memprocessors simulate the function $g$.
And now, to read out the result, we need to check whether the given frequency is present in the signal. Instead of implementing the FFT, they built a piece of hardware that passes only the given frequency (here, too, I did not entirely understand how — but that is the fault of my knowledge of electronics), and it runs in constant time.
In total, the overall time asymptotics came out linear, as did the asymptotics in the number of memprocessors. Shall we set off the fireworks? Not so fast.
Some problems with the model
In fact, the authors have slyly shifted the "hard" part of the problem — the part that gives us the exponential — from the software side to the hardware side. The earlier article mentions this only in general terms; in the July article they admit it, but in just a few lines.
The whole trouble is in the encoding of the signal (I found clear explanations here). Because we encode the values into an analog signal while using discrete signal generators, we now need exponential precision when measuring the signal level (in the piece of hardware that isolates the required frequency), which may well demand exponential time.
The authors claim that this trouble can be sidestepped by using analog signal generators instead of discrete ones. But I have serious doubts that analog circuits can be used for arbitrary $n$ without drowning in noise (it was because of noise that analog computers were abandoned in their day in favor of digital ones).
The wonderful magic did not happen. NP-complete problems are still hard to compute. So why did I write all this? Mainly because, although the physical implementation is difficult, the model itself seems very interesting to me, and studying such models is worthwhile. Soon (if not already) similar models will matter in many fields of science.
For example, as I already mentioned, neural networks are a special case of the UMM. It is quite possible that we will learn a bit more about neural networks if we look at them from this other angle, using a somewhat different mathematical apparatus.
This article is a translation of the original post at habrahabr.ru/post/274593/