Developers Club geek daily blog

This article can be considered a loose translation of (or rather an attempt to understand) this paper. And yes, it is written for mathematicians rather than for a broad audience.

A small spoiler: at first it all seemed like magic to me, but then I spotted the dirty trick…

Today the Turing machine (TM from here on) is the universal definition of the concept of an algorithm, and hence the universal definition of a "problem solver". There are many other models of computation (the lambda calculus, Markov algorithms, and so on), but all of them are mathematically equivalent to the TM, so although they are interesting in their own right, nothing in the theoretical picture changes significantly.

Generally speaking, there are other models: the nondeterministic Turing machine, the quantum Turing machine. However, these are (so far) only abstract models, not implemented in practice.

Half a year ago, Science Advances published an interesting paper with a model of computation that differs substantially from the TM and that may well be implementable in practice (in fact, the paper describes how the authors solved the SSP problem on real hardware).

And yes: the most interesting thing about this model is that, according to the authors, it can solve (some) problems from the class of NP-complete problems in polynomial time and memory.

It is probably worth stating right away that this result does not imply P = NP. That problem is not posed as "solve a problem q \in NP-complete in n^{const} time"; it asks whether the nondeterministic Turing machine can be simulated on an ordinary Turing machine in polynomial time. Since this is an entirely different model of computation, one cannot speak of the classical complexity classes here.

I am skeptical about the possibility of building this machine in hardware at present (I will explain why below), but the model struck me as interesting enough to analyze, and it may well find applications in other fields of science.

A brief introduction

What does a computer (more precisely, the most popular implementation of the TM, the von Neumann architecture) consist of today? An input/output interface, memory, and a CPU that is physically separate from them. The CPU contains both a unit controlling the course of the computation and the blocks that actually perform it.
[Figure]
The physical separation of the CPU means we must spend a lot of time on data transfer. This is precisely why the various levels of cache memory were invented. The cache certainly makes life easier, but it does not solve all the data-transfer problems.

The proposed model was inspired by the workings of the brain (a rather hackneyed phrase, but here it genuinely fits). Its essence is that computation happens not in a separate device to which data must be transferred, but directly in memory. The order of computation is controlled by an external device (the Control Unit).
[Figure]
This model of computation is called the Universal Memcomputing Machine (I have not translated the term; below I use the abbreviation UMM).

In this article we will first recall how the TM is formally defined, then look at the definition of the UMM, see by example how to specify an algorithm for solving a problem on a UMM, and consider several properties, including the most important one: information overhead.

Formal description of the model

Universal Turing Machine (UTM)

I think you all remember what a Turing machine is (otherwise there is no point in reading this article): the tape, the head, the whole business. Let us only recall how it is defined formally.

A Turing machine is a tuple

UTM = (Q, \Gamma, b, \Sigma, \delta, q_0, F),

where Q is the set of possible states,
\Gamma is the set of possible tape symbols,
b \in \Gamma is the blank symbol,
\Sigma is the set of input symbols,
q_0 is the initial state,
F \subseteq Q is the set of final states,

\delta : Q \smallsetminus F \times \Gamma \rightarrow Q \times \Gamma \times \{L, N, R\}, where L, N, R denote, respectively, a shift to the left, no shift, and a shift to the right. That is, \delta is our transition table.
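For concreteness, the seven-tuple above can be wired into a tiny simulator. The machine below is my own toy example (not from the paper): it scans right over the input and appends a 1 at the first blank cell.

```python
# Minimal simulator built from the 7-tuple (Q, Gamma, b, Sigma, delta, q0, F).
# The example machine is a toy, not from the paper: it scans right over
# the input and writes a '1' into the first blank cell, then halts.

def run_tm(delta, q0, F, tape, blank="b"):
    """Run delta: (Q \\ F) x Gamma -> Q x Gamma x {L, N, R} until a final state."""
    cells = dict(enumerate(tape))
    q, head = q0, 0
    while q not in F:
        sym = cells.get(head, blank)
        q, write, move = delta[(q, sym)]
        cells[head] = write
        head += {"L": -1, "N": 0, "R": 1}[move]
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Transition table: move right over 0/1, replace the first blank with 1, halt.
delta = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "b"): ("qf", "1", "N"),
}
print(run_tm(delta, "q0", {"qf"}, "110"))  # -> 1101
```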


To begin, let us define the memory cell of a UMM: the memprocessor.

A memprocessor is defined as a 4-tuple (x, y, z, \sigma), where x is the memprocessor state, y is a vector of internal variables, and z is a vector of "external" variables, that is, variables connecting different memprocessors. In other words, if z_1 and z_2 are the vectors of external variables of two memprocessors, then the two memprocessors are connected \Leftrightarrow z_1 \cap z_2 \neq \emptyset. Also, if a memprocessor is not connected to anything, then z = z(x, y), that is, z is determined by the internal state alone.

And finally, \sigma[x, y, z] = (x', y'), that is, \sigma is the new-state operator.

Let me remind you that a memprocessor is not the kind of processor we usually picture in our heads. It is rather a memory cell equipped with a (programmable) function for acquiring a new state.
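As a sketch (the class, field names, and concrete types are my illustrative assumptions, not the paper's), the 4-tuple and the shared-variable notion of connectivity can be modeled like this:

```python
from dataclasses import dataclass, field

# Sketch of the memprocessor 4-tuple (x, y, z, sigma); all names and the
# concrete types are illustrative assumptions, not taken from the paper.

@dataclass
class Memprocessor:
    x: int                                   # state
    y: list = field(default_factory=list)    # internal variables
    z: frozenset = frozenset()               # external (connecting) variables

    def step(self, sigma):
        """Apply the new-state operator: sigma[x, y, z] = (x', y')."""
        self.x, self.y = sigma(self.x, self.y, self.z)

def connected(m1, m2):
    # Two memprocessors are connected iff their external variables intersect.
    return bool(m1.z & m2.z)

a = Memprocessor(0, [1], frozenset({"w1"}))
b = Memprocessor(0, [2], frozenset({"w1", "w2"}))
print(connected(a, b))  # -> True
```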

Universal Memcomputing Machine (UMM)

Now let us give the formal definition of the UMM. A UMM is a model of a computer built from interconnected memprocessors (which, generally speaking, can be either digital or analog).

UMM = (M, \Delta, \mathcal{P}, S, \Sigma, p_0, s_0, F),

where M is the set of possible memprocessor states,
\mathcal{P} is the set of pointers to memprocessors (used in \delta to select the needed memprocessors),
S is the set of indexes \alpha (the number of the \delta function being used),
\Sigma is the initial state of the memprocessors,
p_0 is the initial set of pointers,
s_0 is the initial operator index \alpha,
F \subseteq M is the set of final states.

\Delta = \{ \delta_{\alpha} \ | \ \delta_{\alpha}: M^{m_{\alpha}} \smallsetminus F \times \mathcal{P} \rightarrow M^{m'_{\alpha}} \times \mathcal{P}^2 \times S \},

where m_{\alpha} < \infty is the number of memprocessors used as input by the function \delta_{\alpha}, and m'_{\alpha} < \infty is the number of memprocessors used as output by \delta_{\alpha}.

By analogy with the Turing machine, as you may have guessed, the \delta_{\alpha} are transition functions, the analog of the state table. For an example, let p_{\alpha}, p'_{\alpha}, p_{\beta} \in \mathcal{P} be pointers to memprocessors with p_{\alpha} = \{ i_1, \dots, i_{m_{\alpha}} \}, let x(p) be the vector of states of these memprocessors, and let \beta \in S be the index of the next command; then

\delta_{\alpha} [x(p_{\alpha})] = (x'(p'_{\alpha}), \beta, p_{\beta})

Generally speaking, formalism aside, the main difference between a UMM and a TM is that in a UMM, acting on one memory cell (that is, a memprocessor), you automatically act on its neighborhood as well, without additional calls from the Control Unit.

Let's note two properties of the UMM that follow directly from its definition.
  • Property 1. Intrinsic parallelism (I could not find a good translation of this term, so I left it as is). Any function \delta_{\alpha} can be run on any set of memprocessors simultaneously. In a Turing machine this would require introducing additional tapes and heads.
  • Property 2. Functional polymorphism. Unlike a Turing machine, a UMM can have many different operators \delta_{\alpha}.

[Figure]
Generally speaking, it is not that hard to modify the Turing machine so that it has these properties too, but the authors insist on them.

A few more remarks on the definition. A UMM, unlike the Turing machine, can have an infinite state space with a finite number of memprocessors (because they can be analog).

Incidentally, a UMM can also be viewed as a generalization of neural networks.

Let's prove one theorem.

A UMM is a universal machine (that is, a machine that can simulate the operation of any TM).


In other words, we need to show that the Turing machine is a special case of the UMM. (Whether the converse holds has not been proved; if the authors of the paper are right, proving it would be equivalent to proving P = NP.)

In the definition of the UMM, let M = Q \cup \Gamma. Denote one of the memprocessors by j_s and the remaining (possibly infinitely many) by j. Next we define the pointer p = \{j_s, j\}. We will use j_s to store the state q \in Q, and j to store the tape symbol (from \Gamma).

\Delta will consist of the single function \delta [ x(p) ] = (x'(p), p') (we drop \beta, since there is only one function). The new state x' is determined by the transition table of the TM: x'(j_s) is the new state, and x'(j) is the new tape symbol. The new pointer is p' = \{j_s, j'\}, with j' = j if the head does not move, j' = j + 1 if it moves right, and j' = j - 1 if it moves left. As a result, after writing the initial state q_0 and the input symbols from \Sigma into x, the UMM with \Delta = \{\delta\} simulates the universal Turing machine.

The theorem is proved.


Let's look at an example of how problems can be solved on a UMM (for now, just to get acquainted with the model). We take the subset sum problem (SSP).

Given a set G \subset \mathbb{Z} and a number s, is there a subset K \subseteq G whose elements sum to s?

Exponential algorithm

Let the memprocessors of our UMM be arranged in a matrix (see the figure). We define three operations.
[Figure]
  1. \chi is the computation proper. Using activation lines, we can select the rows and the bounding columns within which the computation takes place. The computation itself consists of adding the value of the leftmost cell to the whole row.
  2. \mu is the data-movement operation. The control node selects two columns, and the values from the first are copied into the second. The control node does not necessarily perform the copy itself; it merely activates the required rows of the columns.
  3. p is an operation similar to \mu, except that it takes a single value and writes it into a column.

Combining these three operations, we obtain the transition function \delta = \mu \circ \chi \circ p.
[Figure]
At the first step of the algorithm we obtain the sums of all subsets of size n-1, at the second step the sums of subsets of size n-2, and so on. As soon as we find the required number (it will appear in the left column), we have the answer. Each step takes one call of \delta, so the algorithm runs in n-1 steps.
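To see what this scheme computes, here is the same level-by-level procedure restated in plain Python (an illustration of the logic only; it does not model the memprocessor matrix or its parallelism):

```python
from itertools import combinations

# The level-by-level scheme above: step k computes the sums of all subsets
# of size n - k and checks for the target s. Illustrative only; it does not
# model the memprocessor matrix.

def ssp_levels(G, s):
    n = len(G)
    if s == 0 or s == sum(G):          # empty set and the full set
        return True
    for k in range(1, n):              # step k: subsets of size n - k
        if s in {sum(c) for c in combinations(G, n - k)}:
            return True
    return False

print(ssp_levels([2, 3, 5], 5))   # -> True  ({5} or {2, 3})
print(ssp_levels([2, 3, 5], 4))   # -> False
```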

Now let us count how many memprocessors these operations require. At iteration k we need \binom{n-1}{k-1} (n+2-k) memprocessors. By Stirling's formula this is estimated as (n/2 \pi)^{1/2} 2^{n-1}. The number of nodes grows exponentially.
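A quick numerical sanity check of this estimate (my own verification, not from the paper): the peak over k of \binom{n-1}{k-1}(n+2-k) does approach (n/2 \pi)^{1/2} 2^{n-1} as n grows.

```python
from math import comb, pi, sqrt

# Check the Stirling estimate quoted in the text: the peak over k of
# C(n-1, k-1) * (n + 2 - k) behaves like sqrt(n / (2*pi)) * 2^(n-1).

def peak_memprocessors(n):
    return max(comb(n - 1, k - 1) * (n + 2 - k) for k in range(1, n + 1))

def stirling_estimate(n):
    return sqrt(n / (2 * pi)) * 2 ** (n - 1)

for n in (20, 40, 80):
    print(n, peak_memprocessors(n) / stirling_estimate(n))  # ratio tends to 1
```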

I think it is now more or less clear what this object is. Let us move on to the tastiest thing the UMM offers us, namely its third property: information overhead.

Exponential Information Overhead

Suppose we have n memprocessors; denote the state of the selected memprocessors by x(p) = (x(j_1), \dots, x(j_n)). The state of an individual memprocessor, x(j) = u_j, is held in its internal variables, u_j \in M_a; u_j is a vector. In addition, the external variables of each memprocessor are split into two groups, "in" and "out" (the out of one memprocessor connects to the in of another). In the picture, a blank circle is a component with (u_j)_h = 0. Assume also that we have a device which, once connected to the required memprocessors, can read u_j directly.
[Figure]
Such a device, connected to several memprocessors, can read their joint state, so the global state is defined as u_{j_1, j_2} = u_{j_1} \diamond u_{j_2}, where \diamond : R^d \times R^d \rightarrow R^d is a commutative, associative operation and d = \dim(u_j). The operation is defined as follows:

(u_{j_1} \diamond u_{j_2})_{h \star k} = (u_{j_1})_h  \ast (u_{j_2})_k,

where \star: \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{Z} and \ast: R \times R \rightarrow R are commutative, associative operations with h \star 0 = h and x \ast 0 = 0. Moreover, if h \star k = h' \star k' for some h, k, h', k', then the contributions are accumulated:

(u_{j_1} \diamond u_{j_2})_{h \star k} = \left[ (u_{j_1})_h \ast (u_{j_2})_k \right] \oplus \left[ (u_{j_1})_{h'} \ast (u_{j_2})_{k'} \right],

where \oplus: R \times R \rightarrow R is a commutative, associative operation with x \oplus 0 = x.

Now, given a set G = \{a_1, \dots, a_n\} of integers, define the message m = (a_{\sigma_1} \star \dots \star a_{\sigma_k}) \cup (a_{\sigma_1}, \dots, a_{\sigma_k}), where (\sigma_1, \dots, \sigma_k) are indexes drawn from the various subsets of \{1, \dots, n\}. The set of messages M then consists of \sum_{j=0}^{n} \binom{n}{j} = 2^n equally probable messages m, so the Shannon information content is I(m) = -\log_2(2^{-n}) = n.
[Figure]
Now, taking n memprocessors, we set the nonzero components u_{j_0}, u_{j_h}, where h \in \{1, \dots, n\}. In this way we have encoded all the elements of G in the memprocessors. On the other hand, by connecting to the required memprocessors and reading their global state (by the formulas above, it is exactly the sum of elements), we can read off any possible message m. In other words, n memprocessors can encode (compress, if you like) about 2^n messages simultaneously.
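A concrete instance of the \diamond operation, assuming \star is addition on component indexes, \ast is multiplication, and \oplus is addition: under these choices \diamond is ordinary polynomial convolution, and the global state of the n memprocessors is the coefficient vector of \prod_j (1 + t^{a_j}), whose nonzero positions are exactly the achievable subset sums. A sketch:

```python
import numpy as np

# With star = integer addition on indexes, ast = multiplication and
# oplus = addition, the diamond operation is polynomial convolution.
# Encoding each a_j in G as the vector with 1s at positions 0 and a_j,
# the global state becomes the coefficient vector of prod_j (1 + t^a_j):
# entry s counts the subsets of G summing to s.

def diamond(u1, u2):
    return np.convolve(u1, u2)

def subset_sum_spectrum(G):
    state = np.array([1.0])                  # the empty product
    for a in G:
        u = np.zeros(a + 1)
        u[0] = u[a] = 1.0                    # nonzero components u_{j_0}, u_{j_h}
        state = diamond(state, u)
    return state

spec = subset_sum_spectrum([2, 3, 5])
print(np.nonzero(spec)[0].tolist())          # -> [0, 2, 3, 5, 7, 8, 10]
```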

An algorithm for solving SSP using Exponential Information Overhead

Here I must admit that I could not understand parts of this algorithm (the fact is that I am not strong in electrical engineering and signal processing, and the authors apparently decided not to spell everything out for such ignoramuses), but the general idea is as follows.

To start, they suggest looking at the function

g(x) = -1 + \prod_{j=1}^n (1 + e^{i 2 \pi a_j x})

If we expand the product, we obtain terms over various sets of indexes j (denote such a set by P), and they equal

\prod_{j \in P}  e^{i 2 \pi a_j x} = \exp \left(i 2 \pi x \sum_{j \in P} a_j \right)

In other words, our function g contains information about the sums of all subsets of G. Now, if we regard g as a signal source, each exponential contributes to the resulting signal, and its contribution is at frequency \sum_{j \in P} a_j.

Now all we need to do is apply the Fourier transform to this signal and see which frequencies it contains. If there is a component at frequency s, then a subset of G with sum s exists.
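This spectral idea is easy to check numerically on an ordinary computer (a simulation of the idea, not of the analog hardware; the crude frequency bound f_max = sum(G) is my assumption):

```python
import numpy as np

# Sample g(x) = -1 + prod_j (1 + exp(2*pi*i*a_j*x)) and inspect its DFT:
# a nonzero bin s means some nonempty subset of G sums to s. The sample
# count follows the Kotelnikov/Nyquist bound N = 2*f_max + 1.

def subset_sums_via_dft(G):
    f_max = sum(G)                           # crude bound on the top frequency
    N = 2 * f_max + 1
    x = np.arange(N) / N
    g = -1 + np.prod([1 + np.exp(2j * np.pi * a * x) for a in G], axis=0)
    spectrum = np.fft.fft(g) / N             # bin s ~ #subsets with sum s
    return {s for s in range(f_max + 1) if abs(spectrum[s]) > 0.5}

print(sorted(subset_sums_via_dft([2, 3, 5])))  # -> [2, 3, 5, 7, 8, 10]
```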

If we were solving this problem on an ordinary computer, we could apply the fast Fourier transform at this point. Let us estimate the asymptotics.

To do that, we estimate the number of points that must be sampled from the signal. By Kotelnikov's theorem we need N = 2 f_{max} + 1 points, where f_{max} < n \max\{|a_j|\} is a bound on the maximum possible frequency. In the paper the authors introduced an additional variable p, proportional to N, and expressed the asymptotics in terms of it.

Thus, using the FFT we can solve the problem in O(p \log(p)). Note that, just as in the knapsack problem (and SSP is a special case of knapsack), p grows exponentially. For our problem we could also use Goertzel's algorithm, which gives O(n p). The method proposed by the authors removes p from the asymptotics altogether, which gives linear time.
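For reference, here is a sketch of Goertzel's algorithm mentioned above: it evaluates the magnitude of a single DFT bin in O(N) time, so one candidate sum s can be tested without computing the full FFT.

```python
import numpy as np

# Goertzel's algorithm: magnitude of a single DFT bin k in O(N) time,
# so one candidate sum s can be tested without a full FFT.

def goertzel_bin(samples, k):
    N = len(samples)
    w = 2 * np.pi * k / N
    coeff = 2 * np.cos(w)
    s1 = s2 = 0j
    for x in samples:                        # standard second-order recurrence
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return abs(s1 - np.exp(-1j * w) * s2)    # = |X[k]|

# The signal g from the text, sampled as before.
G = [2, 3, 5]
N = 2 * sum(G) + 1
t = np.arange(N) / N
g = -1 + np.prod([1 + np.exp(2j * np.pi * a * t) for a in G], axis=0)
print(goertzel_bin(g, 5) / N > 0.5)          # -> True  (5 is a subset sum)
print(goertzel_bin(g, 4) / N > 0.5)          # -> False (4 is not)
```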

Now, in my own words (for a more detailed treatment, see the original papers), here is how they achieved this.

We take analog memprocessors whose internal value is the value of some number from G. As the operators \star and \ast we take addition and multiplication, respectively.

But that is in our model. In hardware, each memprocessor turns out to be a signal generator at the corresponding frequency (matching a number from G), and the shared state of the memprocessors is simply the sum of the signals. These memprocessors thus realize the function g.
[Figure]
And now, to read off the result, we need to check whether a given frequency is present in the signal. Instead of implementing the FFT, they built a piece of hardware that passes only the specified frequency (here again I did not fully understand how; blame my knowledge of electronics), and it works in constant time.

In total, the time complexity came out to O(1) and the memprocessor count to O(n). Time to fire the salute? Not so fast.

Some problems of the model

In fact, the authors have slyly shifted the "hard" part of the problem, the part that produces the exponential, from the software side to the hardware side. In the earlier paper this was mentioned only in general terms; in the July paper they admit it, but in just a few lines.

The whole issue is the encoding of the signal (I found clear explanations here). Because we encode analog signals while using discrete signal generators, we now need exponentially fine precision when reading the signal level (in the piece of hardware that isolates the required frequency), which may in turn demand exponential time.

The authors claim this trouble can be avoided by using analog signal generators instead of discrete ones. But I strongly doubt that analog circuits can be used for arbitrary n without drowning in noise (noise is the reason analog computers were abandoned in favor of digital ones in their day).


No wonderful magic happened. NP-complete problems are still hard to compute. So why did I write all this? Mainly because, even though a physical implementation is difficult, the model seems very interesting to me, and studying such models is worthwhile. Soon (if not already) similar models will matter in many areas of science.

For example, as I already mentioned, neural networks are a special case of the UMM. It is quite possible that we will learn a little more about neural networks if we look at them from this other angle, using a somewhat different mathematical apparatus.

