What do robots and humans have in common, or why did John von Neumann believe machines should be "alive"?
The Pitch Avatar team has collected several quotes from the creator of “cellular automaton” theory about the similarities and differences between living organisms and machines.
John von Neumann (1903-1957) was a Hungarian-American mathematician, physicist, engineer, and computer science theorist. Among his many contributions, he believed that various challenges in engineering and computing could be solved by finding and studying analogous solutions in nature. He developed the concept of “cellular automata” – also known as “von Neumann automata” – devices capable of self-replication and, in one variation, forming complex systems from multiple simple automata. The quotes are taken from his work “The General and Logical Theory of Automata,” published in 1951.
- Natural organisms are, as a rule, much more complicated and subtle, and therefore much less well understood in detail, than are artificial automata. Nevertheless, some regularities which we observe in the organization of the former may be quite instructive in our thinking and planning of the latter; and conversely, a good deal of our experiences and difficulties with our artificial automata can be to some extent projected on our interpretations of natural organisms.
In these words, von Neumann clearly expresses the idea that the successful development of robotics would depend directly on how well the creators of machines and software understand and apply the regularities that govern the development and life of natural organisms.
- The neuron transmits an impulse. This appears to be its primary function, even if the last word about this function and its exclusive or non-exclusive character is far from having been said. The nerve impulse seems in the main to be an all-or-none affair, comparable to a binary digit.
- The stimulation of a neuron, the development and progress of its impulse, and the stimulating effects of the impulse at a synapse can all be described electrically. The concomitant chemical and other processes are important in order to understand the internal functioning of a nerve cell. They may even be more important than the electrical phenomena. They seem, however, to be hardly necessary for a description of a neuron as a “black box,” an organ of the all-or-none type. Again the situation is no worse here than it is for, say, a vacuum tube. Here, too, the purely electrical phenomena are accompanied by numerous other phenomena of solid state physics, thermodynamics, mechanics. All of these are important to understand the structure of a vacuum tube, but are best excluded from the discussion, if it is to treat the vacuum tube as a “black box” with a schematic description.
Von Neumann’s insights into the functions of neurons, as one might expect, became a fundamental building block in the development of modern artificial neural networks. Equally significant is his direct acknowledgment of the striking similarities between the biological nervous system and artificial neural networks.
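To make the “all-or-none” analogy concrete, here is a minimal sketch of a McCulloch-Pitts-style threshold unit. It is illustrative Python, not something from von Neumann’s paper: the unit is treated exactly as the “black box” the quote describes, and its output is a single binary digit, 1 if the weighted sum of inputs reaches a threshold and 0 otherwise.

```python
# A minimal McCulloch-Pitts-style threshold neuron: an "all-or-none" unit
# whose output is a single binary digit, regardless of what chemistry (or
# electronics) sits inside the box. Names and numbers are illustrative only.

def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Example: a unit that fires only when both of its excitatory inputs fire
# (a logical AND), one of the simplest "organs of the all-or-none type".
if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", threshold_neuron([a, b], [1, 1], threshold=2))
```

Everything happening “inside” the unit is reduced to one number and one comparison, which is precisely the schematic, black-box level of description the quote argues is sufficient for building larger circuits.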
- The living organisms are very complex – part digital and part analogy mechanisms. The computing machines, at least in their recent forms to which I am referring in this discussion, are purely digital.
Unlike science fiction writers, who often speculate about “erasing” the boundaries between natural and artificial intelligence, von Neumann remained grounded in reality. He consistently emphasized that while biological organisms and machines share similarities, they are fundamentally different in their underlying principles.
- …if a living organism is mechanically injured, it has a strong tendency to restore itself. If, on the other hand, we hit a man-made mechanism with a sledge hammer, no such restoring tendency is apparent. If two pieces of metal are close together, the small vibrations and other mechanical disturbances, which always exist in the ambient medium, constitute a risk in that they may bring them into contact. If they were at different electrical potentials, the next thing that may happen after this short circuit is that they can become electrically soldered together and the contact becomes permanent. At this point, then, a genuine and permanent breakdown will have occurred. When we injure the membrane of a nerve cell, no such thing happens. On the contrary, the membrane will usually reconstitute itself after a short delay. It is this mechanical instability of our materials which prevents us from reducing sizes further. This instability and other phenomena of a comparable character make the behavior in our componentry less than wholly reliable, even at the present sizes. Thus it is the inferiority of our materials, compared with those used in nature, which prevents us from attaining the high degree of complication and the small dimensions which have been attained by natural organisms.
- Natural organisms are sufficiently well conceived to be able to operate even when malfunctions have set in. They can operate in spite of malfunctions, and their subsequent tendency is to remove these malfunctions. An artificial automaton could certainly be designed so as to be able to operate normally in spite of a limited number of malfunctions in certain limited areas. Any malfunction, however, represents a considerable risk that some generally degenerating process has already set in within the machine. It is, therefore, necessary to intervene immediately, because a machine which has begun to malfunction has only rarely a tendency to restore itself, and will more probably go from bad to worse. All of this comes back to one thing. With our artificial automata we are moving much more in the dark than nature appears to be with its organisms. We are, and apparently, at least at present, have to be much more “scared” by the occurrence of an isolated error and by the malfunction which must be behind it. Our behavior is clearly that of overcaution, generated by ignorance.
John von Neumann was not the first scientist to recognize that theoretical advancements were outpacing the technical ability to implement them. However, as both an engineer and a theorist, he articulated this gap with remarkable clarity, highlighting how technological progress lags behind nature’s achievements over billions of years of evolution. In doing so, he subtly pointed to nature as a model for those striving to miniaturize technology and address challenges related to machine errors, malfunctions, and failures.
- There is a very obvious trait, of the “vicious circle” type, in nature, the simplest expression of which is the fact that very complicated organisms can reproduce themselves. We are all inclined to suspect in a vague way the existence of a concept of “complication.” This concept and its putative properties have never been clearly formulated. We are, however, always tempted to assume that they will work in this way. When an automaton performs certain operations, they must be expected to be of a lower degree of complication than the automaton itself. In particular, if an automaton has the ability to construct another one, there must be a decrease in complication as we go from the parent to the construct. That is, if A can produce B, then A in some way must have contained a complete description of B. In order to make it effective, there must be, furthermore, various arrangements in A that see to it that this description is interpreted and that the constructive operations that it calls for are carried out. In this sense, it would therefore seem that a certain degenerating tendency must be expected, some decrease in complexity as one automaton makes another automaton. Although this has some indefinite plausibility to it, it is in clear contradiction with the most obvious things that go on in nature. Organisms reproduce themselves, that is, they produce new organisms with no decrease in complexity. In addition, there are long periods of evolution during which the complexity is even increasing. Organisms are indirectly derived from others which had lower complexity. Thus there exists an apparent conflict of plausibility and evidence, if nothing worse.
- It is relatively easy to draw up such a list, that is, to write a catalogue of “machine parts” which is sufficiently inclusive to permit the construction of the wide variety of mechanisms here required, and which has the axiomatic rigor that is needed for this kind of consideration. The list need not be very long either. It can, of course, be made either arbitrarily long or arbitrarily short. It may be lengthened by including in it, as elementary parts, things which could be achieved by combinations of others. It can be made short – in fact, it can be made to consist of a single unit by endowing each elementary part with a multiplicity of attributes and functions… The problem of self-reproduction can then be stated like this: Can one build an aggregate out of such elements in such a manner that if it is put into a reservoir, in which there float all these elements in large numbers, it will then begin to construct other aggregates, each of which will at the end turn out to be another automaton exactly like the original one? This is feasible…
- There is… a certain minimum level where… degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible. This fact, that complication, as well as organization, below a certain minimum level is degenerative, and beyond that level can become self-supporting and even increasing, will clearly play an important role in any future theory of the subject.
Identifying one of the key challenges of “self-reproducing robotics,” von Neumann proposed a solution. To do so, he relied not only on his own reasoning but also on the works of Alan Turing and the McCulloch-Pitts theory, which introduced the concept of an artificial neuron as a fundamental unit of an artificial neural circuit. In other words, he laid the foundation for a path where the most promising technological progress lies in advancing universal computers and artificial neural networks—enabling the creation of self-reproducing, self-learning machines. Such machines, in turn, would almost inevitably evolve, becoming a kind of technological analog to living nature. It is important to emphasize that recognizing this possibility should not provoke fear but instead serve as motivation to develop mechanisms for managing and controlling the process of machine evolution.
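Von Neumann’s actual self-reproducing automaton, a cellular construction with 29 states per cell, is far too large to show here, but a much simpler toy gives the flavour of “complication beyond the minimum level.” The sketch below, which assumes nothing beyond a one-dimensional additive rule often called Rule 90 (each cell becomes the XOR of its two neighbours), is illustrative Python rather than anything from the 1951 paper: because the rule is linear, any small seed pattern reappears after a power-of-two number of steps as two exact copies of itself.

```python
# A toy illustration of pattern self-reproduction in a cellular automaton.
# This is NOT von Neumann's 29-state universal constructor, only the additive
# Rule 90: each cell's next state is the XOR of its two neighbours. By
# linearity, after 2**n steps any seed narrower than 2**n cells reappears as
# two exact copies, shifted 2**n cells to the left and to the right.

def step(cells):
    """One update of the XOR (Rule 90) automaton on a periodic row."""
    n = len(cells)
    return [cells[i - 1] ^ cells[(i + 1) % n] for i in range(n)]

def show(cells):
    print("".join("#" if c else "." for c in cells))

if __name__ == "__main__":
    width = 64
    cells = [0] * width
    # Seed a small "organism": the pattern 1 1 0 1 near the middle of the row.
    for offset, bit in enumerate([1, 1, 0, 1]):
        cells[30 + offset] = bit

    show(cells)                 # the original seed
    for _ in range(8):          # 8 = 2**3 steps
        cells = step(cells)
    show(cells)                 # two copies of the seed, shifted 8 cells left and right
```

The copies here arise from the linearity of the XOR rule rather than from an explicit “description plus constructor” inside the pattern, so this toy sidesteps, rather than resolves, the complication paradox quoted above; it does, however, show how very simple local rules, once a minimal setup is in place, can produce exact copies of a configuration instead of degrading it.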
To conclude, let’s allow ourselves a moment of self-promotion. If you’re an online content creator, an active user of social media, or a video-hosting platform enthusiast, we highly recommend trying Pitch Avatar. This AI-powered assistant specializes in creating “live” virtual presenters and speakers based on uploaded images, text, and voice samples.
But Pitch Avatar goes beyond just generating digital lookalikes. When properly customized, its virtual speakers can actively engage with audiences—answering questions, retrieving information, recording feedback, and more.
Additionally, Pitch Avatar offers a suite of powerful tools for content creators, including:
- AI-powered text generation for scriptwriting
- Slide builder and editor
- Built-in questionnaires and tests
- Multilingual translation capabilities
- Real-time communication features for seamless audience interaction
With these capabilities, Pitch Avatar streamlines and enhances the content creation process, making it easier and more efficient than ever.
Try it today and experience the future of AI-driven content creation!