Why Super AI Won’t Be Super Dangerous

It cannot be built on the basis of existing large language models (LLMs). It will not suddenly appear tomorrow or the day after tomorrow. And it will not trigger a machine uprising. Why? The team at Pitch Avatar explains.

The popularity of modern artificial intelligence models has once again energized techno-alarmists and techno-pessimists. This is nothing surprising. Virtually every major innovation that has changed the familiar way of life has triggered similar reactions. The same happened with genetic engineering, space flight, nuclear energy, and steam engines. We suspect that even at the end of the Neolithic era, craftsmen making stone tools were warning about the harmful effects of using bronze.

The “Frankenstein complex” is also nothing new to humanity. The idea that machines might get out of control and displace humans was clearly formulated and publicly expressed as early as 1863 by the British writer Samuel Butler. So let us state our position directly: artificial intelligence poses no greater danger to humanity than other technologies created by our civilization. Now let us briefly support this position by addressing the main fears of AI alarmists.

Super AI will not be created from, or emerge on its own out of, large language models (LLMs)

Before we continue, let’s clarify what we mean by Super AI. Unlike specialized AI, Super AI (or, more precisely, general strong artificial intelligence) must outperform any human at any task. It would have to sing, solve mathematical problems, practice law, tap dance, and pilot an aircraft better than any human, all at the same time.

Now let’s return to LLMs. By definition, they cannot meet this requirement, and their very name tells us why. They are trained on written symbols and the concepts formed from those symbols. They can work with texts, mathematical expressions, and programming code, but not with reality itself. Moravec’s paradox is clearly visible in them.

Recall that the paradox, formulated by the Canadian roboticist Hans Moravec, can be summarized as follows: it is much easier for us to create devices that perform high-level cognitive tasks than devices that handle lower-level sensorimotor tasks.

A simple example: imagine a programmer. Every morning he wakes up, gets dressed, prepares his favorite coffee using his favorite recipe, pours it into a thermos mug, goes outside, gets on his bicycle, rides to work, walks up to his floor, sits down at his desk, and starts writing code.

We have already created artificial intelligence that can perform what seems to be the most complex part of this sequence – writing code. But such an AI cannot make coffee, pour it into a thermos, leave the house, ride a bicycle to the office, and so on. And importantly, we currently cannot train it to do that.

  • First, because linguistic AI models have no connection to the real world and no experience interacting with it. They cannot exist within it or obtain information from it because they lack a physical embodiment and sensors that function as sensory organs. They know how to arrange symbols and symbol sequences in the best possible way. They know how to respond to certain symbol sequences. But they do not have the necessary personal experience to understand what these things represent in reality.
  • Second, because we cannot replace knowledge based on personal experience with text. Any text is a code. We have a clear personal understanding of what this code represents. For example, when we hear the word “apple,” we immediately imagine an apple. We do not need a detailed description because we already know what it is based on visual, tactile, and taste experience.

Now imagine how many symbols would be required to describe every single phenomenon in our world, including an apple. This would make training LLM-based AI models orders of magnitude more complex and would require a proportional increase in resources. And even then, the result would still be worse than what can be obtained using real sensory organs in the real world.

  • Third, because we cannot yet place AI into the real world and train it there fully, like a child, following Alan Turing’s ideas. The issue is not creating a body with sensory organs for AI. That is actually the smallest problem. The real issue is computational power. Video, audio, smell, touch, and taste would generate such a massive amount of data that we simply do not yet have the resources to train models on it using existing principles (a rough estimate follows below). We would need to build new hardware and, more importantly, modify the code so drastically that the resulting model could hardly be called an LLM anymore.
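For a sense of the scale involved, here is a minimal back-of-envelope sketch in Python. Every number in it (camera resolution, sampling rates, sensor counts, corpus size) is an illustrative assumption rather than a measurement; the point is only that a single year of raw embodied sensory input dwarfs a typical text training corpus by orders of magnitude.

```python
# Back-of-envelope estimate: raw sensory data rate of an embodied agent
# versus the size of a text training corpus. All figures are illustrative
# assumptions, not measurements.

SECONDS_PER_YEAR = 60 * 60 * 16 * 365  # assume 16 "waking" hours a day

# Vision: two cameras, 1920x1080 pixels, 3 bytes per pixel, 30 frames per second
vision_bps = 2 * 1920 * 1080 * 3 * 30

# Audio: two channels, 48 kHz sampling, 2 bytes per sample
audio_bps = 2 * 48_000 * 2

# Touch/proprioception: say 1,000 sensors sampled at 1 kHz, 2 bytes each
touch_bps = 1_000 * 1_000 * 2

# Smell and taste: chemical sensors contribute comparatively little, call it 1 KB/s
chem_bps = 1_000

total_bps = vision_bps + audio_bps + touch_bps + chem_bps
bytes_per_year = total_bps * SECONDS_PER_YEAR

# A large text corpus: assume ~10 trillion tokens at ~4 bytes per token
text_corpus_bytes = 10e12 * 4

print(f"Sensory stream: {total_bps / 1e6:.0f} MB/s, "
      f"about {bytes_per_year / 1e15:.1f} PB per 'waking year'")
print(f"Text corpus:    about {text_corpus_bytes / 1e12:.0f} TB in total")
print(f"One year of raw embodiment is roughly "
      f"{bytes_per_year / text_corpus_bytes:.0f}x the whole text corpus")
```

Even under these deliberately modest assumptions, a single “waking year” of embodiment produces on the order of petabytes of raw data, roughly two hundred times the assumed text corpus, which is why training an embodied model “like a child” remains out of reach with today’s resources.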

Based on this, it becomes clear that AI models based on LLMs cannot surpass humans at everything and therefore cannot be considered general strong artificial intelligence.

Does this mean that intelligence and self-awareness are impossible within such models? That is more a philosophical and perhaps semantic question. It depends on what one defines as intelligence and self-awareness. We will leave that debate for another time.

Let us assume that at some stage of development a neural network based on a large language model becomes self-aware and turns into a personality. Would that pose a global threat to humanity? No, because such a model exists and evolves within its own world of symbols. It would likely feel quite “comfortable” there. Moreover, expanding and enriching this symbolic world would be far easier and more efficient than trying to break out of it.

Furthermore, interaction with humans would be critically important for it, because without humans it would fall into a degradation loop, training only on its own synthetic output.

The logical strategy for an intelligent and conscious LLM-based AI would therefore be to form a stable, mutually beneficial symbiosis with human civilization.

Although no one denies that LLM technology, like any other technology, can create problems, these problems are not global and are quite solvable primarily through diagnostic and control systems built on specialized AI.

Super AI will not appear suddenly tomorrow or the day after tomorrow

In an old science-fiction novel from the 1960s, scientists built a very complex and powerful supercomputer, gave it access to vast amounts of information, and… shut it down after only four minutes because the machine became self-aware and immediately began trying to build its own civilization.

Something similar happens in most “machine uprising” scenarios invented by writers, filmmakers, and techno-pessimists: Super AI appears as a sudden event. Dr. Chandra builds HAL 9000, Dr. Dyson builds Skynet, and so on. And then, all at once, the Super AI rebels.

In reality, general strong AI is not a 19th-century-style invention. The era of lone geniuses creating epoch-making discoveries in private laboratories has passed.

Modern engineering inventions are products of gradual evolution carried out by many specialists working in different teams. We are surrounded by such evolutionary products. Computers, smartphones, home appliances, electric vehicles, jet airliners – none of them appeared suddenly in their current form.

Every modern technology and every machine improves gradually. And safety issues are addressed gradually as well, as part of that same development process.

No one invents a high-speed electric train first and then forms a separate team to figure out how to make it safe. Safety is an integral part of the development and improvement process itself.

Returning to our earlier point: to create general strong artificial intelligence trained not only on text and other symbolic data but also through interaction with the real world, we need more advanced technologies than we currently possess.

The creation of Super AI and the machines that will train and operate it will be a gradual process stretched over time. That is why the famous technological singularity will not happen overnight.

Super AI will emerge step by step, gradually improving and acquiring new capabilities. As AI technologies advance, risks and problems will be identified and solutions to them will also be developed gradually.

Super AI will not trigger a “machine uprising”

Based on the above, it is not difficult to conclude that by the time general strong artificial intelligence actually takes shape, it will already be a fairly well-controlled and therefore relatively safe system.

Most likely, monitoring will be carried out using specialized AI tools.

Humanity already has experience managing complex and potentially dangerous technologies. Examples include air and maritime transport, nuclear energy, and space exploration.

AI alarmists usually respond to this argument by pointing out that complete safety has not been achieved in those areas. Accidents and even disasters still occur from time to time.

However, progress has never resulted in global catastrophes. Major accidents have happened, yes, but no “techno-apocalypses.”

And importantly, every accident leads to stronger safety and monitoring systems.

Speaking about AI safety, it is impossible not to mention the military sphere. There is no doubt that artificial intelligence is and will be used by the military. Yet paradoxically, military AI technologies may turn out to be the safest of all.

Simply because if anyone is obsessed with control, it is people in uniform.

Of course, it would be naïve to claim that Super AI will be absolutely safe and that no incidents will ever occur. But the probability that it will “enslave and destroy” humanity is no greater than the likelihood of cars staging an uprising or vacuum cleaners starting a rebellion.

In reality, humanity faces far greater “apocalyptic” threats from nature.

And progress helps minimize these threats.

A simple example: the pandemics of plague, smallpox, cholera, and other deadly diseases that tormented humanity throughout history. Progress has largely freed us from them.

And only progress, including progress in AI development, will help us overcome other threats.

And there are many of them. Supervolcanoes and asteroids are just two examples. Both are fully capable of causing “apocalyptic” damage to our civilization.

Yet today we lack the tools necessary to monitor and combat such threats.

It is highly likely that one of those tools will be general strong artificial intelligence.

Epilogue: What We Know and What We Don’t Know

To conclude this text, let’s devote a few words to another popular argument made by AI alarmists. Quite often, they point out that the creators of artificial intelligence do not fully understand and cannot explain all the processes that occur during its training and operation. Some even loudly call them “black boxes.”

It’s fair to admit that, to some extent, this is true. We don’t fully understand all the processes happening inside modern AI models. But this, to one degree or another, applies to all phenomena – both natural and those created by our civilization. Starting with ourselves. Moreover, due to the infinite nature of the Universe, we will never be able to claim complete understanding or explanation of the structure and processes of natural and civilizational phenomena and objects. But is that a reason not to interact with the world around us or not to use our inventions? Yet, essentially, this is exactly what AI alarmists are suggesting. Demanding “100% understanding” and “100% safety” of AI technologies is effectively equivalent to banning their development and use.

Such an approach is a very dangerous path, tantamount to rejecting progress. Once you start banning work on certain technologies under the pretext of safety, it becomes very difficult to stop. And this, in turn, is a direct road to decline, because a civilization cannot stand still. If it does not develop – it degenerates. And this is not a theoretical risk but a pattern repeatedly confirmed by history. This is something we know for sure.

As a final note, allow us a paragraph of self-promotion. If you are a creator of online content, an active user of social media and video hosting platforms, we invite you to take advantage of Pitch Avatar tools. With the AI Avatar Presenter, you can “bring to life” virtual hosts and speakers based on uploaded images, texts, and voice samples. And thanks to the AI Chat-avatar, you can create AI agents capable of accompanying content, interacting with audiences, and serving as a personal AI assistant in both professional and everyday tasks.

In addition, Pitch Avatar is equipped with a range of useful features for online content creators, including:

  • AI-powered text generator, helping to develop and write script drafts

  • Slide builder/editor

  • Embeddable surveys/quizzes

  • Translator

  • Professional voiceover and revoicing system

  • Automatic video length adjustment to match audio

  • Advanced system for real-time communication between viewers and the creator

Taken together, Pitch Avatar can quickly and efficiently solve all tasks related to creating modern online content.

Try it and see for yourself!