TL;DR: The debate over AI and jobs is usually framed as alarmists vs. optimists, but both sides miss the middle ground where reality actually lies. Complete human replacement faces engineering limitations and unresolved liability issues. Rigid state protection creates its own distortions. The most plausible future is not “AI instead of people” or “people protected from AI,” but a hybrid economy where AI takes care of the material basis, and people focus on science, governance, and the social economy of human presence. We’ve adapted to technological revolutions before, and we have the tools to adapt to this one as well.
Will humanity manage to avoid “living in cyberpunk” and instead build an AI-driven economy that remains comfortable for people?
In almost any conversation today about the future of work, one question inevitably arises: “How many jobs will remain for humans – and will any remain at all?” Robots have stepped beyond factory floors and now appear in warehouses and on city streets, in offices and hospitals, taking up ever more space in services and logistics. Artificial intelligence writes texts and draws pictures, creates scripts and films videos, calculates budgets and formulates business strategies, analyzes data and manages production, sorts resumes and makes hiring recommendations.
It is not surprising that calls are increasingly being heard (at both government and legislative levels) to protect people from the “insane” AI automation and robotization. Alarmists argue that otherwise, a significant portion of the population will either lose their jobs or face a radical deterioration in working conditions and a sharp drop in income.
In their view, it is futile to hope that the market will regulate the situation in workers’ interests, since company executives and owners tend to prefer a policy of “AI automation and robotization instead of humans” over one of “AI and robots assisting humans”.
Unfortunately, experience shows that turning to the state to regulate and restrict everything is not the best idea. Before you know it, implementing any AI tool would require official approval, and each specific AI product would need to be licensed and certified, forcing companies to navigate a complex bureaucratic maze. As a result, the entire “legal” AI industry would end up in the hands of a few giants capable of building relationships with governmental and international institutions. Around these “islands of legality”, an ocean of AI technologies in the “gray zone” would churn, spilling out of countless “garages” and “basements”. This would not reduce the risks. On the contrary, it would create many additional problems.
Yet it would be simply foolish to ignore the alarmists’ position. Let us note, by the way, that the words “alarmist” and “skeptic” carry no negative connotation. Science and progress are impossible without people who carefully and thoughtfully examine ideas, hypotheses, and inventions from the standpoint of skepticism and potential negative consequences. Among them are many respected specialists and organizations who point to real problems created by the implementation of AI technologies. Let’s consider their arguments.
The Specter of “Cyberpunk”
Alarmists proceed from a single but very important premise: the current technological revolution is qualitatively different from previous ones. Above all, it differs in two key characteristics – speed and scalability. Whereas the Industrial Revolution gradually eased and replaced physical labor, the AI wave is invading the cognitive sphere almost instantly. AI-based systems and tools are easily replicable, and their implementation does not require the reconstruction of complex physical infrastructure. An AI model created in one place can begin competing with millions of workers at once, spreading across the globe within weeks or months.
Robotization, tied as it is to the physical world, is not advancing as quickly. But because it is directly linked to progress in AI technologies, it too has gained momentum incomparable to the pace of the late 20th century: according to the International Federation of Robotics, annual factory robot installations have roughly doubled over the past decade.
Of course, a rational employer will prefer an AI solution or a robot to a human if it is cheaper, more controllable, and generally more efficient. And the market, naturally, will not care about the fate of specific workers.
The conclusion reached by techno-alarmists is clear: if human labor is displaced faster than the economy can create new roles and forms of employment, a structural rupture arises, leading to unemployment and social instability.
Within this logic, market self-regulation mechanisms do not guarantee a result that is satisfactory for people, since companies seek to reduce costs rather than preserve jobs. Moreover, this model is characterized by increasing profit concentration and decreasing bargaining power of workers.
The result is classic cyberpunk: an automated economy with a useless, “declassed” majority. And by cyberpunk here we do not mean neon aesthetics and techno-noir, but a world in which humans are, at best, mere appendages to machines.
Alarmists and skeptics are ready to support all of the above with studies and formulas easily found on the Internet. However, this point of view is based on two assumptions:
- that the replacement of humans by technology in the economy will be nearly complete – close to 90%;
- that workers displaced in this way will not find new areas of employment.
Both assumptions are open to dispute, and opponents have their counterarguments.
The Engineering Barrier and Responsibility
The floor for a response goes… not yet to the optimists (or, if you prefer, the positivists), but first to the rationalists. Their counterarguments are simple, logical, and do not require complex evidence.
Let’s start with the so-called “engineering barrier to complete replacement”. In practice, the vast majority of AI tools require human involvement – someone to set tasks, monitor their execution, and verify results. Moreover, this usually means specialists who understand the domain being delegated to artificial intelligence.
Objectively speaking, we are not seeing mass production of AI-based solutions capable of completely and autonomously performing the work of human professionals – a Penn Wharton Budget Model analysis estimates that only about 1% of jobs are fully automatable by AI without significant human oversight. As a rule, modern AI products support a professional’s work, expand their capabilities, take on a significant portion of routine tasks, and allow them to work faster – but they cannot yet replace them. And how long this “yet” will last, no one can say. Years? Decades? It is quite possible that such AI systems will never be created.
Under such uncertainty, a logical strategy for modern business seems to be the concept of “Expand, Don’t Cut”. If a human, together with an AI solution or robot, can accomplish more than either could alone, why should humans be ignored? Wouldn’t it be better to focus on expanding the company and increasing production volumes?
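To make the arithmetic behind “Expand, Don’t Cut” concrete, here is a deliberately simplified sketch. Every number in it (headcount, output per worker, the cost of an AI “seat”, the productivity multiplier) is a hypothetical placeholder chosen only to illustrate the logic, not an estimate for any real firm:

```python
# Toy comparison of two strategies for a firm adopting AI.
# All figures below are hypothetical placeholders, not data.

WORKERS = 10
OUTPUT_PER_WORKER = 100   # annual output of one worker without AI (hypothetical units)
WAGE = 60                 # annual cost of one worker (hypothetical units)
AI_SEAT_COST = 10         # annual cost of one AI license/"seat" (hypothetical units)
AI_MULTIPLIER = 1.8       # output of human + AI relative to human alone (hypothetical)

def cut_strategy():
    """Halve the headcount, give the remaining workers AI, keep output roughly flat."""
    workers = WORKERS // 2
    output = workers * OUTPUT_PER_WORKER * AI_MULTIPLIER
    cost = workers * (WAGE + AI_SEAT_COST)
    return output, cost

def expand_strategy():
    """Keep everyone, give each worker AI, and grow output instead."""
    output = WORKERS * OUTPUT_PER_WORKER * AI_MULTIPLIER
    cost = WORKERS * (WAGE + AI_SEAT_COST)
    return output, cost

for name, strategy in (("cut", cut_strategy), ("expand", expand_strategy)):
    output, cost = strategy()
    print(f"{name:>6}: output = {output:7.1f}, cost = {cost:5.1f}, "
          f"output per unit cost = {output / cost:.2f}")
```

In this toy model the efficiency per unit of cost is identical in both cases; what differs is total output. The choice therefore hinges on demand: if the firm can sell the additional output, expansion yields more absolute profit without letting anyone go, which is exactly the bet that “Expand, Don’t Cut” makes.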
But even if specialized AI solutions capable of completely replacing humans emerge in certain areas, their use without professional human involvement will immediately face liability issues. It is well known that creating a machine or software that operates without failures or errors is, within our current engineering understanding, an insoluble problem. In the foreseeable future, it will not be possible to eliminate a significant number of errors from AI systems.
Now imagine that an AI accountant makes a mistake and a company is accused of tax evasion (or concealment). Who exactly will be held responsible? Even if not before the law, then before shareholders and the board of directors? And this, mind you, is still a relatively harmless example. More egregious examples can be cited: an error by an AI diagnostician leading to tragic consequences. Or by an AI air traffic controller. Or by an AI system managing hazardous production. It is clear who is responsible when a person makes such a mistake. But who will be held accountable to the victims, their families, and the state if a tragedy occurs “due to the fault” of artificial intelligence? The real answer to such challenges can only be one: AI systems must operate under the overall guidance and control of human professionals, as advanced tools and “extensions of their minds”.
This is no longer a hypothetical. In 2024, a Canadian tribunal ordered Air Canada to pay damages after its customer service chatbot told a grieving passenger that he could buy full-price tickets and claim a bereavement discount retroactively – advice that contradicted the airline’s actual policy. The airline’s defense was striking: it claimed that the chatbot was a “separate legal entity” responsible for its own actions. The tribunal rejected that argument outright, ruling that a company remains liable for what its AI tools tell customers, no matter how interactive those tools appear to be. A small sum, a big precedent – and concrete proof of why liability cannot be shifted onto software.
Incidentally, investigations into various incidents related to AI system errors and failures (determining whether each case is the result of a malfunction, negligence, or malicious interference) will also have to be carried out by humans.
As you can see, even these arguments alone are enough to understand that, within the current process of AI transformation, movement toward a “Human + AI” system, rather than “AI instead of humans”, appears logical.
The Transition Period Nobody Wants to Talk About
Both sides in this debate tend to discuss the end goal (cyberpunk dystopia on one side, hybrid civilization on the other) and sidestep the path itself. That is a mistake. Even if the long-term equilibrium turns out to be favorable, it is the path from the current state to the desired result that is the main problem, and an honest discussion of AI and jobs must acknowledge that.
Consider what happens when a job category shrinks faster than the workers in it can be retrained. A 45-year-old paralegal whose work is partially automated does not instantly become an AI auditor, a hospice assistant, or an employee of a modular research center. Skills do not transfer overnight – and neither do credentials, confidence, or professional networks. The World Economic Forum estimates that 39% of existing skills will become outdated between 2025 and 2030. Retraining programs exist, but they are uneven in quality, slow to scale, and often disconnected from the professions actually in demand. An employee who loses a job and finds a comparable position within a year counts as a success story. Many people need more time. Some never fully recover their previous level of income.
This is an honest counterbalance to the optimistic view. The long-run argument for Human + AI collaboration may well be true – we believe it is – and the transition may still be genuinely difficult for a lot of people. Those two things do not contradict each other. Treating them as if they did is what produces misguided policies: either denial of the problem or panic-stricken restrictions that freeze technological development without bringing real help to those affected.
What would a serious transition strategy look like? It would include portable retraining benefits that follow the worker rather than a particular job. It would include industry partnerships in which companies deploying AI contribute directly to reskilling funds in the areas they are transforming. It would include reliable labor market data showing which roles are actually growing, so retraining is aimed at real opportunities rather than wishful thinking. And it would include social infrastructure for the gap between jobs – not as charity, but as recognition that asking workers to adapt to change on this scale without support is both unfair and economically unsustainable.
None of this contradicts the broader argument that AI will ultimately create more jobs than it destroys. It’s simply a serious question of who bears the costs in the years when the “ultimate goal” has not yet been achieved.
Humans for Humans
And yet it would be unfair to ignore a scenario in which AI systems become truly powerful and are indeed capable of effectively replacing humans in most existing professions. What then?
You’ve probably already guessed that the time has come for the arguments of positivists and optimists who view AI-assisted automation and robotics as mechanisms that free humanity from working “for survival” and encourage it to work “for development”. Don’t worry, we’re not going to indulge in any fantasy like “Human and robot, hand in hand, on a shining starship, conquering the Universe”. Let’s stay on Earth and talk about developing the human qualities that make us human.
Let’s start with the fact that throughout our history, labor has changed its form. Technological revolutions (say, the Bronze Age or the first Industrial Revolution that ushered in the “Age of Steam”) certainly destroyed a number of professions. But they simultaneously gave rise to new ones, often of a kind people could hardly have imagined. Who, for example, could have foreseen the profession of flight attendant when the first internal combustion engines were being developed? Could Charles Babbage, when he conceived the first computer, and Ada Lovelace, when she wrote its first programs, have imagined that highly paid professionals would one day create video games? Is it not logical to assume that the AI revolution will also give rise to new professions, the nature and content of many of which we simply cannot imagine today?
Still, we promised not to fantasize. So let’s return to what can essentially be imagined based on existing phenomena.
Let’s start with science. It is no secret that in this area, there is a shortage of people at almost all levels. There are more ideas and concepts than there are “brains” capable of developing them. Scientific research has long needed to move from “elitism”, with its lengthy and complex preparation of a relatively small number of specialists, to a kind of industrialization. AI systems capable of taking on computations, hypothesis testing, and basic education or retraining of entry-level and mid-level specialists undoubtedly open the way to scalability and a new approach to scientific work. Its essence lies in the idea that scientific work should cease to be the prerogative of the “chosen few” and become one of the fundamental forms of human activity, thereby maximizing the potential of human intellect. Accordingly, the production of new knowledge and skills will increase many times over.
For the economy, in particular, this would mean the emergence of a large group of “scientific workers” – people with analytical skills working on modular research projects supported by AI solutions and robotic systems.
Let’s continue with socially significant projects. This is another area where there has traditionally been more work than working hands. Urban improvement, park development and maintenance, systematic efforts to restore and sustain ecosystems, returning abandoned industrial and residential sites, degraded man-made landscapes, and zones of industrial accidents to nature or to economic use – this is far from an exhaustive list. Employment in this sector can (and indeed should) become one of the primary “absorbers” of the displaced workforce – naturally, supported by the same achievements of the AI revolution.
But the main area of human employment in the future will most likely be the social economy – or, in other words, the economy of human presence. The idea is that the value of human contact will not disappear. On the contrary, the more digital the world becomes, the more that value grows. Take sports or board game partners, for example – they are not just service providers, but carriers of indispensable “human-to-human” interaction. Of course, one can compete in chess against AI or hit balls launched by a “smart” machine. But genuine pleasure comes from experiencing successes and mistakes, victories and defeats, alongside another person.
Even more important is the economy of human presence in the social sphere, where today the shortage of labor is compensated, though clearly insufficiently, by volunteers. Staff at public events, children’s activity organizers in parks and playgrounds, caregivers for hospital patients, the elderly, and hospice residents, caregivers in orphanages and schools, confessors, support group leaders, and, of course, teachers, coaches, and mentors in various artistic fields – the list goes on and on. The point, we believe, is clear.
A very reasonable question: who will pay for all this? And here we return to the role of the state. But not as a “job protector” that legislates against “replacing humans with machines” and adorns every AI model with labels and licenses. Rather, as a regulator that redistributes funds obtained from taxing AI-automated and robotized businesses into the areas listed above, creating and financing new jobs. This, mind you, would likely do far more good than simply introducing a “universal basic income”, which has been hotly debated in recent years.
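As a purely illustrative back-of-the-envelope sketch of why the essay prefers funded jobs over a flat transfer, consider how the same pool of automation-tax revenue looks under each approach. All the figures here (revenue, population, cost per funded position) are hypothetical placeholders, not forecasts:

```python
# Toy illustration: one pool of automation-tax revenue, two ways to spend it.
# Every number is a hypothetical placeholder, not a forecast.

TAX_REVENUE = 1_000_000_000    # annual revenue from taxing automated businesses (hypothetical)
ADULT_POPULATION = 10_000_000  # people covered by a flat per-capita transfer (hypothetical)
COST_PER_FUNDED_JOB = 50_000   # salary plus overhead of one funded position (hypothetical)

flat_transfer = TAX_REVENUE / ADULT_POPULATION
funded_jobs = TAX_REVENUE // COST_PER_FUNDED_JOB

print(f"Flat transfer: {flat_transfer:,.0f} per person per year")
print(f"Funded jobs:   {funded_jobs:,} full positions in science, restoration, and care work")
```

The same money buys either a thin universal payment or a much smaller number of fully paid positions in the sectors described above; the essay’s argument is that the second shape of spending creates meaningful work rather than merely transferring income.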
In this scenario, automation provides the material foundation, while people become increasingly involved in the production of knowledge, meaning, comfortable and sustainable living environments, and human relationships. The result would not be cyberpunk, but a new stage in the development of civilization, based on a complex, hybrid system of interaction between humans and “smart” machines.
Will we be able to create such a positive picture of the world? Why not? After all, we have been quite successful in dealing with the challenges posed by previous technological revolutions.
Frequently Asked Questions (FAQ)
Will AI completely replace humans in most jobs?
The realistic answer is no, at least not in the way alarmists describe. Two severe limitations prevent complete replacement: the engineering barrier (most AI tools still require human guidance, monitoring, and verification) and the accountability problem (when an AI makes a serious mistake, someone must be held accountable, and neither our legal nor our corporate systems allow that someone to be the AI itself). Roles will change and some will disappear, but the “Human + AI” model is much more likely than “AI instead of humans”.
Wouldn’t strict state regulation protect workers?
Rigid regulation tends to concentrate power rather than distribute it. Licensing every AI product would hand the legal AI industry to a few large players with the resources to handle the bureaucracy, while pushing everyone else into a gray zone. Risks don’t disappear – they migrate and multiply. Smart regulation is about accountability and redistribution, not about restricting which AI tools people can use.
What does “Expand, Don’t Cut” mean?
It’s a business strategy that views AI as a way to do more, not as a way to cut staff. If a person working with AI can do significantly more than either could do alone, the rational move is not to fire the person, but to increase production volume, expand the business, and use the combined productivity to conquer new markets. This reframes AI as a lever, not a replacement.
Who is responsible when an AI system makes a mistake?
Responsibility currently cannot lie with the AI itself. Whether the error concerns financial calculations, medical diagnosis, air traffic control, or industrial safety, the chain of responsibility must pass through the professionals who oversee the system. This is why effective AI implementation in high-stakes areas requires human oversight as a structural feature, not as a favor.
Where will people work if AI takes over most existing professions?
Three areas appear to be the most stable. First, science – where the shortage of researchers has always exceeded the shortage of ideas, and AI can expand the circle of participants. Second, socially significant projects such as ecological restoration and public space management, which have always required more labor than could be hired. Third, and most importantly, the social economy of human presence (teachers, coaches, caregivers, companions, organizers, mentors), where the value of genuine human contact increases precisely because the rest of the world is becoming increasingly digital.
How would the state pay for all these new jobs?
By redistributing tax revenue from businesses automated by AI and robotics into socially valuable work. The argument is that this would produce better results than a plain universal basic income, because it would fund the creation of meaningful jobs rather than simply transferring money. This positions the state as a regulator of flows rather than a gatekeeper of the technological sphere.