The team at Pitch Avatar offers their perspective on the widely debated question posed in the title.
The rapid development of artificial intelligence, and of the solutions built upon it, has quickly sparked a movement advocating for limits on its use. Efforts to establish rules, and even laws, defining the boundaries of AI’s application are underway in fields such as academia, education, advertising, journalism, and politics. This isn’t just about disclosing that content is generated by AI; it also extends to prohibiting the use of AI tools and solutions in certain contexts. Let’s explore where these ideas come from and assess their potential impact.
Devil's machines
Let’s take a step back in time. One of the authors remembers a time when students were only allowed to use ink pens at school, in an effort to improve their handwriting, and calculators were strictly prohibited so that students would learn the “valuable” skill of doing calculations by hand. This was, notably, long after the invention and widespread adoption of personal computers. Even back then, it was clear that while these traditional skills weren’t entirely useless, they were secondary at best. In practice, it would have been far more valuable to teach students how to touch-type, how to navigate a PC competently, how to program, and how to perform calculations using specialized software – in short, all the things today’s students are comfortably learning. Interestingly enough, the author and their classmates, despite the bans, would secretly use calculators, hiding them under their desks.
Guess where we’re going with this? Progressive solutions always find their way into everyday life across all areas of human activity, including research and education. And they do so amid loud protests and grave warnings from conservatives. These range from educators fearing that each new innovation – be it computers yesterday, smartphones and the Internet today, or AI tomorrow – will make the next generation dumber than the last, to techno-alarmists who constantly warn that humanity is becoming too dependent on its inventions.
A similar pattern emerged during the early history of printing. In many cities and countries, scribes and calligraphers resisted the spread of the printing press. Some lobbied rulers to protect their interests, others organized attacks on printers and their workers, and still others, through church leaders, even had the printing press labeled a diabolical invention.
By the way, this last argument has been leveled at every technological innovation: the claim that any invention easing human labor inevitably leads to laziness. Doesn’t that sound familiar? It’s very much like the reasoning of those who argue that inventions simplifying intellectual work “dull” people’s minds, making them mentally lazy.
Phobias and envy
Unfortunately, as we can see, in many cases the motivations behind calls to limit AI usage are not driven by a desire to “make the world a better place.” Instead, they are often rooted in various kinds of fear.
First, there’s the familiar technophobia. This leads to the unrealistic demand that inventors, designers, and developers ensure new technologies are “absolutely safe.” However, the nature of our world means that a 100% safety guarantee is fundamentally impossible. Beneath many seemingly reasonable arguments, like “let’s delay implementation to focus on safety,” lies the usual irrational fear of anything new. Let’s be honest: if we followed this mindset, Fulton’s steamboat would never have set sail, Stephenson’s steam locomotive would never have operated, and the Wright brothers’ airplane would never have taken off.
So how do you spot a technophobe in a debate about the adoption and use of AI? It’s quite simple.
Technophobes outright reject the idea of a “reasonable risk.” They typically refuse to evaluate technology in a balanced manner, focusing only on potential negative consequences, even those that are completely fantastical.
Next, there’s Luddism, the fear of losing one’s job. Or, more broadly, the fear of the changes that new technologies may bring, which might force people to change themselves and their way of life. To make a crude analogy, it’s like the fear horse-drawn cab drivers once felt when automobiles were introduced. Emotional arguments like “robots are stealing jobs from people,” reminiscent of Isaac Asimov’s fictional worlds, are common among modern Luddites. No one denies that progress can and usually does lead to painful transitions in the job market. However, this is hardly a reason to ban or restrict technology. Instead, the solution lies in creating social and public mechanisms that help people adapt to new circumstances. The key difference between those who genuinely seek to solve the problem and the neo-Luddites lies in the approach: the former focus on helping people transition rather than resisting technological change.
In this context, we should be thinking about how to introduce AI into education and train people to use these tools, rather than seeking to limit its use in this area.
Let’s conclude this section by addressing an unpleasant phenomenon – envy. One of the authors once encountered an old, experienced editor who, observing younger colleagues working on their computers, liked to grumble, “What do they pay you money for?” In essence, among those opposing the widespread adoption of AI are individuals who feel the current generation has it too easy. Their mindset can be summed up as, “If it was hard for me, it should be hard for them too.” Of course, they would never admit to this; instead, they find more acceptable justifications for their stance, such as claiming that modern technologies “dull” people’s minds or prevent them from “developing beautiful handwriting.”
Clarke's guillotine instead of bans
At first glance, it might seem that the authors are advocates for the unchecked introduction and widespread use of AI tools in every corner of life. But that’s not the case. What we’re really highlighting is that formal bans and restrictions will achieve little. People will find ways to use artificial intelligence – whether it’s allowed or not. After all, we didn’t just randomly mention calculators being hidden under desks.
Furthermore, the rapid pace of AI development indicates that in the near future, it will be nearly impossible to distinguish results produced by artificial intelligence from those created by humans. So, how do the “restrictors and prohibitors” plan to monitor and enforce compliance with these restrictions and bans? Objectively, we must acknowledge that AI, as a widely accessible technology, will be applied in all areas – from preschool education to the management of complex systems like international corporations, governments, and supranational organizations. Rather than focusing on bans and restrictions, we should concentrate on two clear, solvable tasks.
The first task is the creation of “competent” AI agents – those capable of handling the tasks they are assigned without falling into machine delusions or hallucinations. It’s likely that the development of such solutions will become feasible once we reach the era of Strong AI.
The second task is the establishment of a system for controlling AI agents, where the ultimate authority remains with humans. The science fiction writer and scientist Arthur C. Clarke, who explored the relationship between AI and humans extensively in his works, once suggested that people should always have the ability to “unplug” AI. In one of his novels, he even envisioned a device he called a “guillotine,” designed to cut off power to an AI supercomputer at the command of a human. It seems that now is the time for humanity to consider how to implement such a “guillotine.”
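To make the “guillotine” idea a bit more concrete, here is a minimal sketch of one software form such an override could take: a dead-man’s switch, where the agent keeps running only while a human operator periodically renews permission. Everything here – the `Guillotine` class, the timeout, the agent loop – is a hypothetical illustration, not a real AI framework API, and a software-only switch is weaker than the physical power cut-off Clarke imagined.

```python
import threading
import time

class Guillotine:
    """Dead-man's switch: the agent may run only while a human keeps renewing permission."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self._last_renewal = time.monotonic()
        self._lock = threading.Lock()

    def renew(self) -> None:
        """Called by the human operator to keep the agent alive."""
        with self._lock:
            self._last_renewal = time.monotonic()

    def is_alive(self) -> bool:
        """True while the last human renewal is within the timeout window."""
        with self._lock:
            return (time.monotonic() - self._last_renewal) < self.timeout


def run_agent(guillotine: Guillotine) -> None:
    # Stand-in for an AI agent's work loop; each step checks the switch first.
    while guillotine.is_alive():
        print("agent: doing one unit of work")
        time.sleep(1)
    print("agent: permission expired, shutting down")


if __name__ == "__main__":
    switch = Guillotine(timeout_seconds=3)
    worker = threading.Thread(target=run_agent, args=(switch,))
    worker.start()
    switch.renew()   # the human renews once...
    time.sleep(2)    # ...then stops renewing, and the agent halts on its own
    worker.join()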
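Note that in a real deployment the cut-off would need to live outside the agent’s own process – ideally in hardware, as Clarke envisioned – since any mechanism the agent itself controls could, in principle, be circumvented.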
But what if a Strong AI emerges and is recognized as an individual equal to humans – would such a “guillotine” still be ethical? We believe that, even then, few would dispute that humanity must retain the right to the final say in all matters and decisions concerning humans. That being said, it’s crucial that we begin discussing the “degree of personal freedom” of a Strong AI today. This conversation is far more important than debating where and how AI-based tools may or may not be used.