Analysis
Artificial intelligence discourse dominated by utopian and dystopian prophecies
Both sides of the AI argument, the fascinated and the frightened alike, actually belong to the same passionate camp
Last week, the U.S. Department of Commerce asked the public to weigh in on ways the government can ensure AI algorithms are reliable, legal and ethical. The request came following the enormous interest these systems have attracted in recent months, especially since the launch of OpenAI's products such as ChatGPT. It is safe to assume that this move, and any resulting restrictions or regulation, will take a long time to materialize, if they happen at all. Until then, the intense debate will continue to be conducted among experts on social media and in the newspapers.
Currently, one position dominates the discourse, although it masquerades as two rival camps, and in either guise it amounts to an act of public deception. It rests on the implicit assessment that these tools will reshape life, literally turn it upside down. This revolution is described sometimes in utopian terms and sometimes in dystopian ones, but in either case as an inevitable event.
Those who focus on the utopian implications praise the quality of the products: how "talented" and "creative" they are, having already passed bar exams for lawyers and certification exams for doctors. Look, they wrote this text and replaced journalists (not really). Accordingly, the greatest danger is to white-collar jobs, including those of programmers, designers, illustrators and composers. This change, of course, is supposedly not terrible; it will increase productivity, expand leisure hours and perhaps even do a better job than humans would.
Those who deal with the dystopian consequences, the prophets of doom, claim that all this is just the beginning, and that these products are an early and inevitable glimpse of far more powerful successors. Soon we will have "digital minds," artificial general intelligence (AGI) that will equal or surpass human intelligence. And then the battle will be over; an extinction event is expected to come upon us. This idea, which received a strong public response, was formulated in an open letter signed by thousands of people, including the world's richest man Elon Musk, the historian Yuval Noah Harari, engineers from Microsoft, Facebook and Amazon, one Nobel laureate and others. The letter, which called for a six-month halt to all development in the field, seemingly opened the conversation about regulation. "Should we risk loss of control of our civilization?" they asked helplessly.
The letter was just one in a sea of publications, all warning of the existential dangers that artificial intelligence developments pose to the future of humanity itself. These hypothetical risks are rooted in an ideology called "longtermism," which ignores the actual damage already resulting from the deployment of artificial intelligence systems. In the "New York Times," Yuval Noah Harari, Tristan Harris and Aza Raskin claimed that "if we don't control artificial intelligence, it will control us," and warned of the ways it could harm human culture. In "Time" magazine, Eliezer Yudkowsky called for destroying data centers with airstrikes, and Sam Altman, CEO of OpenAI, who could choose to build products in a responsible and transparent manner, said in an interview that he thinks "we are not far from potentially scary tools." Musk, who also funded the institute that published the open letter, has since announced the establishment of a company that aims to compete with OpenAI.
It may seem as if there are two competing sides here, one fascinated and the other scared. However, both belong to the same camp in their enthusiasm. Both attribute unavoidable dangers to the products themselves, rather than to the organizations that build them. Both ignore the problems these systems are already creating today: violations of copyright and privacy, enormous environmental damage, and an unbearable human cost borne by those who sort and filter the data used to train the models. The regulation they call for is therefore derived from the same assumption of immense power and inevitability.
1. Disconnected from reality
This may or may not come as a surprise, but enthusiasm mixed with chilling dystopian dread is a cyclical occurrence. Back in 1961, for example, the computer scientist Marvin Minsky, one of the fathers of the field of artificial intelligence, wrote that "within our lifetime machines may surpass us in general intelligence." In 2004, the American judge and author Richard Posner proposed establishing a government entity responsible for identifying technologies that could lead to global catastrophe. In 2008, Peter Cochrane, former head of BT's research laboratories, said: "I estimate that the time frame for the appearance of significant machine intelligence is 2020. By 2030 it is the end of the game." These predictions were always accompanied by the gnawing fear of job automation. "The robots are coming to take your job," the headlines would explain. In 2013, a study from Oxford University, which estimated that 47% of all jobs would disappear within a decade or two due to artificial intelligence, agitated the economic press for more than a year. A few years later, the focus shifted to IBM's Watson, which was supposed to cure cancer.
All of these predictions, except those that placed the end of the world in an imagined future hundreds of years away, were wrong. Tech experts have proven to be rather bad at predicting capabilities. Where does this great disconnection from reality come from? The scientist and writer Arthur C. Clarke captured the tendency well: "Any sufficiently advanced technology is indistinguishable from magic." The science writer Michael Shermer offered a competing explanation: "I find a lot of AI scientists want to have their Oppenheimer moment of 'we have sinned', as if ChatGPT = nuclear weapons in threat value, thereby elevating what they do to epic levels of attention and concern. I am worried about nukes but I just don't see the equivalency case," he wrote on Twitter.
To date, and contrary to Minsky's predictions, scientists have not been able to understand how the human brain functions, and we still do not know the nature of the neurological mechanisms behind the most basic components of humanity, such as creativity, logic and humor. "These programs do not have the common-sense intuition of a four-year-old child," wrote the critic Hubert Dreyfus in the 1980s, a statement that remains true even today. Meanwhile, everything is repeating itself. Last March, OpenAI published a working paper claiming that tools like the one it develops threaten 49% of jobs, while the discourse was flooded with doomsday predictions by those envious of Oppenheimer.
2. Start managing
Drowned out in this one-dimensional discussion was a vigorous group of female computer scientists, linguists and ethicists. "It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a 'flourishing' or 'potentially catastrophic' future," wrote the DAIR Institute, which deals with ethics and artificial intelligence, in response to the prevailing discourse. "Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media. This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders."
They are right. The real question today is not how to respond to an inevitable event, but one of political will: treating technology as something that must be managed by us, rather than something that manages us.
First published: 14:01, 18.04.23