Opinion
Open letters won’t stop AI arms race - governments must intervene
Despite the headlines generated by the letter in which technology-industry leaders warned humanity against the overly rapid advancement of artificial intelligence models, there is little chance that this innovation will stop or slow. Technology researchers Inbar Dolinko and Liran Antebi call for state bodies to enter the picture quickly to avert unwanted consequences
What is the chance that the Future of Life Institute's open letter will succeed in pausing the development of powerful artificial intelligence models for six months? Without the immediate intervention of governments, the odds are slim, and the move is likely to prove ineffective, particularly with regard to disinformation and the broad damage the technology could inflict on the labor market. Government intervention is required to ensure that citizens worldwide can enjoy the technology's benefits while reducing their exposure to its risks.
This letter is undeniably strong in its language and message. It asserts that AI systems pose extensive risks and that their further development should be contingent upon passing a test that ensures their effects are positive and the associated risks are manageable. The letter also expresses deep concerns regarding the impact of AI systems on the spread of false information, potential damage to the labor market, and the development of a superhuman, limitlessly powerful general AI. It raises the critical question of whether we should take the risk of losing control over our civilization. However, the letter's influence lies primarily in its ability to bring these important issues to the public discourse.
The letter reads as though industry leaders are shouting: "We're heading toward world destruction, please stop us!" In some cases, however, this call is met with skepticism, and understandably so. Over the past few years, the discussion about regulating AI has gained momentum. Tech companies, for instance, have started formulating guidelines and forming ethics teams, but their effectiveness remains uncertain, especially considering reports of those teams being laid off. Moreover, these steps are voluntary and often motivated by public relations considerations. Meanwhile, decision-makers at the state and international levels struggle to keep up with the rapid pace of technological advancement. It is clear that the world is still a long way from effective regulation and legislation in this field. Criticism of the letter's call for safety standards and for collaboration between industry and decision-makers to create a regulatory framework is therefore understandable. The crucial question is whether the letter will motivate influential figures to take action.
Although the open letter has garnered signatures from numerous experts and opinion leaders, including high-profile names such as Elon Musk, Yoshua Bengio, and Stuart Russell, there is no guarantee that the technology industry will slow development. Studies point to the complexity of the technology community's attitudes toward regulation. A new study conducted at Tel Aviv University (as part of the M.A. thesis of Inbar Dolinko, a co-author of this article) found moderate support for regulation among technological experts, support that grows when the regulation is narrow and focused on a specific application or use. Furthermore, the letter has been met with cynicism by some in the artificial intelligence industry, who view it as overly broad and possibly motivated by personal interests. In the midst of an escalating 'arms race' between companies to produce the strongest and largest AI models, the call to pause development may be interpreted as an attempt by those falling behind to catch up. Finally, it should be noted that the letter is non-binding and unenforceable, no matter how many sign it.
Furthermore, the letter ignores the significant role of governments in the technological sphere: they invest massive amounts of money in AI research and development, often as part of national programs aimed at achieving dominance in the field. Their involvement is sometimes also the key to the success of private companies operating within their borders, through investment in national infrastructure or in education that accelerates the growth of talent. Given the technology's enormous potential for the security, operational, and intelligence sectors, it is difficult to imagine a country committing to such a non-binding agreement and jeopardizing its preparedness in these critical areas.
The ongoing international effort to restrict lethal autonomous weapon systems, some of which are based on AI, can offer insight into the challenges involved in regulating and limiting AI in general. This process is trapped in international politics and has been moving slowly at the UN for over a decade. It lags far behind the rapid pace of technological advancement, which is among the fastest in history. This is also an 'arms race' in which participants cannot afford to be left behind, given the immense potential of the technology and the risk of their rivals surpassing them or gaining an advantage by not imposing similar limitations.
While it is unlikely that the letter alone will bring about a delay in AI development, its extensive media coverage has provided an opportunity to raise important and urgent issues related to AI. Although AI holds enormous potential for the economy, security, and healthcare, it also carries negative aspects, such as deep-fakes, disinformation, algorithmic biases, unpredictable algorithms, and increased cyber risks. Companies and governments must not ignore these issues; they should address them through technological solutions and swift legislation. States have an obligation to respond, especially in dealing with disinformation and the broad possible damage to the labor market. Private companies alone cannot act effectively, and governments and tech giants must work together to create tools for dealing with this complex reality. With this approach, the technology can be harnessed for society's benefit while exposure to extreme risks is reduced.
Inbar Dolinko is a technology and policy researcher. She holds a master's degree in political science from Tel Aviv University, specializing in artificial intelligence.
Dr. Liran Antebi is the director of an advanced technologies program at the Institute for National Security Studies, a university lecturer, and a consultant to companies and organizations in the field.