Gary Marcus

Interview
"This is the teenage phase of AI. Tools with extraordinary power that are completely unreliable”

Gary Marcus, one of world’s foremost AI experts, recognizes the positive potential inherent in technology, but warns that we must urgently develop regulations against the spread of disinformation

“The way that artificial intelligence is progressing today will lead us to a world where no one trusts anything, and these circumstances will make it difficult for democracies to function," says leading AI expert Gary Marcus in an exclusive interview with Calcalist. "The combination of deepfaking and advanced language models is going to lead us into a world where bad actors can make up as much information as they want and whatever narrative they want, filled with references to studies that never happened with compelling bits of data that don't exist with fake references. And most people will likely simply despair, or believe what they want to believe and not be open to alternative views that might be well-reasoned and well-argued.
"And there are other concerns, of course: science will suffer from huge amounts of fake articles; people will get medical misinformation because of this, which will lead to them taking the wrong medicine or even dying; there will be an increase in cybercrime, such as forgery and kidnapping of people using deepfakes to extort money from their families; and then of course there's AutoGPT, a system where one bot makes another bot write code even though it doesn't understand things like security, which is going to cause a bunch of security holes. It's a disaster waiting to happen. These machines can be made to do very bad things, And I believe that we will see all kinds of alarming scenarios materialize, such as stock market crashes and countless other damages."
3 View gallery
מוסף שבועי 4.5.23 ד"ר גארי מרקוס
מוסף שבועי 4.5.23 ד"ר גארי מרקוס
Gary Marcus
(Credit: FLICKR BY Christopher Michel)
A professor emeritus of psychology and neuroscience at New York University, Marcus is one of the world's leading voices in artificial intelligence. He was born in Baltimore, Maryland, to a Jewish family, and showed an early interest in AI, writing code at the age of eight. At 16, he wrote software that translated Latin to English, which earned him early entrance to Hampshire College. Marcus studied cognitive science and went on to earn his PhD at MIT. He wrote his doctorate on early childhood language acquisition under the guidance of renowned neuroscientist Steven Pinker.
But that isn’t all, Marcus also founded two startups. Geometric Intelligence, which he sold in 2016 to Uber, and went on to become the basis of the company's AI laboratory, and robotics company Robust.AI, where he is no longer involved. He has also written about AI in The New Yorker for four years, and published five books. His most recent book, "Rebooting.AI,” published in 2019, quickly became a bestseller, was featured on Forbes' must-read books on AI, and was recommended by some of the most renowned scientists in the world including Pinker and Noam Chomsky, as well as chess grandmaster, Garry Kasparov, who famously played chess against AI-based software. Of Marcus’ book, Kasparov wrote, “Finally, a book that tells us what AI is, what AI is not, and what AI could become if only we are ambitious and creative enough.”
Marcus should be taken seriously because his prophecies often come true. For example, last year he wrote a column in Wired UK predicting that in 2023 we would see a first chatbot-related suicide. Since he wrote that column, a man committed suicide in Belgium after chatting with an AI chatbot on an app named Chai during which he was convinced to sacrifice himself so that the bot will save humanity from the climate crisis.

About two months ago, Marcus tweeted that 2023 would be the year in which the term “prompt injection attack” would make major media headlines outside the tech pages, which has since happened in both the Washington Post and The Register. This has resulted in more and more leaders of the AI community to call for a total paradigm shift, many of whom once criticized Marcus for similar remarks.
Despite the reputation he has gained as one of the most prominent skeptics in the industry, Marcus is a self-proclaimed AI lover. "Its potential capabilities are revolutionary," he says. "In medicine, it will help us understand how the brain works, and develop drugs for diseases that we have not been able to cure ourselves. It may be able to help us deal with climate change, and also do things like build house robots to take care of the elderly. If we do it right, in the long run, the sky's the limit."
So what concerns you?
"Many of these positive developments require us to do a better job in terms of safety and reliability, and in the short term, we have to ask ourselves if the benefits to productivity outweigh the possible risks to democracy. I'm not sure about the answer.
"I think of this moment as the teenage phase of artificial intelligence. What we have right now are tools with extraordinary power on the one hand, that are completely unreliable on the other. In many ways it resembles a teenager, who suddenly, for the first time, has some power, but doesn't really have a complete prefrontal cortex to stop doing some things that shouldn't be done. In 50 years we'll probably have a more mature artificial intelligence with much better control, but right now we have technology that is suddenly being used a lot and that worries me."
Prove that the bot is lying
AMarcus is currently less worried about doomsday scenarios where the machines rise up against us, like those voiced by some of his colleagues. He believes that we are still far from AI that is equivalent to or surpasses the intelligence of a human being, known in the industry as AGI (Artificial General Intelligence). His main concern, which is the focus of his advocacy, is the danger to democracy due to disinformation and fake news.
As an example, Marcus presents the alarming case of Jonathan Turley, a well-known American legal scholar, who about a month ago received an email from a colleague, who said that his name came up in ChatGPT's answer to his question, “Which law professors have been accused of sexual harassment?” The chat detailed that the alleged harassment took place while Turley was traveling to Alaska with a group of his male and female students from Georgetown University in 2018, and even attached as a reference a link to an article published about the incident in the Washington Post.
But such an event never happened. Not only did Turley not "make sexually suggestive comments" to the student or "attempt to touch her sexually," as the chat claimed, he also never traveled to Alaska on behalf of Georgetown University, where he never taught. Even the link attached to the article was broken, because such an article was never published.
Turley had no one to turn to to correct the chatbot, so he did what any man in his position would do, he published a column about it in USA Today, in which he clarified that this incident never took place, and expressed concern about the unreliability of this technology and its consequences. In the meantime, the Washington Post, which got involved in the scandal involutarily, decided to investigate the matter itself. When a Post reporter asked ChatGPT about law professors accused of sexual harassment, the bot refused to respond, but Microsoft's Bing system, powered by GPT4, repeated the false accusations against Turley, ironically citing as one of its "sources" the column he wrote for USA Today. "The bot completely fails to understand the Op-Ed, and this is the reality, it simply does not understand the words it’s manipulating," Marcus said.
3 View gallery
מימין אלון מאסק סאטיה נאדלה ו סטיב ווזניאק
מימין אלון מאסק סאטיה נאדלה ו סטיב ווזניאק
Elon Musk (from right), Satya Nadella, Steve Wozniak.
(Photo: AP Photo/Susan Walsh SeongJoon Cho/Bloomberg Anindito Mukherjee/Bloomberg)
Why did all this happen in the first place? Was there any information linking Turley to sexual harassment in Alaska?
"Turley spoke to the Washington Post in 2018 about his former student, attorney Michael Avenatti, who represented porn star Stormy Daniels against Donald Trump. The chat simply did not understand the fact that Turley has a former student working on a case related to a sexual scandal is not equivalent to Turley being personally involved in one. He only knows that the word ‘sexual’ appeared somewhere in the proximity of Turley's name."
This problem is not exclusive to ChatGPT, but characterizes all of its competitors, such as Meta's Galactica, which claimed last November that Elon Musk died in a car accident in 2018.
Disinformation is not a new problem exclusive to artificial intelligence. What makes this technology so threatening in this context?
"It's like saying, 'What's the big deal with guns? We already had ways of killing people even without them.' But as the scope and type of guns increases, and at the same time their costs decrease, the problem becomes much bigger. In the 2016 United States presidential election, Russia paid about $1 million a month for people to produce disinformation, and that means that it was limited in scope, because the amount could have been $2 million, but there wasn’t a budget. And suddenly, thanks to artificial intelligence, the cost of producing disinformation drops to almost nothing and the quality increases dramatically. You don't even have to be an English speaker in order to influence the elections in the United States with lots of variations of lies, which sound reasonable. There is a quantity, diversity and plausibility here that leads to the weaponization of disinformation, far beyond where it already was."
How do you teach common sense to a bot?
In recent months, it feels as though artificial intelligence is everywhere. From being the technology that will go on to replace more and more jobs, and achieve levels of productivity that we could previously only dream of, to even solving the climate crisis and curing cancer, to rising up against us and exterminating us all. Just lask week, The New York Times even published an article about how artificial intelligence is able to read minds based on fMRI readings. In many ways, writing about AI at this moment in time is a challenge in which failure is almost inevitable: the pace of progress is so rapid that by the time this article is printed, there’s a chance that it will already be outdated. Perhaps this is not surprising considering that last year the scope of investments in the field was already about $92 billion.
Marcus agrees that this is a dramatic crossroads, and helps map out the problems. To get to the root of the problem, he says, it is worth focusing on two aspects: the first is the overwhelming reach of ChatGPT in particular, which boasted 100 million users in three months, and is the fastest growing internet platform in history; the second is that in the same period of time, its parent company, OpenAI, went from a non-profit AI lab to a for-profit corporation with shareholders, which Microsoft is tightening its grip on by buying more and more shares. And, all this before we even discuss the damage to the environment caused by the massive consumption of these tools.
"We will have to monetize it somehow at some point," entrepreneur Sam Altman, CEO of OpenAI, tweeted in December. "The compute costs are eye-watering."
Such a situation urgently requires regulation. Marcus, who in recent months has become one of the most sought-after interviewees on the subject, is convinced that it is possible and points out some of the tech’s Achilles' heels. One of them is the excessive reliance on deep learning, which is based on big data. The problem, he explains, is that for it to work properly, the data needs to be collected from a very stable and closed system whose rules haven't changed in 2,000 years, but when it comes to driving, for example, we may never have enough information. This is why the road to autonomous cars is still long. "Musk has been promising that driverless cars are almost here since 2014, and we have demos, but making them into a formal product that you can trust in critical safety matters turns out to be really, really hard," he says. "The problem is in extreme cases. These systems are trained on routine circumstances, and have difficulty dealing with unusual things that happen."
Another major problem that Marcus points out is the difficulty of teaching artificial intelligence common sense. "Common sense is like dark matter - this thing that we know is out there but we can’t really understand it yet. Broadly speaking, common sense is the basic understanding of how the world works, the physics of certain objects and the psychology of people, and we use it to make decisions. If I show you that I have a mug here, you will be able to guess, for example, that there might be something in it, so I better not turn it over, otherwise its contents will spill. If I put coins in it, you will be able to guess that they are still in it, unless there's a hole at the bottom. Then if you hear them rustling, you can assume they're still there. These are things almost any human can do, because you have a coherent understanding of the world, and current machines don't have that."
Give me another example.
"When we watch a movie, we understand what is happening in it. If, for example, it appears that someone has left a child alone in a car and locked the door, we can guess what the potential consequences will be. Even if we have not seen such a chain of events before, we can all understand what is happening and what it says about the motives of the various characters: some of them want to rescue the child, and others want to keep holding him hostage. These are things we can draw conclusions about very quickly, which means we have an understanding of what is happening."
How do you know that they don't understand common sense? What are they actually doing when they write me some decent questions for an interview with you, or spew out some very impressive paragraphs about the historical justification for a Jewish state?
"When it comes to standard questions, the system has a lot of information that it can draw upon. On the question of why there should be a Jewish state, 10 million articles must have been published in the past, so when the system gives you an answer, it will give you all the important points. It may not be full of nuances, and it may not have an original angle, but it will give you the highlights that have been said by many people many times.
"The biggest problem occurs when you go outside of the box, such as with controversial issues. The more you ask follow-up questions and ask for more details, the more likely the system will make mistakes. For example, when people ask the chat for their biographies, then the majority will probably be correct, but when you go a little deeper, if they are scientists, for example, all of a sudden made-up studies will appear that have never been written about subjects they have not worked on, which may sound a bit like what they are working on. And I am sure that if you go a little deeper about the history of Israel it will be similar.
"People are already running ahead and saying that they will use ChatGPT as a history tutor, and I say: Are you really going to use a tool that makes stuff up? With the old style of web search, you could look at a website and say 'OK, it looks like garbage, I don't believe it', but with chatbots everything seems as if it came out of Wikipedia, both the true stuff and false stuff, and there is no indicator."
Head-to-head with Elon Musk
These days Marcus focuses his efforts on warning about the dangers of artificial intelligence, at least while it can still be controlled and steered in healthy directions. The reason for this, among other things, is his lessons from Covid-19. "Social media has hurt us with the spread of disinformation about vaccines, and artificial intelligence has contributed to that," he says passionately. "And the price was heavy - more variants spread and more people died. This is what pushed me to where I am today, after I thought that my next step would be building an artificial intelligence system. My eyes were opened to the consequences of the technology, and I became very worried."
As part of his efforts, he signed a letter published by the Future of Life Institute in March, alongside other well-known figures such as Yuval Noah Harari, Apple founder Steve Wozniak and Elon Musk. The letter, which was signed by about 1,000 of the most respected and senior people in the industry, called for a re-evaluation of artificial intelligence: "Should we let machines flood our information channels with propaganda and untruth?" it read. "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders."
The letter you signed sounds much harsher than what you have been saying.
"First of all, if I had written the letter, it would have sounded different. But in life, compromises are made. The goal was to raise awareness, and it succeeded in that - it forced congress to stop ignoring the problem and start taking it seriously. Suddenly, I began getting calls from senators once or twice a week.
"Secondly, contrary to what many people claimed, the letter did not call for a halt to all research on artificial intelligence, on the contrary, it called for increased research to make the technology more reliable. Although the letter focuses too much on the long-term dangers, in my opinion, rather than the short-term dangers, this wake-up call is critical. And the fact that Elon signed it, and also Yoshua Bengio, who is a huge pioneer in the field, really caught people’s attention."
Marcus is not used to agreeing with Musk. A year ago, when Musk tweeted that by 2029 we'd have advanced artificial intelligence (AGI), Marcus was not impressed. "I didn't believe it then and I still don't believe it," he says with a smile. "I wrote a little essay explaining why I don't think we're close to advanced artificial intelligence, and why I think Elon can't be trusted when it comes to schedules—it's one of his weak points. He built amazing rockets, but his predictions about time are far from amazing. At the same time, by chance, I was working on another article with my collaborator Ernie Davis, which detailed the criteria needed to conclude that we have reached the stage of AGI, so I decided to challenge Elon with a $100,000 bet that we would not have AGI until 2029. That's a lot of money for me, and pennies for him, but it's real money."
What are the criteria?
"AGI needs to know how to do five things, some of which are things any reasonable person can do, such as watching a movie or reading a book and understanding what's going on in them. Others are a little more difficult, such as coding 10,000 good lines of code, and not pulling them from some database, but to work on a new problem, like humans do. I challenged Elon that if artificial intelligence knows how to do three of the five criteria by 2029, then he wins. Within 24 hours someone upped my offer to $500,000, but so far he has not responded, and I don't think it's in his best interest to respond because he can't really stand behind his prediction. We only offered this bet so that he would take some responsibility for his public statements, which are often hasty and not really based on an understanding of science."
3 View gallery
Generative AI
Generative AI
Generative AI.
(Photo: MMD Creative/shutterstock)
Now he has announced TruthGPT, which is supposed to compete with existing models and prevent the spread of fake news.
"I think he confuses truth with political correctness, and doesn't quite understand what GPT is supposed to do. To start with, the name he chose is an oxymoron, because GPT systems are good at many things, but they are not good at distilling the truth. They don't have the internal architecture to track a set of facts and validate things relative to those facts. This is a completely different process from what they do, which is to predict the next word in the sentence and the one after it."
What do you think Musk is trying to do?
"He seems to think that a reduction in political correctness will lead to an increase in the truth. But I believe that these are actually two separate questions, with some overlap between them, and indeed, sometimes extreme political correctness can neuter the truth in all kinds of ways, just as any political view can. But the core issue is how you get the system to respect the facts, which are widely agreed upon, and use them as a point of reference. For example, we all need to know that even if someone says the moon landing was fake, it's not something that's ever been proven and it's not a matter of political correctness. It's a matter of doing your homework and understanding the science, and what is happening in the real world. He is simply mixing political correctness with facts, and I think this will lead him to the same place as many other people who have previously stated that they are working for the truth, like Pravda or Donald Trump's platform Truth Social. Often, when people declare truth rather than working on truth, they wind up with a lousy result".
Learn from the Food and Drug Administration
Marcus not only warns of the dangers of artificial intelligence, he also offers practical solutions from tools such as anti-disinformation software, similar to antivirus software; broader regulation, such as that of the U.S. Food and Drug Administration (FDA), for experiments in artificial intelligence systems; and the establishment of a global AI governance body, in the spirit of the European Organization for Nuclear Research (CERN) and the International Monetary Fund, which he is currently promoting.
"This administration will sit in the United Nations, or it will simply be an independent organization," he predicts. "It is important that it be international, neutral and non-profit, with representatives of companies and governments, with an opportunity for citizens from around the world to make their voices heard and a sincere attempt to coordinate between everyone.
"Governments around the world are currently dragging their feet, and are not very coordinated in this matter, as we learned from Italy banning the use of ChatGPT, and on the same day England announced that it does not intend to appoint a central regulator or ban the use of tools like it. The whole world will benefit from some coordination, from the convergence of experts in the field, because many of the laws that are being written now are being written by people who do not understand the tools, and things are progressing quickly. This will also benefit tech companies themselves. After all, they do not want to train a separate language model for each and every country, certainly not within the United States. It's just too expensive, and the ecological price is also heavy.
“I proposed banning the mass distribution of AI tools until we have some kind of regulatory infrastructure to evaluate the costs versus the benefits, like we do with new drugs. Nobody's going to say, 'Hey, I've got a great cure for cancer, go ahead, everybody take it,’ right? First you test it on 100 people, see how it affects them, then you test it on 1,000 people, and so on. So we need to apply clinical trial processes to artificial intelligence.
"Another type of policy is legislation against the mass production of harmful disinformation. It's one thing if you post a lie on Twitter to your friends, but if you're going to produce 100 million lies a day to systematically affect the world economy, manipulate stock markets or presidential elections, maybe it should be punishable under international law. We need to think about all these questions that weren't thought of before because there simply weren't the tools to commit all kinds of crimes, and now there are. We'll also need to think about how to enforce such things. There are many, many questions, but I maintain that the creation of a neutral international non-profit organization would be a good start."
You also claim that regulation does not necessarily conflict with innovation, and give the example of electric vehicles as a result of environmental regulation. In a recent newsletter you wrote that China will beat the West in the race for artificial intelligence if we don't step up regulation. Really?
"It was of course a deliberately provocative text to get people thinking. Because the libertarian extremism of Silicon Valley people, who don't want any regulation on the grounds that it stifles innovation, is stupid. After all, this story of letting the market sort itself out doesn't work so well, and the evidence is the failure of crypto. That's why we need to produce good regulation, which will drive positive progress."
One of the problems in promoting such regulation right now, Marcus explains, is the rift in the AI ​​safety and ethics community, which is divided between those who focus on its short-term risks and those who focus on its long-term risks. "We need to formulate a sort of investment portfolio of how we distribute our money in risk research," he says. "I would put a lot more of it on short-term risk than long-term risk."
About a month ago, the influential AI researcher Eliezer Yudkowsky published an article in Time Magazine, in which he claimed that the continued development of technology will inevitably lead to the extinction of the human race in the long term. This is even more worrying than the damage to democracy that you are talking about.
"I think he is wrong and exaggerating, because he thinks we are much closer to AGI than we really are, and in my estimation he does not understand enough about how these systems work and what their technical limitations are. On the other hand, many people ridiculed him without really addressing the issues he raised. And I don't think that's the right thing to do. I think it's better to think deeply about how we intend to stop the danger. Here he has a good point, which is we actually don't have any plan. Even if he thinks the risk of a disaster occuring is 100% or 90%, and I I think it's less than 1%, we still need to have a plan.
"I wish all the people who are worried about the long term would stop yelling at those who are worried about the short term, and vice versa, and that we could be more united in front of the rest of the world and simply say: artificial intelligence carries many risks, all of them essentially stem from the fact that we don't have good enough control over these systems.
"In the short term, if I ask the system to write me a biography, it should write me a biography and not make things up. I shouldn't have to tell it not to do it. And in the long term, for example, if I allowed machines to control nuclear weapons, then that would require me to be 1,000,000% sure that they wouldn't randomly kill people. We still don't have a good enough solution for this, and the members of the safety and ethics committee should have each other's backs, instead of tearing each other to pieces. Because right now congress is saying: 'Well, if you can't agree, then what am I supposed to do?'"