Yoav Shoham

The man trying to ground the AI boom

Prof. Yoav Shoham, one of the world's leading artificial intelligence researchers and founder of AI21, discusses hype cycles, unreliable systems, and the long road to trustworthy artificial intelligence.

The interview with Prof. Yoav Shoham takes place in the stylish offices of AI21 on Leonardo da Vinci Street in Tel Aviv, next to the elegant residential building that was hit by a missile during the first Iran War. The offices are almost empty, this time because of the second Iran War and the missiles flying toward the city.
A sign still hangs on the front door with a high-tech joke: "Please do not enter, Purim is loading," a reminder of a Purim party that was planned but never happened. Purim was not "loaded" this year, and Passover was barely celebrated, but the decorations still hang in the large hall that was intended to host the event. "Pompeii," Shoham offers as a charged image of the office, emptied of employees, frozen in its decorations, and preserved like the ancient Roman city buried by the eruption of Mount Vesuvius.
Shoham (70) is one of the leading scientists in the research and application of artificial intelligence and among the first in the world to work in the field. Unlike the residents of Pompeii, who ignored warning signs before the eruption that led to their destruction, he observes with open eyes the AI revolution sweeping the world. During the interview, he points out its failures and dangerous blind spots, not in an apocalyptic sense. Shoham, who himself founded a promising AI company, does not believe the technology will destroy the world anytime soon. But the conversation with him clarifies the scale of expectations and defines the boundaries of artificial intelligence, which currently seems to be taking over nearly every aspect of our lives.
Yoav Shoham
(Orel Cohen)
"Until recently, the world talked about the power of the technology," says Shoham. "On the one hand, people said things like 'there will be no more programmers' or 'the business models of companies will change.' On the other hand, they spoke about 'terrible dangers': that AI could make the human race redundant or bring about its destruction. These statements, on both sides, attribute enormous power to artificial intelligence. Both the utopia and the apocalypse scenarios rest on a belief in the unlimited potential of the technology and ignore its limitations. Only recently have we begun to discuss the limitations of artificial intelligence."
Like what?
"For example, and this has been widely discussed, the very large gap between the interest organizations show in AI, the money invested in the field, and the number of actual AI product launches. The main reason for this gap is the unreliability of artificial intelligence systems, which can be impressive and yet, at times, surprisingly ineffective. They are like a brilliant employee who can also destroy the company with a wave of his hand. You can’t launch a product when you don’t have enough confidence that it won’t produce complete nonsense."
And today, are you more encouraged or discouraged by this statistical unreliability of AI?
"My approach is not emotional, so I wouldn’t say I’m depressed or discouraged. Behind ChatGPT, for example, there is a whole system that decides which model to use or which tool to activate. If you need to solve a mathematical problem, the system can use a calculator instead of reinventing the wheel and risking a flawed one. The more we rely on proven external tools, the more we will build systems in which the emphasis is on verification and validation. The stronger the validation, the greater our trust in these systems will be. We are moving in the direction of systems that can be trusted, but the distance is still significant."
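The idea Shoham describes, routing a request to a proven deterministic tool instead of letting a language model improvise, can be sketched in miniature. This is a hypothetical illustration, not any vendor's actual architecture; `fake_model` stands in for a real LLM call:

```python
import ast
import operator

# Deterministic "calculator tool": safely evaluates pure arithmetic,
# so the model never gets a chance to hallucinate a wrong number.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculator(expression: str) -> float:
    """Evaluate an arithmetic expression without calling a model."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        raise ValueError("not pure arithmetic")
    return ev(ast.parse(expression, mode="eval"))

def fake_model(query: str) -> str:
    """Stand-in for a real language-model call."""
    return f"[model answer for: {query}]"

def route(query: str) -> str:
    """Send arithmetic to the calculator; everything else to the model."""
    try:
        return str(calculator(query))
    except (ValueError, SyntaxError):
        return fake_model(query)
```

Here `route("2 * (3 + 4)")` is answered by the calculator, while a free-text question falls through to the model; the verification burden shifts from the model's output to the router's dispatch decision.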
In other words, are we still far from overcoming the tendency of AI systems to hallucinate answers?
"These are systems that can be very misleading, precisely because they are so eloquent and convincing. They can present something false in a way that sounds highly credible. When you read a text about something you understand well, you can recognize how flawed it is, and an AI-generated summary may appear as a mix of invented facts and incoherent reasoning. Sometimes it’s remarkable. For example, I am an amateur sculptor, and I worked with a type of stone that I wasn’t sure was natural. I asked Gemini and ChatGPT, and they provided arguments that sounded logical, based on the shape, the ‘hairlines,’ and even the supposed name of the stone. But even facts and arguments that seem relevant and correct can be wrong, despite being presented very convincingly."
So they didn’t recognize the stone. It’s not a big deal.
"When you’re a sculptor, the cost of a mistake is not that high, it doesn’t matter much if the stone is chalk rather than flint. Even when the technology is used to help artists create images or videos, inaccuracies are relatively harmless. But if you make a decision to operate on a patient based on incorrect information, the cost of a mistake is far greater. The same applies if you rely on an AI-generated summary of a financial report that contains errors, the consequences can be significant."
So in your opinion, the weakness is that in certain fields, less than 100% accuracy is unacceptable.
"Let’s put it this way: there are areas where there is no objective truth, and all that can be done is statistical prediction. In those cases, it is reasonable to use such technology, for example, in weather forecasting. But in fields where there is a correct answer, and there is a real probability of error, reliance on such systems becomes problematic."
So is verification and accuracy the main challenge?
"Reliability and accuracy are the main barriers when it comes to artificial intelligence products. We’ve reached a point where AI provides excellent support for brainstorming, generating ideas, and handling much of the routine work effectively. But we are not yet at a stage where we have a ‘fire-and-forget’ assistant that can be fully trusted. Not only because of reliability and accuracy issues, but also because these systems are expensive for businesses, and in some cases not economical. In addition, the lack of transparency, how conclusions are reached and why, also undermines trust."
AI21 offices decorated for Purim
(Diana Bahur-Nir)
"The market is now manic-depressive"
For now, none of this seems to be stopping the market, and companies perceived as threatened by AI are plunging on the stock exchange.
"I separate the market reaction from reality. The market follows slogans, and now the slogan ‘software as a service is dead’ is taking over everything. These are waves that come and go, and I'm a little concerned, because I do believe in the AI wave, but where there is overpromise, there will also be overcorrection. I hope we can moderate the hype so that the correction doesn't hurt too much. Now we are seeing a manic-depressive market. On the one hand, there is blind faith in AI and supercomputing, and on the other, talk of a major correction. Right now, people are beginning to question whether there are compelling uses for AI that justify the large investment in it."
Slogans have a real-world impact. Israeli company Wix, for example, is often cited as a company taking a hit from AI, having lost two-thirds of its value since the beginning of 2025.
"The stock is not performing well, but I don't know their business in detail. There is talk of a ‘SaaSpocalypse’ - the idea that software-as-a-service companies will be made redundant by AI. In my opinion, this is a sweeping and inaccurate claim. There are aspects in certain industries where AI can replace people or empower them. Yes, there are companies laying off developers. But in reality, companies that over-hired are using this as an opportunity to reduce staff. On the other hand, companies like IBM, which laid off thousands, are now hiring aggressively. It is true that things are changing, some tasks can now be done faster and more cheaply with AI, but I don’t think there is room for such sweeping conclusions."
Is there an exaggeration in the promises of “we do AI”?
"Absolutely. AI washing is not just using the term AI to justify layoffs, there is also an overstatement in how companies present their use of the technology. Show me one company today that does not call itself an ‘AI company’ simply because it uses AI in some limited way. The term has lost some of its meaning. There are many so-called AI experts who lack a real background in artificial intelligence. It has become a passing trend that has diluted the meaning of what it means to be an ‘AI company.’"
"It’s a problem to be talking nonsense 5% of the time"
Compared to self-proclaimed AI experts who appear everywhere, Shoham is a deeply experienced figure in the field, both academically and commercially. After earning a degree in computer science at the Technion, he completed his master’s and PhD at Yale University, before moving to Stanford, where he spent 30 years in academia. Among other achievements, he founded the Stanford AI Index, an annual report analyzing trends and developments in artificial intelligence. After returning to Israel, he also founded “WeCode,” an initiative to train programmers from underrepresented communities. In June 2023, he was appointed head of the scientific advisory committee for Israel’s National Program for Artificial Intelligence.
At the same time, Shoham has consistently been involved in business ventures. He has sold three startups - Katango, TradingDynamics, and Timeful - and is now focused on AI21 Labs, the artificial intelligence company he co-founded in 2017 with Prof. Amnon Shashua and Ori Goshen. In December, reports emerged that the company was in talks to be acquired by Nvidia for $2-3 billion. While the company denied the report, it did not deny that discussions with potential acquirers were taking place. “The news was denied by both us and Nvidia,” CEO Goshen wrote to employees. “We are in talks with several potential parties. If and when there is something concrete, you will be the first to know.”
On the morning this interview was published, foreign media reported that AI21 was in talks to be sold to the AI infrastructure company Nebius. Shoham remains cautious: “Every now and then there are rumors that we are being sold. Our consistent policy is not to comment. We will report when there is something concrete.”
The sale to Nvidia did not come to fruition?
“A sale is not what we are working toward. The goal is to create value, technologically or commercially.”
AI21 is an unusual company in the Israeli landscape, largely because it identified the potential of large language models early, even before the ChatGPT wave. It has raised about $600 million from prominent investors, including Nvidia, Google, and Intel. Its first product, Wordtune, focused on writing assistance, but as the space became crowded, the company shifted toward developing more advanced and reliable language models for enterprise customers willing to pay for accuracy.
“We founded AI21 based on a technological thesis centered on deep statistical learning,” says Shoham. “Today, we understand that language models are necessary but not sufficient in terms of reliability. Competing in the consumer space requires enormous investment, so that is not our game. We shifted to enterprise customers, but in that market, if you are correct 95% of the time and wrong 5% of the time, you will not be trusted. That brings us back to the central challenge: achieving sufficient reliability.”
And what is the solution?
“The focus is shifting from the language model itself to orchestration, managing multiple AI agents working in parallel. If previously people used tools like ChatGPT for simple queries, today we are moving toward systems of agents capable of handling complex tasks. The emphasis is now on managing these systems and significantly reducing hallucinations, the errors produced by language models. Our system, called ‘Maestro,’ is based on this concept. While the core idea remains the same, both the technology and the business model have evolved.”
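The orchestration idea, several agents working in parallel with a validation step that filters out hallucinations, can be sketched as follows. This is an entirely hypothetical toy that illustrates the concept Shoham describes, not AI21's Maestro; majority agreement is a crude stand-in for real validation layers:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def orchestrate(task, agents, quorum=2):
    """Run agents in parallel and accept an answer only if enough agree.

    Returning None signals "not confident enough" - the orchestrator
    would rather abstain than pass along a possible hallucination.
    """
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(lambda agent: agent(task), agents))
    best, votes = Counter(answers).most_common(1)[0]
    return best if votes >= quorum else None

# Toy agents: two agree on an answer, one "hallucinates".
agents = [lambda t: "flint", lambda t: "flint", lambda t: "marble"]
```

With these agents, `orchestrate("identify stone", agents)` returns the majority answer, while three agents that all disagree would yield `None`; the design choice is that an abstention is cheaper than a confidently wrong answer.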
Who are your customers?
“For example, the French retail company Fnac, which sells cultural products, books, music, electronics, and computers. It has millions of products online and initially approached us to automate product descriptions. Once it gained confidence in our technology, it expanded usage, for instance, to generate thousands of responses for customer service inquiries after purchase. These are not trivial questions like ‘tell me a joke,’ but real issues such as ‘I have a problem with the vacuum cleaner I bought, what should I do?’ We have done similar work with One Zero Bank in customer service.”
You have raised $600 million and are still not profitable, correct?
“I can’t comment on that.”
How was AI21 founded?
“When I returned to Israel, many people wanted to meet me. I was like a new dog in the park that everyone wanted to check out. At the time, I started the social initiative ‘WeCode.’ A talented participant named Noga suggested I meet her boyfriend, Ori Goshen, who later became the company’s CEO. When we decided to start AI21, we invited Amnon to join us. I had known him from his time at Stanford.”
Recently, there have been rumors of tensions between the founders.
“Amnon is very different from me,” says Shoham.
In what way?
“He is an entrepreneur who is also an academic, while I am an academic who is also an entrepreneur. When we invited him to join, he agreed on the condition that he would be actively involved, not just an investor. And he has been, both strategically and technologically, although naturally not to the same extent as Ori and me. He has other commitments.”
Other commitments? He founded another AI company, AAI.
“He is our chairman and is involved in multiple ventures. He is a very busy person, and his involvement is different in nature, that is clear.”
You have also sold several companies.
“That’s right, but sequentially, not in parallel.”
"Companies will charge according to the value they provide"
It is no coincidence that Shoham is cautious about selling the company or focusing on profitability at this stage. Artificial intelligence is a revolution on a historic scale, but for now, chatbots and AI applications have become resource-intensive systems that consume vast investments and enormous amounts of energy, with uncertain paths to profitability. Each new innovation, such as Claude Code, the currently popular programming tool, renders previous products obsolete while simultaneously requiring costly system adjustments.
"The basic cost structure of AI products is different from that of classic software," says Shoham. "At the heart of modern AI products is a language model, which is expensive and fundamentally different from the cost structure of traditional software. When you call on a language model, you wake a beast from its den, and it is a big, hungry beast. The more you use it, the more it costs. Using language models from companies like OpenAI or Anthropic is inherently more expensive than using classic software."
Will these investments pay off, and if so, when?
"In the short term, there is a certain level of overhype around AI, and in the coming years there will likely remain a gap between the investment and what the technology can actually deliver. But in the long term, over the next decade, the hype may actually be insufficient. Any company that wants to position itself for the AI era over the next 10 to 20 years must make the necessary investments now."
When will we know what truly works and what is actually needed?
"We are still in the early stages of the modern AI revolution, so it is difficult to say exactly when things will become clear, perhaps within the next two or three years. The technological stack will become better defined, and systems for building and managing AI agents will mature. There are many layers in this field that have not yet stabilized. It will take several years for both the technology and business models to solidify."
Will the gap between "hot air" and useful products narrow?
"Out of the many products currently being developed, only a small percentage will succeed, but the gap between investment and successful product launches will narrow. This year and next, we will see a stronger focus on return on investment and on building products that address real needs. The emphasis will not only be on AI technology itself, but also on product management, turning technological ideas into usable, reliable products for companies. We have moved from a phase of widespread experimentation with limited application to a clearer understanding of where the technology can genuinely add value. We are still at the beginning of this process, but in the coming years we will reach a more stable and responsible phase of AI adoption across industries."
So the number of programmers or lawyers will not decline dramatically in the coming years?
"The key issue is not which jobs will disappear, but how the scope of tasks will change. Some tasks previously performed by lawyers are already becoming redundant, but new tasks are also being created. In the legal profession, for example, I expect the traditional billing model, charging by the hour, to change. As long as work was done manually, there was a clear link between effort and output. AI breaks that link. Going forward, companies will increasingly charge based on the value they deliver, not the number of hours worked."
"Who are we if a computer is part of us?"
Shoham was born in Haifa to a father who was a civil engineer and a mother who was a homemaker. He studied at the city’s Reali School and, according to him, arrived at the Technion almost by accident. "Like many things that happen by chance, I ended up studying computer science at the Technion somewhat reluctantly. I served in the Shaked Battalion. I didn’t initially want to study computer science, but there was clear pressure from my parents. I actually thought I wanted to be an architect, but I didn’t want to go through the required entrance exams. I didn’t think I was good enough at drawing, and my grades were sufficient for computer science. It seemed boring but useful, and when I started, I found that it really was boring. But there were theoretical aspects of computer science that were fascinating, it’s where art meets mathematics."
You didn’t want to study it, but you went on to do a doctorate.
"Because I didn’t want to work in the profession. Then I encountered the concept of artificial intelligence. I’ve always been interested in philosophy, and to some extent psychology and human behavior, and this seemed like a fascinating opportunity to explore how intelligent a machine could be. It’s a question that still occupies me, what is humanly possible for a machine. In 1982, after the First Lebanon War, I went to Yale for my doctorate. I thought to myself: ‘You can study philosophy and get paid like in computer science.’"
How did you end up with a 30-year career at Stanford?
"I was offered a position as a lecturer, and it was clear that this was the best place in the world to study AI. Unlike Yale, Stanford had an outpouring of intellectual energy in the field. At my first faculty meeting, I felt like I was sitting around a table with gods, the best computer scientists in the world, including John McCarthy, who coined the term ‘artificial intelligence.’ At the time, I felt I had arrived a bit late, that I was one generation behind. Today, those who came after me are considered the fathers of AI. So I didn’t invent AI, but I am among the elders of the field."
From left: Yoav Shoham, Amnon Shashua, Ori Goshen
(Orel Cohen)
So why is the major breakthrough in artificial intelligence happening only now?
"Modern AI is about 70 years old and has gone through cycles of hype and disappointment. In the 1980s, there was a major wave, conferences, funding, excitement, followed by a downturn in the 1990s, what we called the ‘AI winter.’ There was deep disappointment because the technology promised more than it delivered, especially in machine learning. The dominant approach then was ‘symbolic logic,’ which tries to define explicit rules about how knowledge works. It had value, but it was limited. Over the past 15 years, that has changed. What was once a subfield, machine learning, has become dominant, thanks to massive amounts of data and computing power. Deep learning applied to language was the real breakthrough, and the field still retains elements we don’t fully understand."
Why is it still mysterious?
"Because it is essentially statistics at an enormous scale that can produce profound and sometimes brilliant answers, while also making mistakes and producing complete nonsense. In other words, this is a powerful field that we still don’t fully understand. I believe that eventually we will combine these powerful statistical methods with symbolic approaches. Integrating symbolic reasoning with statistical models is one of the central challenges of modern AI."
Today, do you allow yourself to be enthusiastic about artificial intelligence? Perhaps even fascinated by it?
"I support measured enthusiasm. Excessive enthusiasm will ultimately harm us, the more hype there is, the harsher the eventual disappointment."
You emphasize the distinction between “knowledge” and “understanding,” and you are researching it today.
"Ultimately, the difference between machines and humans can be seen in that distinction. A language model may provide correct answers, but if it also produces nonsense, it does not truly understand. When humans understand something, they can explain it. Language models usually cannot explain their reasoning, sometimes they generate explanations after the fact, but that’s not how they arrived at the answer.
"Another key element of understanding is compactness, having a concise representation or method. Memorization, for example, does not mean understanding multiplication. On the other hand, writing out every possible multiplication is impractical. True understanding requires a compact procedure that can be applied across problems. If we can better understand the gap between what machines do and what humans do, we may better grasp the limitations of AI. But beyond that, what interests me most is understanding human beings themselves. If there is a fundamental difference between humans and machines, it may help us understand ourselves better."
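Shoham's multiplication example can be made concrete: a memorized lookup table only covers the cases it happens to store, while a compact procedure covers every case. A toy sketch (assuming non-negative integer operands for the procedure):

```python
# "Memorization": a table that stores every product of digits 0-9.
# It knows nothing outside the cases it has memorized.
memorized = {(a, b): a * b for a in range(10) for b in range(10)}

def multiply_by_table(a, b):
    """Returns None for any pair the table never memorized."""
    return memorized.get((a, b))

# "Understanding": a compact procedure (repeated addition) that
# generalizes to any non-negative operands without storing them.
def multiply_by_procedure(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total
```

The table answers `(3, 4)` but fails on `(12, 12)`; the procedure handles both, which is the "compactness" Shoham points to: a short method that applies across problems rather than a list of memorized answers.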
Intuition suggests there is a clear difference.
"When we see machines performing tasks we once thought impossible, the question becomes how far they can go. We know humans are different from, say, a refrigerator, but what about a computer? What will computers be like in 100 years?
"But asking how we compare to computers may not be the right question, because computers are becoming part of us. Even today, if my mobile phone, essentially a computer, is taken away, I feel as if a limb has been removed. And this will become less metaphorical over time. We will increasingly interact with computers indirectly, whether through brain-machine interfaces or embedded devices. The real question becomes: who are we when the computer is part of us?
"There is no simple answer. In a TED talk, I asked whether a computer can think, be creative, feel, be conscious, or have free will, qualities we associate with humans. But the more you examine concepts like ‘free will,’ the more you realize how little we truly understand them. These are deeply human ideas that are becoming less obvious."
A person is more than statistical and behavioral patterns.
"I see that as a question, not an answer."
"It's hard to be young in a chaotic world"
Shoham lives in northern Tel Aviv. He is a father of three daughters (aged 33, 21, and 20) and a 15-year-old son from his first marriage. His ex-wife died in tragic circumstances, and two months ago he remarried Dity Ayalon, an architect and owner of a software firm for architects, who is 17 years younger than him. "I got married two months ago, but I’m still effectively a single father with late parenthood, and also in a late relationship and marriage. But, knock on wood, I’m in better shape now than I’ve been in 20 years. I ran 10 kilometers in the Tel Aviv Marathon for the first time, seven times more than I used to be able to run."
Doesn’t biology limit you?
"One day it will catch up with me. But for now, so far so good, like the man falling from the 30th floor who says on the 17th floor that everything is fine."
Someone as rational as you gets married at 70?
"We are all complex and full of contradictions, myself included."
Does youth make you jealous?
"I’m not driven by envy or regret. If anything, I feel a bit sorry for young people because of what they don’t yet know. One day I may become slower or less sharp, but until then, I’m content."
What knowledge do young people lack?
"The hardest thing is knowing what you want. We tend to focus on how to achieve goals, rather than on choosing the right goals. Being young in a chaotic world is very difficult. It’s hard to understand reality and predict what lies ahead. I would have liked to start life with optimism, but today we live in a world of uncertainty, distress, and suffering, as the Buddhists remind us."
Is your business activity alongside academia driven by intellectual curiosity or by financial motives?
"I have an entrepreneurial side, and starting companies comes from both curiosity and a desire to make money. Anyone who claims money doesn’t matter is not being honest. It begins with an exciting idea that you feel compelled to pursue, but financial success is also important. A company must be commercially viable, otherwise, why not stay in academia?"
Is AI21 the last company you’ll start?
"Every company is the last one, until the next one. At some point, I’ll stop, even if only because I’m no longer around. I don’t feel the need to prove anything. I’d rather help others and contribute to the country’s success. That matters more to me now than starting another company."
You head the national AI advisory committee, but the state does not seem to support you.
"Not much is happening at the moment. An AI administration has been established in the Prime Minister’s Office, and perhaps it will be more effective going forward."
You have publicly criticized the prime minister and judicial reforms. Has that affected cooperation?
"I don’t think so, but you’d have to ask him."
What are Israel’s chances of achieving global leadership in AI?
"If we try to compete head-on with the biggest players by building ever-larger models, we will fail. That race is already lost. Even wealthy countries like Saudi Arabia or the UAE will struggle there. Instead, the opportunity lies in areas like AI agents and orchestration, managing intelligent systems. This is still an open field.
"In applied AI, building specialized systems for domains like customer service, transportation, finance, or combining AI with cybersecurity, there is real potential. Historically, Israel has a strong foundation in these areas, and that is where it can create meaningful impact."