
The AI that detects AI: The companies fighting lies in the age of deepfakes
Young startups like Israel’s AI Light are racing to build tools that expose fake images, videos, and text before disinformation becomes unstoppable.
Even before Donald Trump returned to the White House, it was clear that he and his allies were waging war on reality itself. Throughout the election campaign, Trump and his ally Elon Musk echoed false messages, from spreading AI-generated images (such as Taylor Swift fans supporting Trump, or opponent Kamala Harris in a communist uniform) to baseless claims that it was the Harris campaign that was spreading AI images.
Trump, whose camp gave the world the term "alternative facts," is at war not only with the American people, democratic institutions, and the global economy, but with truth itself. And he's not alone. Other authoritarian regimes, from Russia to Israel, are leading an ongoing effort to blur the line between reality and fiction, and they now have more powerful tools at their disposal than ever before: generative artificial intelligence (GenAI) models that can create completely fictional yet realistic-looking texts, images, and videos. These efforts are producing a world in which it is very difficult, sometimes impossible, for the average person to know what's true and what's fake, what's reality and what's fiction.
But a young Israeli company from Tel Aviv, AI Light, is confident it has a solution: an advanced system for analyzing texts, images, and videos that can identify within seconds whether they were created by AI (and which model), underwent human manipulation (in Photoshop, for example), or, in the case of texts, were created by humans but contain fake news. "As time passes, more people understand they need to verify the content they see, regardless of the source," CEO and founder Maureen Sarnio told Calcalist. "Our mission is to make the technology that enables them to do this accessible to everyone."
The idea for AI Light was born in September 2023 from a phenomenon familiar to anyone with an elderly relative. "My mother constantly receives fake news messages," she shared. "Images, messages like 'Arabs are standing outside your door.' She really wants to help, so she spreads it and sends it to everyone. I thought it was a shame that before spreading this incendiary content, she couldn't forward it to a WhatsApp account that would tell her whether it's true or not."
Sarnio, who was on maternity leave at the time, shared the idea with her co-founder, CTO Guy Rosenthal. "We started talking about things we wanted to do, and came up with the concept of a WhatsApp account you could forward content to and it would tell you if it's true or not. That was the first product we released."
Then came October 7th. "I have a friend who works at Channel 13 News. I told her what we were working on, and she said, 'Help us, because we're getting lots of images from Gaza and don't know what was shot today, what was shot yesterday, what's real. The IDF says it bombed a market in Jabaliya, but there are reporters saying the market is fine. They send pictures claiming the photo is from right now.' They had no way to verify it. They used to say that if you trust the source, you trust the content. On October 7th, that assumption collapsed. They asked us to help, and we wrote image-search code that could pull images from repositories like social networks and compare them. We saw that, indeed, images claimed to have been taken that day had not been."
Rosenthal: "When an image is fed into the system, we run algorithms on it. Part of it is searching for elements in the image, areas of interest, against sources like social media, to check if it's an image that was published on Facebook, X, etc. To this we add analysis, either looking at metadata or the context of the image. For example, I found a similar image but it's not Gaza yesterday but Syria from three years ago."
And you did this for Channel 13 News.
Sarnio: "Yes. They sent us some very difficult cases. This was right at the beginning, we weren't even a company yet. October 7th was the trigger to turn the idea from something my mother needs to something more business-oriented. Initially, we thought the clients would be only news organizations. But as we started delving deeper into who could benefit from this, how to take very technical tools and make them accessible to a wide audience, we realized there's a variety of professional users, not just investigators and news organizations interested in such technology."
AI Light is competing in a crowded, fast-emerging field with several notable Israeli competitors. One of the prominent and veteran local players is Copyleaks from Kiryat Shmona. Founded in 2015 by Alon Yamin and Yonatan Bitton, the company began its journey developing an automatic text plagiarism detection system, primarily for academic use.
When ChatGPT appeared in 2022, the company quickly adapted its system to detect AI-generated texts. It now offers solutions tailored to academic institutions (detecting fake papers) and corporations (verifying that AI models developed by companies are trained only on human-written content, and preventing the sharing of sensitive information with AI models), as well as advanced capabilities such as detecting the use of copyrighted texts by AI systems.
Another prominent competitor is Clarity, founded in 2022 by Michael Matias, Dr. Natalie Friedman, and Gil Avriel, which develops a platform for detecting AI forgeries in video clips, images, and audio and verifying their authenticity. Clarity combines advanced cybersecurity technologies with AI models that successfully identify the techniques used to disguise synthetic deepfakes.
Dtect Vision, founded by Prof. David Mendlovic, Dr. Dan Raviv, and Lior Moyal, focuses on preventing fraud in the financial sector by detecting deepfake use at critical junctions, such as onboarding new customers or logging in existing ones. It offers additional capabilities such as document-authenticity verification, including AI-forgery detection, and user-identity verification through analysis of movement and activity patterns.
Cyabra, founded in 2018 by Dan Brahmy, Ido Shraga, and Yosef Dar to develop solutions for detecting malicious content and disinformation, has in recent years extended its AI-based system to also detect AI-generated content, alongside fake profiles and harmful content created in more "traditional" ways.
You started with images that were recycled or edited in Photoshop. But there are also GenAI models that can generate images and videos that look completely authentic, or authentic enough, and that you won't find anywhere on social media.
Rosenthal: "We started dealing with this relatively quickly. All our development is driven by user feedback. We saw people were interested in this, and it's part of the analyses we perform. An image I haven't found anywhere undergoes analyses that check whether it's a GenAI image, whether it's a deepfake, whether someone did Photoshop manipulation."
How is the analysis performed?
Rosenthal: "Through machine learning models we've developed, combined with existing tools. Ultimately, you need to develop AI that detects AI. The task here is relatively defined. Unlike the GenAI task of generating content for me now, our task is to detect whether it's original media or media created by GenAI."
Sarnio: "In parallel, there are analyses that are more old-school, but we saw news organizations are still very interested in this. Like detecting classic Photoshop."
What do you do for text detection?
Sarnio: "You can detect whether it's AI-generated text, and additionally, you can ask the system about factual claims in the text, and it will say whether they're true or not with appropriate sources. The next thing we'll release is the ability to input full texts, entire articles, to examine whether the information in them is correct or not. There's lots of use in text, lots. Many students, universities, lecturers checking if there's AI use in student submissions."
Today AI Light offers two main products. The first is a free WhatsApp bot to which you can forward or upload images and videos and receive a binary verdict: authentic content, or a fake created by AI or edited in Photoshop. The second is a paid online system that accepts images, videos, and texts and returns a detailed analysis of each: whether the content is generated, at what level of certainty, and, if so, which model was used to create it. The system also identifies edits made to visual content, pinpointing which areas of an image or segments of a video were manipulated. For text, it additionally analyzes whether the factual information presented is correct.
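The article does not describe the paid system's output format; purely as a reading aid, here is one hypothetical way the detailed analysis described above could be structured. Every field name is an assumption of this sketch, not AI Light's API.

```python
# Hypothetical report shape, inferred only from the capabilities the article
# lists: a verdict with certainty, the suspected generator model, manipulated
# regions, and fact-check results for text. Requires Python 3.10+.
from dataclasses import dataclass, field

@dataclass
class RegionFinding:
    bounding_box: tuple[int, int, int, int]  # x, y, width, height in pixels
    note: str                                # e.g. "spliced face", "cloned texture"

@dataclass
class AnalysisReport:
    verdict: str                     # "generated" | "manipulated" | "authentic"
    certainty: float                 # confidence in the verdict, 0.0 to 1.0
    suspected_model: str | None      # generator model, when identifiable
    manipulated_regions: list[RegionFinding] = field(default_factory=list)
    # For texts: (claim, is_supported, supporting_source) triples.
    factual_claims: list[tuple[str, bool, str]] = field(default_factory=list)
```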
"Some of our users just want to know what's real and what's not," said Sarnio. "There are lots of Tinder pictures that people want to know if they're authentic. It makes sense, it's a very sensitive place. There's lots of checking of news stories. Lots of images were uploaded of people walking on the Temple Mount with children on their shoulders and the children holding weapons. Lots of people checked these images. Someone sits, maybe in Saudi Arabia, thinking why would someone teach their child to walk around with a weapon? It's probably fake. But no, the images aren't fake and our system says so."
Another feature of the system allows copyright holders to run broad network monitoring for unauthorized use of their content, including creations based on the original. Content can be uploaded individually, or a social media account can be connected for automatic monitoring of everything uploaded to it.
"We started developing the feature following a request from a media outlet," said Sarnio. "They said, once we release a story, someone can cut it, take three frames from it, use it, and we'll never know. Another person we helped was an Israeli fashion startup. They did a photo shoot, photographed a model with all their clothes, and a competitor in Korea took all the photos, made deepfakes on the faces, and published the same photos. They sued him, and then wanted to know if it's possible someone else did this. They used the system, and we found lots. The entrepreneur was amazed that he doesn't need to search manually, and that if someone else uploads his content he'll know."
Rosenthal: "Another point where we see lots of need is explainability. A system that can say, I think this is 50% fake isn't good enough. Our system can explain why it thinks so, whether because of a specific area in the image or something else. The ability to provide an accessible explanation to the user provides significant value."
There are, as mentioned, other companies operating in this field, some of them more veteran, with more mature products, more money, and more manpower. What's your advantage?
Sarnio: "We haven't seen a company that combines text, image, and video. There are companies that do only text, companies that do real-time deepfake but only on Zoom. Our system can take a video, extract its text, analyze the frames and provide a detailed explanation of the findings. The ability to stand above and manage everything, and in real-time, that's where the greatest advantage lies. There are many very technical tools, but their accessibility, this ability to look from above, manage everything and provide this feedback, is unique."
We're in a time when often those promoting false narratives and fake news are the government itself, the state. Are you a counter-force to this?
Sarnio: "We didn't ask to be a counter-force to the government, but targeted the truth. The idea is to enable democratization of super-technical tools that most people can't access. We don't interfere with the technical results of the search. A government can distort reality, we don't touch that. It could be completely against Israel, against Maureen or Guy, we don't interfere there. It's a point that's very much needed in the world, especially in a world with Trump and Musk, who hold lots of power together. After Facebook canceled fact-checking and dramatically reduced content monitoring, people are super-exposed. They can't know what's true and what's not. Here you'll get the truth because that's how it is."
Your solution is ultimately technical. But the problem with misinformation is that people simply believe what they want to believe, and nothing you say will change that.
Sarnio: "It all started from what my mother received, and she won't go to Google and search. The accessibility, that you forward and get an answer, is super-significant for the average person. As time passes, more people understand they need to verify the content they see, regardless of the source."
Rosenthal: "Even if only 10% of people have the responsibility or desire to go and do this small additional thing to check, and we give them the ability and tools to do it, I think we've done our part."