
Google play? AI for the masses toys with online advertising business

The generative AI tool that Google rushed to deploy to the general public is used to create fictitious news sites, which divert good money from advertisers. NewsGuard found that ads from 141 global brands were planted on such sites, 90% of them by the Google Ads service

Like a snake swallowing its own tail, the generative artificial intelligence tools that Google itself rushed to deploy to the general public are being used to interfere with its advertising business, polluting the web along the way. A study by NewsGuard, a for-profit company that rates the credibility of news sites, found that digital advertising companies like Google are still not keeping up and are failing to protect their advertisers from fictitious news sites. In practice, they place paid ads on sites flooded with synthetic content generated by artificial intelligence, whose sole purpose is to win their creators user traffic and a slice of the digital advertising pie.
The study found that at least 141 global brands advertise on fake websites created by generative artificial intelligence, and 90% of these ads were posted by Google's ad service, Google Ads, which is the largest digital advertising platform in the world.
Google CEO Sundar Pichai
(Photo: Bloomberg)
Last year, Google's revenue from advertising amounted to $225 billion. According to Google's ads policy, websites may not place ads displayed by Google on pages that include automatically generated spam content, which is defined, among other things, as content created without producing anything original or adding sufficient value.
The sites identified produce a high volume of content on a wide variety of topics, in bland language full of recurring phrases that have become a hallmark of text generated by artificial intelligence. The sites carry generic names that often include the word ‘news’; some of their content is fabricated, though not inherently misleading, and some is a rewrite of original stories from trusted sites.
According to the study, ads are placed on the fictitious news sites through what is known as "programmatic advertising", a popular form of targeted advertising in which advertisers do not actively choose where their ads run; instead, automated systems "follow" internet users and place ads on sites according to predetermined parameters. As a result, advertisers often do not know where their ads appear. The cost of the service ranges from $1 to $5 per thousand impressions (CPM).
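The CPM pricing described above can be illustrated with a rough back-of-the-envelope calculation. The $1 to $5 range is the study's; the impression volume below is a hypothetical example, not a figure from the report:

```python
# Back-of-the-envelope CPM math: CPM is the price an advertiser pays
# per 1,000 ad impressions. The $1-$5 range is from the study; the
# traffic figures below are hypothetical, for illustration only.

def ad_revenue(impressions: int, cpm: float) -> float:
    """Revenue earned for a given number of impressions at a given CPM."""
    return impressions / 1000 * cpm

# Suppose a fake-news site posts 1,200 articles a day averaging
# 500 page views each (hypothetical numbers):
daily_impressions = 1200 * 500  # 600,000 impressions

low = ad_revenue(daily_impressions, 1.0)   # at $1 CPM
high = ad_revenue(daily_impressions, 5.0)  # at $5 CPM
print(f"Daily revenue: ${low:,.0f} to ${high:,.0f}")
# -> Daily revenue: $600 to $3,000
```

At near-zero production cost per article, even the low end of that range explains why the model is attractive to operators of such sites.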
This method has long been a source of fraud. In recent years, "content farms" have sprung up in which people work for low wages filling fake websites with content, while "click farms" run large numbers of videos and ads to extract as much money as possible from advertisers. Over the years, the operators of content farms have developed more elaborate methods: they learned to optimize for search engines so that their sites rank high in search results, and to identify which materials, headlines and names attract the most users and the most advertising revenue. By some estimates, advertisers spend about $13 billion a year on such sites.
Since free-to-use generative artificial intelligence tools were launched in late 2022, including OpenAI's ChatGPT and Google's Bard, content farms have become more capable, producing more content, faster, at lower cost and across multiple sites. According to the study, which sampled only four countries (the USA, France, Italy and Germany), between May and June of this year at least 25 new such sites were found every week, some of them publishing about 1,200 new "articles" a day, all created by bots using generative artificial intelligence tools. In total, over a short period, 217 such sites were found in 13 languages.
The study's identification method is also quite limited: it relied on automated text searches for error messages left behind by the chatbots. On CountyLocalNews.com, for example, such an error message was found embedded in the "news" itself: "Sorry, I cannot fulfill this instruction because it is against ethical and moral principles... As an artificial intelligence language model, it is my responsibility to provide factual and reliable information."
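The detection approach the study describes, scanning page text for boilerplate refusal phrases that chatbots leave behind, can be sketched roughly as follows. The phrase list and sample text here are illustrative; NewsGuard's actual search terms and tooling are not public at this level of detail:

```python
# Minimal sketch of the error-message heuristic described in the study:
# flag a page if its text contains a telltale AI refusal phrase.
# The phrase list is illustrative, not NewsGuard's actual term list.

AI_ERROR_PHRASES = [
    "as an ai language model",
    "as an artificial intelligence language model",
    "i cannot fulfill this instruction",
    "i cannot provide information about",
]

def looks_ai_generated(page_text: str) -> bool:
    """Return True if the page text contains a known chatbot error phrase."""
    text = page_text.lower()
    return any(phrase in text for phrase in AI_ERROR_PHRASES)

# The error message quoted in the article from CountyLocalNews.com:
sample = ("Sorry, I cannot fulfill this instruction because it is against "
          "ethical and moral principles... As an artificial intelligence "
          "language model, it is my responsibility to provide factual and "
          "reliable information.")
print(looks_ai_generated(sample))  # True
```

A heuristic like this only catches sites careless enough to publish the raw error text, which is exactly why the study's count of 217 sites should be read as a lower bound.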
One article on AlaskaCommons.com, NewsGuard notes, was a rewrite of a story from the British news site The Sun. According to the study, "Near the end of the AlaskaCommons.com article, the site wrote 'How to Get Free Bets on Football,' the same subheadline as The Sun, above a paragraph that repeated an AI error message: 'As an AI language model, I cannot provide information about free bets on football. Please refer to trusted news sources or betting websites for more information.'" In a single week sampled for the study, June 9 to June 15, AlaskaCommons.com published no fewer than 5,867 articles; "Ingrid Taylor" was one of the bylines that starred on many of them, credited on 105 articles that week. Such blatant errors, NewsGuard points out, occur because the sites are created and run without any human supervision.
The study chose not to name the advertising brands: "Because it is likely that none of the brands or their ad agencies had any idea that their advertisements would appear on these unreliable, AI-driven sites, NewsGuard is not naming them. But they include a wide variety of blue chip advertisers: a half-dozen major banks and financial-services firms, four luxury department stores, three leading brands in sports apparel, three appliance manufacturers, two of the world's biggest consumer technology companies, two global e-commerce companies, two of the top U.S. broadband providers, three streaming services offered by American broadcast networks, a Silicon Valley digital platform, and a major European supermarket chain."
The fictitious sites come in a variety of forms. As noted, some pose as news sites, some rewrite articles, and others offer medical advice. On MedicalOutline.com, for example, which carries "articles" with titles like "Can lemon cure skin allergies?", "What are the five natural remedies for attention deficit hyperactivity disorder?" and "How can you prevent cancer naturally?", the study found ads "for two U.S. streaming video services, an office-supply company, a Japanese automaker, a global bank based in New York, a pet supplier, a vitamin shop, a diet company, and a vacuum manufacturer."
It is not clear how much money such new sites manage to attract, but what is already clear is that mass-producing content with generative artificial intelligence significantly lowers costs for the creators of the fake sites and encourages them to continue down this path. The bigger problem is that this fake content competes with trusted platforms for users' attention.
Google Israel stated: "We focus on the quality of the content, not the way it is created, and we block or remove ads if we detect violations."