Will AI Ever Win a Golden Globe?
While artificial intelligence can write anything from code to screenplays to fake news, it seems only humans can be considered authors. Thus, humans are also the only ones who can be held liable for spreading misleading content
With the Golden Globes behind us, the Hollywood award season is now in full swing, culminating in the 92nd Academy Awards, also known as the Oscars, in early February. This time of year is an opportunity to examine how online streaming services, with their seemingly bottomless budgets, continuously pressure traditional film studios to step up their game and create both legitimate Oscar contenders and fan favorites meant solely to fill their coffers.
Most recently, Warner Brothers, the Burbank, California-based entertainment giant, has become the latest studio looking to artificial intelligence to help it manufacture such winners. Last week, the studio announced a deal with AI film management startup Cinelytic Inc. intended to help it with the less glamorous aspects of film picking and production. Cinelytic claims to use AI to provide relevant movie ideas, offering, for example, insights for optimizing content for the best returns, as well as working with film studios to determine best practices for financing, producing, and distributing new films. Other firms in the same field crunch data from various sources to suggest talent for films, or brands that said talent might successfully hawk.
There have also been a number of efforts to involve AI in more creative, if still off-screen, aspects of moviemaking. These include assessing the potential success of a film based on its script, a service offered by Antwerp, Belgium-based startup ScriptBook NV, and even writing screenplays from scratch, albeit sometimes with limited success.
While it remains unclear if an AI can legally be an author within the limitations of various copyright regimes—for example, can it be granted legal authorship over its scripts and, arguably more importantly, get a screenwriting credit—it is clear that under current European patent regulation, it cannot be an inventor. This legal principle was demonstrated last month, when the European Patent Office refused patent applications listing an AI called DABUS as an inventor, stating that in order to qualify as such it would have to be a human and not a machine.
However, in addition to being barred from counting as an author or inventor in the movie industry, AI faces restrictions on its output elsewhere: Facebook recently banned deepfake videos created by AI ahead of the upcoming U.S. election season. Facebook specifically disallows videos that are the "product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic." Facebook's ban, however, is not comprehensive, and will still allow "content that is parody or satire, or video that has been edited solely to omit or change the order of words."
Some have argued that Facebook does not go far enough in keeping fake news off its platform and that many questionable videos will still get past these restrictions. Conversely, a recent study suggested that the impact of fake news and video on Facebook has actually been falling rapidly since the 2016 U.S. election, in contrast to Twitter, where it continues to rise. Some have suggested that this may be the result of Facebook's successful flagging of suspected fake content on its platform.
For some, however, Facebook's efforts are not enough. Among them is the Singaporean government that recently passed the Protection from Online Falsehoods and Manipulation Act, aiming to curb the publication of fake news on social media platforms. Although the government has only employed the law on a handful of occasions since it came into effect in October, it is already being challenged in court concerning at least one of its implementations.
Taking a different approach, Wikipedia, the crowd-sourced encyclopedia (itself, ironically, a less than trustworthy source), maintains a blacklist of sources deemed too unreliable to be cited in its articles, given the perception that these sources are intentionally involved in the distribution of fake news. While it has been suggested that this practice has helped Wikipedia limit the amount of fake news on its website, some also suspect that the blacklist is politically biased.
Regardless of your own political views, however, it is clear that fake news can create panic and anxiety, particularly at times of heightened tensions. Consider the aftermath of the American killing of Iranian general Qassem Soleimani earlier this month. The website of the U.S. Selective Service, the agency responsible for military conscription, crashed soon after, when rumors surfaced on Twitter and TikTok that the assassination could mean the reinstatement of the mandatory draft. The panic was compounded by reports of an Iranian counterstrike that supposedly killed 20 Americans, reports that themselves proved to be fake news.
To combat this and other types of fake news, the internationally distributed American newspaper USA Today published a list of suggestions to help readers work through the deluge of less-than-trustworthy information. But the list, which includes time-consuming suggestions such as following up on claims and reading undigested information from primary sources, is unlikely to be followed, especially by those most prone to being fooled by fake news, namely, the lazy.
Lazy social media users create another complicated situation associated with fake news: libel. Many courts, including in Israel, have begun to consider how the intentional dissemination of a fake story on social media should be interpreted in light of decades-old libel laws, in particular, whether liking or sharing a libelous fake news story could create liability for said liker or sharer. Courts in some jurisdictions have found users liable for simply liking or commenting on a post. Other courts, including Israel's Supreme Court, in balancing free speech and defamation, have distinguished between liking a post, which may or may not cause the defamatory fake news to reach more people via their social media feeds, depending on the vagaries of algorithms, and sharing a post, which could be construed as an actual defamatory publication under the law.
Fake news can also thrive if people choose to trust AI-predicted winners of the Golden Globes and other awards over the formal reports of the actual winners. Given the many acerbic remarks of host Ricky Gervais at the recent Golden Globes, it would be interesting to see whether sharing his comments on social media, a viral Hollywood roast that he claims gained him hundreds of thousands of new Twitter followers, rises to the level of libelous fake news.
Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.