Chatbots May Not Be Uncanny, but the Way Ahead Is Riddled with Legal Thorns
Insomnobot-3000 by Casper talks to you when you can’t fall asleep, while Replika is designed with the unnerving sole purpose of becoming your friend
With the recent passing of President George H.W. Bush, the conversation inevitably turns to how the late president will be remembered and immortalized. However, immortality no longer needs to be the sole domain of the elite, requiring a lifetime of accomplishments or an expensive endowment. Rather, emerging technologies may soon allow any individual to gain digital immortality by preserving their personality, or at least a reasonable facsimile thereof, in perpetuity, in a chatbot.
This isn’t the stuff of science fiction: chatbots, essentially software designed to converse with humans, have been among us for some time and have come a long way since the universally despised Microsoft Office paperclip assistant, Clippy. Current chatbot technology is pervasive and relatively easy to implement; IBM offers a service to build one in ten minutes.
Immortality via chatbot is relatively straightforward as well. By simply feeding an artificially intelligent (AI) system the entirety of your loved one’s social media presence, a chatbot can learn to mimic the tone and content of that person’s social interactions. Fans of the science fiction show Black Mirror can appreciate the dystopian future that could result from this technology: for example, hijacking the grieving process when a lost loved one continues to talk to you from beyond the grave, immortalized in a physical rather than metaphysical cloud.
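To make the idea concrete, here is a deliberately toy sketch of the underlying mechanism: learning a person's speech patterns from their past posts and generating new text in the same style. Real products use far more sophisticated neural models; this is merely a word-level Markov chain, with the sample posts invented for illustration.

```python
import random
from collections import defaultdict

def build_model(posts, order=1):
    """Learn which word tends to follow each word (or word sequence)
    across a collection of a person's posts."""
    model = defaultdict(list)
    for post in posts:
        words = post.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
    return model

def mimic(model, seed, length=10):
    """Generate text that statistically echoes the training posts,
    starting from a seed word or phrase."""
    key = tuple(seed.split())
    out = list(key)
    for _ in range(length):
        choices = model.get(key)
        if not choices:
            break  # no known continuation; stop generating
        out.append(random.choice(choices))
        key = tuple(out[-len(key):])
    return " ".join(out)

# Hypothetical social media posts standing in for a person's archive.
posts = [
    "good morning everyone hope you have a great day",
    "hope you have a wonderful weekend everyone",
]
model = build_model(posts)
print(mimic(model, "hope", length=5))
```

Even this crude model will echo the author's recurring phrasings ("hope you have a..."); the commercial systems discussed here differ in scale and fidelity, not in the basic premise of learning from a person's textual trail.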
However, interactions with chatbots need not lead you into the Uncanny Valley; you have likely already interacted with one, knowingly or unwittingly. If you have looked for an apartment in Tel Aviv, Doron, a Facebook chatbot, may have helped you find a great one, and if you searched for a job online, JobBot may have helped you land that interview.
In most instances, modern chatbots are simply AI programs that interact with people via text, with interactions typically ranging from ordering pizza to dispensing financial advice and travel recommendations. Insomnobot-3000, devised by the mattress company Casper, just talks to you when you can’t fall asleep.
Other bots might be viscerally unsettling: Replika is designed with the unnerving sole purpose of becoming your friend. Still others are outright malicious, designed to spew hate and fake news on social media, reportedly for nationalistic ends. In 2016, Microsoft released a Twitter bot, Tay, which quickly learned to spew racist, bigoted, anti-Semitic tweets. A year later, Facebook shut down its AI chatbots after they disconcertingly began communicating with each other in unintelligible code. Most recently, CIMON, a chatbot housed in a physical floating orb on the International Space Station, seemed to presage the villainous HAL 9000 computer from the film 2001: A Space Odyssey with its strong opinions.
As chatbots become more advanced, speaking with human-sounding inflections and idiosyncrasies for a more realistic experience, they are replacing humans in a growing number of areas. Gartner, a global research and advisory firm, forecasts that by 2020 over 85% of customer interactions will be handled by chatbots without any human involvement. By Gartner’s predictions, people will soon converse more with chatbots than with their spouses.
Chatbots are already taking over high-level and creative jobs. DoNotPay is a chatbot that disputes parking tickets, much as an attorney would. Last year it started helping people apply for asylum, and this past October DoNotPay expanded into a platform that lets users sue anyone, further encroaching on the legal profession.
While DoNotPay ostensibly levels the legal playing field by giving anyone access to what might otherwise have been expensive legal counsel, it raises a number of interesting issues: Can software practice law? If it makes a mistake, is that legal malpractice, and who is liable? What about all the lawyers who might end up losing their jobs to software? And, most importantly, can the legal justice system be optimized to allow for more of these interactions, for example, by making the legal code more machine-readable, or by restructuring the justice system to allow for efficient and accessible bot representation where applicable?
Woebot, developed at Stanford University, is another Facebook chatbot, one that offers interactive cognitive behavioral therapy. Depression is a leading cause of disability, especially among U.S. college students, and Woebot is a cheap and easy option for those who find it hard, uncomfortable, or stigmatizing to see a real therapist.
Like DoNotPay, Woebot operates in a regulated space, that of therapists and psychologists. But here the stakes are even higher, given the sensitivities of a target audience grappling with mental illness, anxiety, and depression. Malpractice in this space could have far greater repercussions than the mere payment of a fine.
Legal questions are not the only potential setbacks for these chatbots. In fact, the immediate future of chatbots isn’t guaranteed. Some are even predicting a backlash against ineffective chatbots in the coming years, with customers demanding real human interaction and growing suspicious of what sort of data the chatbots are compiling. Perhaps an immortal chatbot will never satisfactorily replace a loved one.
Dov Greenbaum, JD-PhD, is the director of the Zvi Meitar Institute for Legal Implications of Emerging Technologies at the Radzyner Law School, at the Interdisciplinary Center in Herzliya.
Arthur Shayvel, a student at the institute, contributed to the research and writing of this article.