For Law and Lawyers, Artificial Intelligence Spells a Major Headache
Large law firms are built on inefficient business models characterized by outdated pricing practices, such as the billable hour. AI software, on the other hand, aims to be efficient and cost-effective
The field of law is ripe for innovation.
To wit: In late July, LegalZoom, a state-of-the-art online platform designed to connect small businesses to legal representation, announced a $500 million funding round; it is now valued at $2 billion.
Artificial intelligence (AI) is one of the best emerging tools to lead that innovation. You can’t read the daily paper without coming across an article about some new implementation of AI.
Notably, AI already has a foothold in the legal sector, and that foothold is growing. Israel, in particular, excels in developing AI tools for the practice of law, ranging from case analysis to business management to compliance with complex regulatory schemes.
This growing use of AI in law will have many implications: changes to the practice of law itself, shifts in the social norms that undergird areas of law and its application, and even the creation of novel questions for the law to answer.
All of AI’s promise notwithstanding, law, as an industry, remains particularly slow in its uptake of AI tools.
Why? As businesses, law practices, and especially large law firms, are built on models of inefficiency characterized by built-in redundancies and outdated pricing, e.g., billable hours. AI software, on the other hand, aims to be efficient and cost-effective. But it is not just about the money. Law firms are also risk averse and, particularly given their bespoke, aging legacy computer systems, often unwilling to make technological leaps that are not seen as absolutely necessary. However, as AI technology becomes more entrenched in boutique law practices, we might see the law itself, particularly rules relating to malpractice, force mainstream law firms to adopt otherwise undesired technology simply out of fear that failure to exploit state-of-the-art tools might open them up to negligence lawsuits.
There is another good reason many law firms are loath to be early adopters of AI: AI feeds on tremendous amounts of data. That data must be accumulated and warehoused on the law firm’s hardware, including personal cell phones and laptops. But with even democratic countries demanding warrantless access to cell phones and laptops at border crossings, customs officers’ access to that data can breach attorney-client privilege, creating additional exposure to lawsuits.
It is not just law firms that need to change in the wake of AI; so does the law itself. For example, freely available AI software can be trained to replicate any voice. Further, UC Berkeley researchers recently announced that they can extract body movements from a source video and apply them to subjects in a target video. These open-source, relatively computationally light programs can create realistic, albeit benign, videos of you dancing, or convincingly make a president appear to say something they never said. Not only are there questions of intellectual property rights over your visage, but consider the possibility that evidence relying on video and audio recordings of real people may no longer accurately reflect reality. A convincing enough fake could even spark international conflict.
It is not just the law that will have to update itself to accommodate new AI realities. Even longstanding assumptions that underpin our legal system may become outdated in light of AI. Discussions of autonomous vehicles are quick to raise the concern that the AI driving the car may be at fault for torts or crimes. AI also challenges our current application of legal personhood: if a corporation can have free speech, criminal liability, and even religion, why not an AI?
There are even more timely concerns. Many advanced prosthetic devices include AIs that help disabled users by predicting movements, based on past histories of similar movements, and then producing the predicted movement via robotics, with or without the individual’s active approval. Consider the likely scenario in which an AI kicks in to complete a predicted standard action but, instead of completing that action, commits a tort. Who is responsible? Pushed further, there are existing AIs that are directly integrated into the minds of disabled individuals. These brain-machine interfaces (BMIs) are designed to decipher brain activity and convert neuronal impulses into actions effected by a prosthetic. Research has suggested that the AI is optimized when it is plugged into a part of the brain associated with the preconscious. If that AI interprets neural activity and commits a crime, is the disabled person responsible, or is the neural activity of a preconscious area of the brain akin to the defensible unconscious acts committed while asleep or sleepwalking? Unpacking the legal liabilities in these scenarios may call into question fundamental principles of criminal law, such as the binary understanding of consciousness and unconsciousness vis-à-vis liability.
AI also creates brand-new questions. Consider, for example, privacy law. Under most privacy regimes there is a definitive distinction between standard private data, for example your address, and highly personal information, say your health records. However, AI, in conjunction with big data, has shown its ability to correlate, or even determine, very personal information from a sufficient amount of plain-vanilla private data. As such, new privacy laws need to acknowledge that this dichotomy is no longer tenable.
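For readers who want a concrete sense of how mundane data can reveal personal information, here is a minimal sketch in Python, using entirely invented records: even with names removed, a handful of ordinary attributes often singles out individuals, letting anyone who knows those attributes (say, from a public voter roll) link a record back to a person and learn the sensitive field attached to it.

```python
# Toy illustration with invented data: ordinary demographic fields can act
# as "quasi-identifiers" that single out individuals in a nominally
# de-identified dataset, exposing the sensitive field alongside them.
from collections import Counter

# A nominally de-identified health dataset: no names, just plain-vanilla
# attributes plus one sensitive field (all values are made up).
records = [
    {"zip": "10001", "birth_year": 1980, "sex": "F", "condition": "asthma"},
    {"zip": "10001", "birth_year": 1975, "sex": "M", "condition": "diabetes"},
    {"zip": "10002", "birth_year": 1980, "sex": "F", "condition": "none"},
    {"zip": "10003", "birth_year": 1990, "sex": "M", "condition": "none"},
    {"zip": "10003", "birth_year": 1990, "sex": "F", "condition": "arthritis"},
]

# Count how many people share each (zip, birth_year, sex) combination.
groups = Counter((r["zip"], r["birth_year"], r["sex"]) for r in records)

# A record is re-identifiable when its combination is unique: knowing those
# three mundane facts is enough to pick out one person's "condition".
unique = [r for r in records if groups[(r["zip"], r["birth_year"], r["sex"])] == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

In this toy dataset every record's combination of ZIP code, birth year, and sex is unique, so the supposedly innocuous fields alone pinpoint each person. Modern AI systems perform far subtler versions of this linkage at scale.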
As long as we keep in mind its limitations and society keeps pace with the changes AI will mandate, these new technologies can be a force for good, even within the legal field.
Dov Greenbaum is a director at the Zvi Meitar Institute for Legal Implications of Emerging Technologies, at Israeli academic institute IDC Herzliya.