Opinion

Regardless of AI regulation - heightened AI legal risks are already here

"The regulatory developments and enforcement proceedings clearly indicate: AI systems invoke new risks to senior management," writes Vered Zlaikha, a partner and Head of Cyber Affairs & Artificial Intelligence Practice at Lipa Meir law firm.

Whether dedicated AI legislation is adopted or not, the legal and regulatory developments in Israel and abroad make it clear: Organizations implementing AI systems must manage their legal risks and ensure the proper use of such systems - or be held liable.
A bill was recently introduced in the US Congress by Republicans that would prohibit state legislation limiting the use of artificial intelligence (AI) systems. The move is a counter-response to an emerging trend in certain US states of advancing their own AI legislation.
Vered Zlaikha. (Photo: Aya Ben Ezri)
Israel, like the UK and the US, has so far chosen not to enact a dedicated, comprehensive AI law, as the EU did last year. That does not mean, however, that the use of AI systems is free of legal risk.
Regulatory developments and legal proceedings in Israel and abroad illustrate the practical questions now demanding decisions from boards of directors, legal advisors, and regulators worldwide; they also underscore the importance of raising awareness among senior management and of calculated risk management for AI systems.
As AI systems push deeper into the heart of organizational activity, legal exposure grows - especially when a system errs, infringes rights, or operates without sufficient oversight. Right now, legal and regulatory proceedings interpreting existing laws are marking the way and outlining the “rules of the game” for AI.
A recent US case demonstrates the potential exposure of companies that provide AI systems. Last month, the Federal Trade Commission (FTC), the agency mandated to protect consumers in the US, filed an administrative complaint against a company offering an AI system designed to identify AI-generated content. The complaint alleges that the company publicly promoted the system as 98% accurate, while independent tests found its actual accuracy to be only 53%. The company must now provide scientific, reliable proof of the claimed accuracy or bear liability under a proposed FTC order that would bar it from publishing misleading representations about the system’s accuracy, require it to preserve evidence of the system’s effectiveness, oblige it to notify customers of the proceeding and settlement, and require annual compliance statements to the FTC.
Several US legal proceedings also allege breaches of California privacy legislation by tech giants and technology companies that supplied organizations with AI tools for communicating with customers (in one case, for customer communication and service improvement; in another, for voice recognition and biometric authentication to prevent fraud). It was further alleged that the AI system providers used call recordings to train and improve their models.
These proceedings make clear that, alongside the many advantages of using AI in customer service, such use must comply with privacy laws, and that the organization deploying the AI tool and the system provider should agree on the use and security of the data, alongside disclosure and transparency toward customers where circumstances require.
In this regard, the Israeli regulator has also recently weighed in. The Privacy Protection Authority published a draft directive on the applicability of the Protection of Privacy Law to AI systems, addressing, among other things, the duty to notify and obtain informed consent with regard to AI systems, as well as data security aspects, data scraping issues, and more. The draft directive is currently open for public comments.
Although the draft directive appears to go too far in certain respects - making the public comment process all the more significant - it may also serve as a catalyst for raising public and industry awareness of the privacy risks inherent in the use of AI.
Legal exposure is not limited to providers of AI systems; organizations deploying them face potential exposure as well.
For example, legal proceedings are currently underway in the US against a health insurance company, alleging that its use of AI tools led to the denial of insurance claims for medical and nursing care. The plaintiffs argue that instead of decisions based on the judgment of healthcare professionals, as the company had undertaken, the insurer, without due disclosure, authorized an AI tool to decide, leading to the serial denial of claims and harming policyholders’ health. In February, a Minnesota judge allowed the case to proceed on the claims of breach of contract and breach of the duty of good faith.
The regulatory developments and enforcement proceedings point clearly in one direction: AI systems pose new risks for senior management. Reasonable, non-negligent conduct by the organization, and the application of corporate law’s well-known “business judgment rule”, underscore the need to consult experts, maintain documented processes, and conduct preliminary risk assessments addressing, among other things, technological documentation, AI system accuracy, and terms of use.
Who will bear liability for an AI system - the system provider or the organization deploying it? Generally, liability will be determined according to the applicable law in the specific context. At the practical level, the importance of the agreement between the parties is clear. Where personal data is involved and the system provider has access to that data, the deploying organization is under regulatory duties to conclude a data security agreement with the vendor. Such an agreement also helps manage additional legal risks, such as those concerning system accuracy, decision-making by the AI system, and explainability.
Adv. Vered Zlaikha is a partner and head of the Cyber Affairs & Artificial Intelligence practice at the Lipa Meir law firm; she is also a member of the Experts Forum advising the government on Policy & Regulation of AI.