
Opinion
Startups, prepare for your AI due diligence

"With great power comes great responsibility, and irresponsible use of AI, especially in startups, can have far-reaching consequences, impacting the company's valuation and its ability to be sold," writes Adv. Assaf Harel

AI is becoming an integral part of our professional environment, as more and more companies, and their employees, use generative AI tools such as ChatGPT and Copilot in their daily work. The advantages of these tools are considerable: they increase productivity, save costs, and create new opportunities. However, with great power comes great responsibility, and irresponsible use of AI, especially in startups, can have far-reaching consequences, impacting the company's valuation and its ability to be sold.
The risks associated with generative AI tools include concerns related to intellectual property, confidentiality, privacy, and quality. For example, using generative AI to develop software could raise questions about the company’s ability to claim ownership of that software. Moreover, employees may inadvertently disclose the company’s sensitive confidential information to third parties by feeding it into generative AI tools as prompts. Similarly, inputting personal information into generative AI tools could violate applicable privacy laws, exposing the company to significant fines and legal action. Furthermore, generative AI tools, while capable of generating content at scale, may produce inaccurate or misleading content that, absent proper human oversight, could create defects in the company’s products or services, ultimately harming those who rely on them. Recognizing these risks, many companies, including major players like Apple, Samsung, and Verizon, have restricted their employees’ use of AI in an effort to mitigate these challenges and promote responsible AI practices.
Assaf Harel (Photo: Yehonatan Blum)
Questions you will be asked about the company’s use of AI
Investors are becoming increasingly aware of the legal, reputational, and other risks associated with the use of AI, and they understand that how a company handles those risks, and in turn its potential exposure to costly legal complications, can affect the company’s value. How you use AI will therefore inevitably be scrutinized in pre-investment and exit due diligence. Investors will demand assurance that your organization employs AI responsibly. They will want to know that you have implemented a policy restricting employee use of generative AI and addressing concerns such as confidentiality, privacy, intellectual property, and quality. If your business provides AI-based services or creates AI tools, you will also be expected to demonstrate accountability, transparency, fairness, and safety in your operations. Additionally, you will need to show what measures you have taken to comply with emerging AI laws.
In that regard, the regulation of AI is no longer a distant possibility but an imminent reality. The European Union's comprehensive AI Act is expected to be approved by the end of this year. In the United States, lawmakers are engaged in heated discussions about the need for AI regulation, and several states have already adopted laws restricting AI usage. Moreover, Big Tech companies already have AI policies and AI-related codes of conduct in place that they consider when making investment decisions. Notably, Google CEO Sundar Pichai recently announced that Google would collaborate with other companies on a voluntary basis to ensure that AI products are developed safely and responsibly ahead of formal AI regulation.

Mitigating risks through the implementation of an AI policy
Responsible AI use can therefore affect the valuation of your startup, and it requires a proactive approach. A key step in addressing the risks arising from employee use of generative AI is to develop a comprehensive policy with clear guidelines for employees. Such a policy should clarify which uses of generative AI are prohibited or restricted (e.g., inserting confidential or personal information into prompts). The policy should also impose internal documentation requirements for content created with generative AI and define a process for assuring the quality of such content. Additionally, the policy should address the risks arising from the use of generative AI by vendors providing services to the company, and the measures to mitigate them (e.g., imposing contractual restrictions on a vendor’s use of generative AI in providing the services). Furthermore, if your business creates AI tools or provides AI-based services, you will also need to implement a policy designed to ensure accountability, transparency, fairness, and safety in your operations, including through regular assessment and improvement of AI algorithms, adherence to ethical guidelines, and steps to avoid bias. But adopting a written policy is not enough: it is vital to communicate the policy to employees, to monitor its implementation continuously, and to conduct periodic training to ensure it is applied effectively throughout the company.
In conclusion, the manner in which your company uses AI can affect its valuation and future prospects. If you are a startup seeking investments, there will come a day when you will be asked what steps you have taken to address AI-related risks and to safeguard the company’s value. By embracing responsible AI use, setting robust policies, and complying with emerging AI laws, you can build a strong foundation that will help you prepare for those challenges.
The author is a partner at Gornitzky GNY and leads the firm's Cyber & Privacy practice.