Blind reliance on AI is not good for healthcare

Artificial intelligence (AI) helps us better understand medical conditions and is already used, with the Health Ministry’s blessing, to research and develop therapies and technologies. However, advocate Matan Menachem, a member of the Ethics Committee at Shaare Zedek Medical Center and a healthtech consultant, warns of the tragic mistakes these sophisticated systems could lead to

Matan Menachem | 10:38, 03.02.22

In his brilliant film Ex Machina, Alex Garland tells the story of Nathan, CEO of the company behind the world’s most popular search engine, Bluebook (the equivalent of Google). Nathan used AI to create Ava, a humanoid robot, by mining the search queries and data on Bluebook users’ smartphones as indicators of the way humans think.


While the film is listed in the sci-fi category, reality shows that AI is anything but fictional, especially in digital health. Many health and medical companies are developing AI that leverages patient information to build healthcare programs and identify possible links between physiological and pathological symptoms, leading to better diagnoses. AI allows us to predict medical conditions, test the feasibility of new therapies, and develop personalized medicine. For example, a physician who scans the brain of a patient with dementia can rely on a program that analyzes scans of millions of dementia patients, identifies pathologies in that vast database, and points to the pathology in the patient’s brain.

Adv. Matan Menachem. Photo: Ofir Farkash


Hundreds of thousands of clinical trials using AI are underway today. Even the largest corporations, whose core business is not healthcare, are charging into this area in full force. In other words, market forces testify that AI is not the future but the present. The Food and Drug Administration (FDA) identified the trend early on, announcing that it will fund regulatory and innovation trials in 2022 with a particular emphasis on AI developments.


The last two years have shown that the Israeli healthcare system enjoys many advantages in the field of digital medicine. The centralization of the healthcare system, once a target of frequent criticism, proved its strength in organizing, collecting, analyzing, and using the medical information of all Israeli citizens to implement the anti-COVID plans, and in this regard the Israeli healthcare system opened a significant lead over the world’s most advanced countries. For good reason, Pfizer chose Israel as the first country in which to test the efficacy of the anti-COVID vaccine on an entire population.


These advantages are also evident in the policy adopted by Israel's MoH on secondary use of health information, namely using health information for purposes other than medical care (e.g., research or policy setting). This policy won international recognition in a report published by the Open Data Institute that listed Israel among the world leaders in secondary use of health information.


Secondary use of patients’ information forms an excellent basis for AI development (precisely like the search queries of Bluebook users in Ex Machina).


Israel’s MoH has not lost sight of the current trend and is adapting AI to the Israeli healthcare system. Excellent work has been done with ventures such as Psifas and Timna, which merge different types of health information into a single integrated system to advance information infrastructure, support big-data research, and enable personalized medicine. Recent reports say the MoH has even used AI in its effort to break chains of infection.


Back to Ex Machina (spoiler alert). The film ends with Ava murdering Nathan, claiming her independence, and escaping to a future that is better for her, using the personality she shaped for herself with AI. In reality, AI will not murder patients, but it may err because of its inherent limitations as a system built on statistical models that data errors can taint. Thus, the input of wrong data, badly structured fields, imprecise database integration, and algorithms with built-in biases toward a specific therapy may all lead to significant errors and result in tremendous damage at the patient or societal level. For example, a faulty integration or processing of test results could lead to the wrong policy being applied in fighting an epidemic.


Therefore, as AI advances, the MoH must remain aware of AI’s limitations, weigh its implementation in the healthcare system against the risks that blind reliance on AI might create, and require healthcare organizations to adopt means and mechanisms that reduce these risks as far as possible.


These means could include early analysis and mapping of the vulnerabilities of implementing an AI system on a specific database, to determine in advance which results require additional review. Alternatively, they could require installing a system that regularly monitors and audits the AI system.


External control mechanisms can also be implemented, such as a parallel decision-making process disconnected from the AI system, or a medical procedure requiring a human medical decision before the system is consulted, to prevent the AI from biasing medical discretion. Finally, the use of the AI system could be restricted to decision support only.


Reality warrants both mechanisms that prevent AI errors and well-considered use of AI in the healthcare system, to protect the health of Israel's population.


Advocate Matan Menachem is a partner at the Yehuda Raveh and Co. law firm, a member of the Ethics Committee at Shaare Zedek Medical Center, and a healthtech consultant.