
Israeli Court Outlines How AI-Generated Documents Should Be Introduced

Client Updates / January 01, 2025

Written by: Haim Ravia, Dotan Hammer

In a unique decision, the Haifa Magistrate Court in Israel ruled that documents generated by artificial intelligence (AI) may be misleading and could compromise the independent judgment of a court-appointed expert. While the decision is not a binding precedent, it signals Israeli courts’ possible skepticism towards AI-generated evidence, particularly where impartiality is mandated.

The legal dilemma arose in an ordinary road accident lawsuit. The plaintiff, suing the insurance company, complained of physical pain. The court thus appointed an expert to assess the plaintiff’s medical condition. The defendant, Clal Insurance, submitted an AI-generated “Personal Injury Claim Summary” document produced by DigitalOwl, an Israeli AI company. The document analyzed the plaintiff’s medical history, assessed the severity of the injuries, and highlighted key findings. Alongside this summary, the defendant included raw medical records annotated and emphasized by the AI system.

The court rejected the plaintiff’s motion to strike the document outright as a biased opinion crafted on the defendant’s behalf. Nevertheless, it prohibited the court-appointed expert from reviewing the AI-generated document, citing concerns about the inability to discern possible bias in the AI’s outputs, the extent of that bias’s influence on the document, and the influence the defendant may have exerted on the AI’s output through the prompts it gave the system.

The court ordered that the AI-generated summary be disregarded and that the raw medical records be submitted unaltered, without any annotations or highlights.

Click here to read the full text of the court decision (in Hebrew).
