On February 14, 2025, I will be speaking at the Suffolk Academy of Law’s annual Elder Law Update, addressing, among other topics, current developments in artificial intelligence (“AI”) that are relevant to trusts and estates practice.  In preparing for that presentation, I came across a recent Surrogate’s Court, Saratoga County, decision, Matter of Weber, in which the court held that a party’s counsel has an affirmative duty to disclose to the court that the party’s hearing evidence has been generated by AI.  I address the Weber court’s AI-related findings below.

AI has been “defined as being any technology that uses machine learning, natural language processing, or any other computational mechanism to simulate human intelligence, including document generation, evidence creation or analysis, and legal research, and/or the capability of computer systems or algorithms to imitate intelligent human behavior” (Matter of Weber, 220 NYS3d 620, 635 [Sur Ct, Saratoga County 2024]).  It “can be either generative or assistive in nature” (id.).  Generative AI is “artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples” (id.).  AI “assistive materials are any document or evidence prepared with the assistance of AI technologies, but not solely generated thereby” (id.).

In Weber, the court presided over an accounting hearing, in which the objectant proffered expert witness testimony concerning the calculation of the damages that the objectant sought (id. at 633).  During the expert witness’s testimony, it came to light that the expert had used AI to cross-check his damages calculations (id.).  The expert witness apparently “could not recall what input or prompt he used to assist him” in cross-checking his calculations with AI (id.).  “He also could not state what sources [the AI application that he used] relied upon and could not explain any details about how [the AI program] works or how it arrives at a given output” (id.).

Predictably, at least under those circumstances (and for other reasons as well), the court did not find the expert witness’s testimony to be credible (id. at 633-35).  In discussing the expert witness’s testimony, the court created a rule for the use of evidence that is generated by AI in Surrogate’s Court litigation (id.).  Specifically, the court wrote:

In what may be an issue of first impression, at least in Surrogate’s Court practice, this Court holds that[,] due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues[,] prior to evidence being introduced which has been generated by an artificial intelligence product or system, counsel has an affirmative duty to disclose the use of artificial intelligence and the evidence sought to be admitted should properly be subject to a Frye hearing [to determine whether the evidence is generally accepted in its field] prior to its admission, the scope of which should be determined by the Court, either in a pre-trial hearing or at the time the evidence is offered (id.).

In light of Weber, it appears that counsel representing a client in Surrogate’s Court litigation has an affirmative duty to disclose to the court that evidence upon which counsel will rely at a hearing has been generated by an AI product.  Doing so will allow the Surrogate’s Court to test whether the AI-generated evidence is sufficiently accepted in its field to be included in the evidentiary record at the hearing.  I anticipate that, as AI-generated evidence is proffered more frequently in Surrogate’s Court litigation, other Surrogate’s Courts will follow the standard that Weber appears to establish.