California State Bar to craft guidance on AI in the legal profession

While one recent survey found that legal professionals have confidence in AI's reliability, the American Bar Association has urged lawyers and courts to address the technology's potential pitfalls.

A California state bar committee will take up the hot-button issue of artificial intelligence and make recommendations soon for regulating its use in the legal profession.

Ruben Duran, chair of the bar’s board of trustees, formally asked the Committee on Professional Responsibility and Conduct, better known as COPRAC, to draft guidance for lawyers or potentially an advisory opinion by November.

Generative AI “has already changed the way that many lawyers practice law,” Duran said during a regularly scheduled board meeting on May 18. “There are many possible benefits of AI, including efficiencies and improved access to justice under the right conditions.”

“There are also possible risks, including disclosure of confidential information, inaccurate advice, AI hallucinations … not to mention who and how it could be held accountable for client and public harm,” Duran continued. “Given our public protection mission, of course, these are issues we cannot ignore.”

Many lawyers have embraced generative AI as a means of completing run-of-the-mill legal tasks more efficiently. A survey of 800 legal professionals by data analytics firm Outsell found that a large majority of respondents consider AI to be “generally reliable” or “extremely reliable.”

Outsell Vice President Hugh Logue told Legaltech News this month that the large number of professionals who described the technology as “extremely reliable” was troubling given its known inaccuracies and potential pitfalls.

“Something that may have been underestimated that the survey shows is that the problem isn’t about attorneys being risk-averse, it’s perhaps a little bit about having too much sort of blind faith in the emerging technology,” Logue said. 

Three years ago, the American Bar Association urged courts and lawyers to address “the emerging ethical and legal issues” tied to the use of AI in the legal profession. In February, the ABA’s House of Delegates adopted a resolution calling on AI developers to ensure their products remain under human control and to take responsibility for any harm they might cause. 

Two legal professionals who have studied the intersection of law and AI said they were unaware of any other state bars issuing guidance on the issue, although they both predicted it’s coming.

“It would not surprise me to see some state ethics opinions, as opposed to rules changes, in the coming year, just as we did when social media came on the scene,” said Maura Grossman, a research professor of computer science and an adjunct law professor at the University of Waterloo in Ontario, Canada.

“One of the risks of generative AI is that it can produce convincing prose that may not always be bound by fact,” said Bennett Borden, a DLA Piper partner and computer scientist. “It is incumbent upon lawyers to ensure that any generative AI output is accurate before they rely on it.”

At a meeting earlier this month, members of COPRAC had already discussed drafting a possible ethics alert or advisory opinion on AI for California lawyers. Duran’s directive now gives the panel a firm deadline for completing its work.