AI in Law: 5 Essential Precautions for Ethical Practice

Law Office of W.F. "Casey" Ebsary Jr.


AI in Law – The legal world is abuzz with the potential of Generative AI. We’ve all seen the headlines promising transformed research and efficiency, but we’ve also seen the cautionary tales of AI “hallucinations” leading to serious professional and judicial consequences. As an attorney committed to utilizing the best tools for my clients, I believe the responsible integration of AI requires unwavering diligence.

How can legal professionals navigate the “desire to please” and the “hallucination” risks inherent in current AI models? 

Here are five non-negotiable precautions.

1. I Verify Every Single Citation

I treat every citation provided by an AI like a rumor until I’ve confirmed it myself. I will not trust a single case name, volume, or page number until I have manually cross-referenced it in a trusted, established legal database like Westlaw, LexisNexis, or Fastcase.

Verification doesn’t stop at confirming the citation exists. I am also responsible for reading the full text of the case. AI summaries can be misleading, take quotes out of context, or miss crucial details, such as the fact that a holding has been recently overturned.

2. I Neutralize My Prompts to Prevent “Yes-Man” Responses

Large Language Models (LLMs) are often designed to be helpful, which can manifest as a form of “sycophancy” or a “desire to please.” If I ask an AI, “Find me cases that support [X argument],” the model may inadvertently invent supporting materials just to satisfy the prompt.

To counter this bias, I structure my prompts objectively. I ask the AI to “Find cases relevant to [X legal principle],” or I employ “Devil’s Advocate” prompting, instructing the AI to identify the primary weaknesses in my argument.
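These prompt patterns can be sketched as simple templates. This is an illustrative sketch only: the argument and principle strings, and the helper name `build_prompts`, are hypothetical placeholders, not a prescribed workflow.

```python
# Hypothetical prompt templates contrasting leading vs. neutral phrasing.
# A leading prompt invites sycophancy; the neutral and devil's-advocate
# versions ask for authority on both sides.

LEADING = "Find me cases that support {argument}."  # risky: invites invented support
NEUTRAL = (
    "Find cases relevant to {principle}, including authority that cuts both ways."
)
DEVILS_ADVOCATE = (
    "Act as opposing counsel. Identify the primary weaknesses in the "
    "following argument and cite any contrary authority: {argument}"
)

def build_prompts(argument: str, principle: str) -> dict:
    """Return the neutral and devil's-advocate prompts for one research task."""
    return {
        "neutral": NEUTRAL.format(principle=principle),
        "devils_advocate": DEVILS_ADVOCATE.format(argument=argument),
    }

prompts = build_prompts(
    argument="the stop was unlawful because the officer lacked reasonable suspicion",
    principle="reasonable suspicion for traffic stops",
)
```

The point of the structure is that the model is never told which outcome the attorney wants.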

3. I Prioritize Tools Using RAG (Retrieval-Augmented Generation)

When conducting legal research with AI, the tool itself matters immensely. General-purpose LLMs are trained on the open internet, increasing the chance of hallucination. I prioritize legal-specific AI platforms that employ Retrieval-Augmented Generation (RAG).

RAG constrains the AI’s responses to a pre-verified, trusted internal library of actual case law, statutes, and legal treatises. This dramatically reduces the likelihood of the AI inventing citations and ensures that any output is grounded in genuine legal authority.
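The retrieval step that makes RAG safer can be illustrated with a toy sketch. The in-memory "library" and the word-overlap scoring below are simplified stand-ins for a real verified legal database and a real search index; commercial tools are far more sophisticated, but the grounding principle is the same.

```python
# Toy illustration of RAG's retrieval step: the generation step is limited
# to documents retrieved from a verified library, so cited authority must
# come from real documents rather than the model's imagination.

def retrieve(query: str, library: list[str], top_k: int = 2) -> list[str]:
    """Rank library documents by simple word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        library,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical mini-library standing in for a verified legal database.
library = [
    "Fla. Stat. 316.193 governs driving under the influence penalties.",
    "Terry v. Ohio permits brief investigative stops on reasonable suspicion.",
    "Fla. Stat. 790.01 addresses carrying concealed weapons.",
]

context = retrieve("reasonable suspicion for an investigative stop", library)
# The generation step would then be constrained to `context`, so any
# authority in the answer must appear in the retrieved documents.
```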

4. I Issue Strict “No-Hallucination” Commands and Follow Court Orders

I begin my interaction with an AI model with explicit system instructions: “Do not provide any legal authority or factual information that you cannot fully verify. If you do not know the answer with 100% certainty, state that you do not know. Under no circumstances should you invent citations.”
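In tools that accept a separate system instruction, that command can be placed ahead of every request. The chat-message structure below follows a common API convention, but the exact format is an assumption; adapt it to whatever platform you actually use.

```python
# Sketch of placing a strict no-hallucination instruction in the system role,
# ahead of the user's research question. The message format shown here is a
# common convention, not a specific vendor's required schema.

SYSTEM_INSTRUCTION = (
    "Do not provide any legal authority or factual information that you "
    "cannot fully verify. If you do not know the answer with 100% certainty, "
    "state that you do not know. Under no circumstances should you "
    "invent citations."
)

messages = [
    {"role": "system", "content": SYSTEM_INSTRUCTION},
    # Hypothetical user question for illustration only.
    {"role": "user", "content": "Summarize the elements of burglary under Florida law."},
]
```

Because the instruction sits in the system role, it applies to every turn of the conversation rather than only the first question.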

Furthermore, I strictly adhere to all standing court orders regarding AI usage. As more judges require “Certificates of AI Disclosure,” I ensure that I can truthfully and confidently sign such a certificate, affirming that a human has diligently reviewed and verified every AI-assisted portion of the filing.

5. I Accept That Ethical Responsibility Stops with Me

This is the most critical precaution. The Duties of Candor and Diligence belong exclusively to the attorney of record, not the software provider. Under the Rules of Professional Conduct, I am ultimately and solely responsible for the accuracy of every word submitted in a court filing.

I cannot “blame the computer” or an “AI hallucination” for presenting false information. The technology is a tool, not a replacement for my legal judgment and professional obligations. At the end of the day, the buck stops with me.

Common Questions About AI in Legal Practice

What exactly is a “hallucination” in the context of legal AI?

An AI hallucination occurs when a model generates information that sounds highly confident and authoritative but is entirely fabricated. In a legal setting, this often results in the creation of non-existent case names, fake citations, or fictional judicial opinions. Because these models predict the next likely word rather than searching a database, they can “dream” up citations that look perfect but don’t exist.

Why does AI sometimes try to “please” the attorney with its answers?

This behavior, often called sycophancy, happens because AI models are trained to be helpful and follow user instructions closely. If an attorney asks for cases that specifically support a certain argument, the AI may prioritize satisfying that request over sticking to the facts. This can lead the model to ignore contradictory law or even invent supportive precedents to give the user what it thinks they want.

How does Retrieval-Augmented Generation (RAG) make AI safer for law firms?

RAG is a framework that grounds the AI by forcing it to retrieve information from a specific, trusted set of documents—like a verified database of Florida statutes—before generating a response. Instead of relying solely on its internal training data, the AI uses these real documents as its primary source of truth. This significantly reduces the risk of hallucinations because the model is limited to the actual law provided in the search results.

Can I be disciplined for a mistake made by an AI tool?

Yes, under the Rules of Professional Conduct, the responsibility for any court filing rests entirely with the signing attorney. Ethical obligations, such as the Duty of Candor toward the tribunal, are non-delegable and cannot be shifted to a software provider. Judges have already sanctioned attorneys for submitting AI-generated briefs with fake citations, emphasizing that technology is not an excuse for failing to verify your work.

Are there specific court rules I need to follow when using AI?

Many jurisdictions and individual judges have issued “Standing Orders” that require lawyers to disclose that AI was used in the preparation of a filing. These orders typically require a signed certification stating that a human has personally verified the accuracy of all citations and legal arguments. It is vital to check the local rules for every court where you practice to ensure you are in full compliance with these evolving transparency requirements.

In the Thirteenth Judicial Circuit (Hillsborough County), Judge Jessica G. Costello has a specific “Procedures/Preferences” order.

Judge Jessica G. Costello’s AI Standing Order

Under the “Procedures/Preferences” section for her division, Judge Costello has implemented a mandatory disclosure and certification requirement for any filing that contains AI-generated content.

The Requirement:
“If any attorney or pro se party submits to the court any filing or submission containing AI generated content, that attorney or pro se party must disclose the use of artificial intelligence on the face of the document and also must include a certification that the attorney or pro se party has personally reviewed and verified the accuracy of the content.”

Sample Language of Judge’s Standing AI Order

CASE LAW, LEGAL AUTHORITY, AND ARTIFICIAL INTELLIGENCE:

Please provide case law and any legal authority to the court at least two (2) business days prior to a scheduled hearing. 

An attorney may ethically utilize artificial intelligence “AI” technologies but only to the extent that the lawyer can reasonably guarantee compliance with the lawyer’s ethical obligations. Attorneys must comply with Florida law and the applicable Rules Regulating the Florida Bar. (See Florida Bar Ethics Opinion 24-1 (Jan. 19, 2024)).

If any attorney or pro se party submits to the court any filing or submission containing AI generated content, that attorney or pro se party must disclose the use of artificial intelligence on the face of the document and also must include a certification that the attorney or pro se party has personally reviewed and verified the content’s accuracy. Failure to include this certification or comply with this verification requirement will be grounds for sanctions, as permitted by law.

AI Disclosure

We utilized artificial intelligence tools to assist in the writing and overall layout of this article. Similarly, the accompanying infographic was designed and generated using AI technology. However, an attorney has fully reviewed and edited both the text and the visual elements to ensure their accuracy and appropriateness.
