Generative AI tools like ChatGPT and Claude have changed how lawyers draft documents, research case law, and communicate with clients. These tools save time and reduce costs. But they also raise serious ethical questions. Can you rely on AI-generated legal research? What happens if the AI invents a case citation? Who is responsible when something goes wrong?
You need to understand the rules before you use these tools. Bar associations across the country have issued guidance on AI use. Courts have sanctioned lawyers who submitted briefs with fake citations generated by AI. Your duty to your clients, the court, and your profession does not disappear just because you used technology. This guide explains the ethical boundaries you must respect when using generative AI in your legal work.
Your Professional Duty Still Applies
The American Bar Association’s Model Rules of Professional Conduct apply to every tool you use, including AI. Rule 1.1 requires you to provide competent representation, and Comment 8 to that rule extends the duty to keeping abreast of the benefits and risks of relevant technology. That means you must understand the technology you rely on. You cannot blame the AI if you file a motion with fabricated case law.
Rule 1.6 protects client confidentiality. When you input client information into a generative AI tool, you may be sharing that data with a third party. Many AI platforms use your inputs to train their models. That creates a confidentiality breach unless you have client consent or use a platform with proper data protections.
Rule 5.3 requires you to supervise nonlawyer assistants. Courts and ethics boards increasingly treat AI tools as assistants you must supervise. You carry full responsibility for any output the AI produces. If the AI makes an error, you own that error.
Several states have clarified these duties. New York’s ethics opinion 2024-03 states that lawyers must verify all AI-generated content before using it. Florida’s opinion 24-1 adds that lawyers should inform clients when AI plays a substantial role in their case. California reminds attorneys that technological competence is now part of the duty of competent representation.
Where Generative AI Creates Real Risks
Generative AI can produce text that looks accurate but contains completely false information. This problem, called hallucination, has led to embarrassing and costly mistakes.
In 2023, in Mata v. Avianca, a New York lawyer submitted a brief citing six fake cases generated by ChatGPT. The court sanctioned him and his co-counsel and ordered them to pay a $5,000 fine. He later said he did not know the AI could invent citations. That excuse did not matter. The court held him fully accountable.
AI tools also struggle with nuance. They cannot assess the credibility of a witness, interpret ambiguous contract language, or apply subjective legal standards. They generate text based on patterns in their training data, not on legal reasoning or judgment.
Confidentiality breaches present another major risk. If you paste client emails, contracts, or case details into a public AI tool, you may violate attorney-client privilege. Some AI vendors retain and analyze your inputs. Others share data with third parties for model training. You must read the terms of service before using any tool.
Bias is another concern. Generative AI models learn from existing data, which often reflects historical bias. An AI trained on past sentencing data may suggest harsher penalties for certain demographic groups. An AI trained on employment disputes may favor employers over workers. You must review AI outputs critically and correct for bias.
Best Practices for Ethical AI Use
You can use generative AI responsibly if you follow clear guidelines. Start by choosing the right tools. Look for platforms designed specifically for legal work. These platforms often include citation verification, confidentiality protections, and audit trails.
Always verify AI-generated content. Run every citation through Westlaw, LexisNexis, or another trusted database. Read the cases yourself. Check that the AI quoted them accurately and applied them correctly. Do not rely on the AI’s summary.
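Citation checking lends itself to a simple pre-verification step. The sketch below, in Python, pulls citation-like strings out of a draft so that nothing slips past your manual check. The regular expression is a simplified assumption covering only a few federal reporters, and it replaces none of the database lookups or case reading described above.

```python
import re

# Simplified pattern for common federal reporter citations such as
# "410 U.S. 113" or "598 F. Supp. 3d 402". Real citation formats vary
# far more than this; the goal is only to build a checklist, not to
# validate anything automatically.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                 # volume number
    r"(?:U\.S\.|S\. ?Ct\.|"         # Supreme Court reporters
    r"F\. Supp\.(?: [23]d)?|"       # Federal Supplement
    r"F\.(?:2d|3d|4th)?)"           # Federal Reporter
    r"\s+\d{1,4}\b"                 # first-page number
)

def verification_checklist(draft: str) -> list[str]:
    """Return every citation-like string in a draft, to be looked up
    one by one in Westlaw, LexisNexis, or another trusted database."""
    return sorted(set(CITATION_RE.findall(draft)))

# Hypothetical draft text, for illustration only.
draft = "Plaintiff relies on Smith v. Jones, 123 F.3d 456, and 410 U.S. 113."
for cite in verification_checklist(draft):
    print("VERIFY:", cite)
```

A script like this tells you what to check, never whether a case is real. Every flagged citation still goes through a trusted database and your own reading.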
Avoid inputting sensitive client information into public AI tools. If you must use AI for drafting or research, redact names, case numbers, and identifying details. Better yet, use a platform with enterprise-level data protections and a contract that prohibits data retention or sharing.
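If you do route text through an external tool, a scripted first pass can strip the most obvious identifiers before anything leaves your machine. This is a minimal sketch with illustrative patterns and placeholder labels of my own choosing; pattern-based redaction misses plenty, so a human still has to review the result.

```python
import re

# Illustrative patterns only; case-number and identifier formats vary
# by court and jurisdiction, and regex redaction is never exhaustive.
REDACTIONS = [
    (re.compile(r"\b\d{1,2}:\d{2}-cv-\d{3,5}\b"), "[CASE NO.]"),  # e.g. 1:23-cv-04567
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace known identifiers with placeholders. Client names must
    be supplied explicitly; no script can guess every identifying detail."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

print(redact("Jane Doe (1:23-cv-04567) wrote from jane@example.com.", ["Jane Doe"]))
```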
Disclose your AI use when appropriate. Some courts now require lawyers to certify that they verified all citations. Some clients want to know if AI played a role in their case. Transparency builds trust and protects you from claims of deception.
Document your process. Keep records of which AI tools you used, what prompts you entered, and how you verified the outputs. If a question arises later, you can show that you acted responsibly.
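One lightweight way to keep those records is an append-only log with one JSON line per use. The field names below are illustrative assumptions, not any bar-mandated format; the point is that the tool, the prompt, and the verification step get captured at the time of use.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_use_log.jsonl")  # one JSON record per line

def log_ai_use(tool: str, prompt: str, verification: str) -> None:
    """Append a dated record of which tool was used, what was asked,
    and how the output was checked."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "verification": verification,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_use(
    tool="general-purpose chatbot",
    prompt="Summarize the elements of promissory estoppel.",
    verification="Checked each element against a treatise and the cited cases.",
)
```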
Train your team. Everyone in your firm who uses AI should understand the ethical rules and the risks. A clear legal tech adoption plan helps your firm stay compliant and avoid mistakes.
Situations Where AI Use Is Appropriate
Generative AI works well for certain tasks. You can use it to draft routine documents like demand letters, discovery requests, or client intake forms. The AI can produce a solid first draft that you then review and edit. This saves time without creating significant risk.
Research is another area where AI can help. You can ask the AI to summarize a legal concept, identify relevant statutes, or suggest search terms. But you must verify everything. Treat the AI as a research assistant who needs close supervision.
Client communication is a third area. You can use AI to draft emails, FAQs, or educational materials for clients. Again, you must review the content carefully. Make sure it reflects your voice and accurately states the law.
Some platforms now offer AI tools specifically designed for lawyers. These tools include built-in citation checking, confidentiality protections, and ethical guardrails. They cost more than general-purpose AI, but they reduce your risk.
Situations Where AI Use Is Risky
Avoid using generative AI for tasks that require judgment, strategy, or client interaction. Do not let the AI draft a complaint, a brief, or a settlement agreement without your close involvement. These documents require legal analysis and strategic thinking that AI cannot provide.
Do not use AI to communicate directly with clients, courts, or opposing counsel. The AI cannot understand context, tone, or the nuances of your professional relationships. A poorly worded email can damage your case or your reputation.
Do not rely on AI for legal advice. You can use it to explore ideas or generate options, but the final decision must come from you. Your clients hired you for your judgment, not the AI’s output.
Avoid using AI for tasks involving sensitive data unless you have strong confidentiality protections in place. If you work with trade secrets, medical records, or financial information, you need a platform that guarantees data security.
How Courts and Regulators Are Responding
Courts are starting to address AI use directly. Some judges now require lawyers to certify that they verified all citations in their briefs. Others ask lawyers to disclose whether they used AI. A few courts have issued standing orders on AI use.
Bar associations have issued ethics opinions in more than a dozen states. Most opinions follow a similar pattern. They allow AI use but require lawyers to verify outputs, protect client confidentiality, and maintain competence. None of the opinions ban AI outright.
Regulatory agencies are also paying attention. The Federal Trade Commission has warned AI vendors about deceptive practices. The Department of Justice has prosecuted cases involving AI-generated fraud. As AI becomes more common, expect more regulation.
Some legal organizations are developing AI ethics guidelines. The American Bar Association’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on generative AI in July 2024, and state bars are hosting CLE programs on AI ethics. Legal automation ethics is now a core topic in professional responsibility training.
What to Do If You Make a Mistake
If you discover an error in AI-generated work, act quickly. Correct the mistake and notify anyone who received the flawed document. If you submitted a brief with fake citations, file a motion to correct the record and inform the court immediately.
Apologize if appropriate. Take responsibility for the error. Do not blame the AI. Courts and clients expect you to supervise your tools.
Review your processes to prevent future mistakes. Ask yourself what went wrong. Did you skip verification? Did you use the wrong tool? Did you lack training? Fix the gap.
Report the issue to your malpractice insurer if the error could lead to a claim. Get advice from an ethics attorney if you think you may have violated professional conduct rules.
Moving Forward with Confidence
Generative AI offers real benefits for legal professionals. It can save time, reduce costs, and improve client service. But it also creates risks you must manage carefully.
Your ethical duties have not changed. You must still provide competent representation, protect client confidentiality, and supervise your tools. The difference is that AI makes it easier to make mistakes and harder to catch them before they cause harm.
Start small. Use AI for low-risk tasks like drafting routine documents or generating research ideas. Verify everything. Build systems to catch errors before they reach clients or courts. Train your team. Stay current on ethics guidance from your state bar.
You do not need to avoid AI. You need to use it responsibly. Understand the risks, follow the rules, and put your judgment above the machine’s output. When you do, AI becomes a helpful tool rather than an ethical minefield.
If you have questions about AI use in your practice, consult your state bar’s ethics hotline or speak with a legal ethics attorney. General information like this article cannot replace professional advice tailored to your specific situation. But it can help you ask the right questions and make informed choices about the technology you use.

