You’re a lawyer. You have client files to manage, deadlines to meet, research to do, and court filings piling up. Technology promises to make your life easier. AI tools claim they can draft contracts, review documents, predict case outcomes, and answer legal questions in seconds.
But here’s the tension: you’re required to stay competent in the tools you use. That’s not a suggestion. It’s an ethical duty. And when you bring AI into your practice, that duty gets more complicated. You need to understand what these tools actually do, how they can fail, and what risks they create for your clients.
What Technological Competence Actually Means
The American Bar Association amended the comments to Model Rule 1.1 back in 2012 to clarify that competence includes understanding the benefits and risks of relevant technology. Most state bars have since adopted similar language. This means you can’t just install software and hope for the best.
You need to know how the tools work. You need to understand their limitations. You need to recognize when they might produce errors, bias, or confidentiality breaches. And you need to supervise their use the same way you’d supervise a junior associate.
This duty applies whether you’re using AI to draft a motion, analyze discovery documents, or predict litigation outcomes. You’re responsible for the final work product. The fact that a machine helped create it doesn’t lower the bar.
Why AI Legal Tech Raises New Competence Questions
Traditional legal software is rules-based. It follows explicit instructions, and the same input always produces the same output. AI tools are different. They use machine learning, natural language processing, and statistical prediction. They don’t follow fixed rules. They identify patterns in data and make probabilistic guesses.
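To make that contrast concrete, here is a minimal, purely hypothetical sketch in Python. Everything in it is invented for illustration (the function names, the 30-day window, the weights); no real legal product works this way, but it shows why one kind of output can be trusted mechanically and the other cannot.

```python
def rules_based_deadline(days_since_service: int) -> bool:
    # Deterministic: a hard-coded rule, so the same input
    # always produces the same answer.
    return days_since_service <= 30

def pattern_based_prediction(features: dict) -> float:
    # Statistical: a toy stand-in for weights a model "learned"
    # from past data. Different training data would produce
    # different weights, and nothing guarantees the estimate
    # is correct.
    weights = {"prior_filings": 0.3, "claim_amount": 0.0001}
    score = sum(weights.get(name, 0.0) * value
                for name, value in features.items())
    return min(max(score, 0.0), 1.0)  # clamp to a probability-like range

print(rules_based_deadline(25))  # True, every single time
print(pattern_based_prediction({"prior_filings": 2, "claim_amount": 5000}))
# 1.0 here, but it is only an estimate; change the training data
# (the weights) and the answer changes with it
```

The first function can be verified once and trusted from then on. The second function’s answer depends entirely on what it learned from, which is why its output has to be checked every time it matters.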
This creates real risks. AI tools can hallucinate facts, cite cases that don’t exist, or miss critical legal issues. They can produce confident-sounding answers that are completely wrong. And they do this without warning you.
You can’t treat AI like a research assistant who knows what they’re doing. You need to verify everything. That takes time, skill, and judgment. It also requires you to understand enough about how the tool works to know when verification is critical.
Consider a lawyer who uses an AI tool to draft a brief. The tool cites three cases that seem perfect. The lawyer files the brief without checking. Later, opposing counsel points out that none of the cases exist. The lawyer faces sanctions, embarrassment, and potential discipline. This has already happened in multiple jurisdictions, most famously in Mata v. Avianca, where a federal judge in New York sanctioned lawyers for submitting a brief built on fabricated ChatGPT citations.
Your duty of competence means you need to catch these errors before they harm your client or damage your reputation.
How to Vet AI Tools Before You Use Them
You can’t meet your competence duty if you don’t understand the tools you’re using. That means doing real due diligence before you adopt any AI legal tech.
Start by asking basic questions. What does this tool actually do? What data was it trained on? How does it handle client information? What are its known limitations? Who built it, and what qualifications do they have?
You should also ask about accuracy rates, error types, and whether the tool has been tested in real legal contexts. If the vendor can’t answer these questions clearly, that’s a red flag.
Read the terms of service carefully. Many AI tools retain rights to use your input data for training or other purposes. That can create serious confidentiality issues. You need to know exactly how your client’s information will be handled and protected.
Consider whether the tool has been endorsed or reviewed by legal professionals you trust. Independent reviews, bar association guidance, and peer recommendations can help you make informed choices.
For a deeper look at how to assess vendors properly, see this guide on vetting legal tech vendors for ethical compliance.
Client Confidentiality and Data Security Risks
Your duty to protect client confidentiality doesn’t disappear when you use AI. In fact, it gets harder to satisfy. AI tools often send data to external servers, store information in the cloud, or use client inputs to improve their models.
You need to know where client data goes, who can access it, and how long it’s retained. You also need to ensure that the tool uses encryption, access controls, and other security measures that meet professional standards.
Some AI tools are built specifically for law firms and include strong confidentiality protections. Others are general-purpose tools that were never designed with attorney-client privilege in mind. You need to know the difference.
Before you input any client information into an AI tool, ask yourself whether doing so violates your confidentiality obligations. If you’re not sure, don’t use the tool until you get clarity.
In some cases, you may need to get informed client consent before using AI tools that involve their data. This is especially true if the tool could expose sensitive information or if the client has expressed concerns about technology use.
To better understand how to safeguard client information when using these tools, review this resource on protecting client privacy with legal tech tools.
Avoiding Bias and Ensuring Fairness
AI tools learn from data. If that data reflects historical biases, the tool will reproduce them. This can create serious fairness problems in legal practice.
For example, an AI tool trained on past sentencing data might recommend harsher outcomes for defendants from certain demographic groups. A contract review tool might flag clauses differently based on biased assumptions about parties. A legal research tool might prioritize results that reflect outdated or discriminatory legal standards.
You can’t assume that AI tools are neutral. You need to ask whether the tool has been tested for bias, what steps the vendor took to address fairness concerns, and whether independent audits have been conducted.
When you use AI to make recommendations that affect clients, you’re still responsible for ensuring those recommendations are fair, accurate, and appropriate. Your competence duty includes recognizing when a tool might be producing biased results.
Supervising AI Output Like You’d Supervise a Junior Lawyer
You wouldn’t let a first-year associate file a motion without reviewing their work. The same principle applies to AI. You need to supervise, verify, and exercise independent judgment over anything an AI tool produces.
This means reading every document the tool generates. Checking every citation. Confirming every factual claim. Reviewing every legal conclusion. You can’t outsource your judgment to a machine.
Supervision also means understanding when the tool is operating outside its competence. If you ask an AI tool to analyze an area of law it wasn’t trained on, the results will be unreliable. You need to recognize those situations and adjust your approach.
Some lawyers worry that this level of supervision defeats the purpose of using AI. But competence isn’t optional. If you can’t supervise the tool effectively, you shouldn’t be using it.
Staying Current as AI Technology Changes
AI legal tech is changing fast. New tools launch constantly. Existing tools get updated with new features, new training data, and new risks. Your duty of competence requires you to stay informed about these changes.
This doesn’t mean you need to become an AI expert. But you do need to understand the basics of how these tools work, what risks they create, and how to use them responsibly.
Attend CLE programs on legal tech and AI. Read guidance from your state bar. Follow credible sources that cover legal technology developments. And talk to colleagues about their experiences with different tools.
You should also revisit your tech choices regularly. A tool that was safe and effective last year might have changed in ways that create new risks. Periodic review helps you catch problems before they harm clients.
What Happens If You Fall Short
Failing to meet your duty of technological competence can result in serious consequences. You could face malpractice claims if your misuse of technology harms a client. You could face disciplinary action if you violate confidentiality, competence, or supervision rules.
You could also face sanctions in court if you file documents that contain AI-generated errors. Judges are increasingly alert to these problems and less forgiving of lawyers who claim they didn’t know better.
The good news is that most of these problems are preventable. If you take the time to understand the tools you use, verify their output, and protect client information, you can meet your competence duties while still benefiting from technology.
Balancing Efficiency with Responsibility
AI legal tech can make you more efficient. It can help you handle larger volumes of work, identify patterns you might miss, and free up time for higher-value tasks. But efficiency can’t come at the expense of competence.
You need to find the right balance. Use AI where it makes sense, but don’t let it replace your judgment. Verify its output carefully. Protect client information rigorously. And stay informed about how the technology works and where it falls short.
For a broader perspective on meeting your ethical duties when using these tools, this article on legal tech tools and ethical obligations provides useful context.
Moving Forward with Confidence
Your duty of technological competence isn’t about avoiding AI. It’s about using it responsibly. That means asking hard questions, doing real due diligence, and never outsourcing your professional judgment to a machine.
If you take these steps seriously, you can use AI legal tech in ways that serve your clients well, protect their interests, and keep you in compliance with your ethical obligations. The technology will keep changing. Your responsibility to stay competent won’t.

