Criminal Law Has an AI Trust Gap of Its Own
Criminal attorneys are adopting AI tools faster than their clients trust them. Learn 4 strategies to bridge the AI trust gap and build client confidence.

Trust is like glass: It takes time, skill, and care to create, and just one wrong move to destroy. This is perhaps the single biggest brake on the AI boom, especially in law. Many legal pros and clients aren’t sure they can trust AI, so they stick to what they know. But by playing it safe, they’re also falling behind.
Trust plays a critical role in AI adoption, and there are no shortcuts to creating it. AI service providers will need to continue to deliver results and build out the technology’s legal bona fides. If they do, legal AI adoption will likely keep trending up. In the meantime, though, research suggests that something unexpected is happening in criminal law.
While criminal attorneys and, to a lesser extent, criminal law firms are embracing AI tools, their clients aren’t so sure. There’s a trust gap here that has to be addressed.
Criminal law’s AI trust gap
In a recent survey of nearly 3,000 legal pros, the Federal Bar Association found that “AI adoption rates among legal professionals indicate steady interest,” attributing the gradual growth to “slow law firm adoption and restrictive law firm AI policies.” They also noted that adoption rates vary significantly across practice areas and firm sizes.
Amid all that variance, the survey found that criminal law is one of the top practice areas for generative AI adoption. 28% of individual criminal law practitioners—and 18% of firms—have integrated AI tools into their workflows. And among respondents who said they used AI for legal work, 45% said they did so on a daily basis.
So we can see that AI adoption rates are climbing (if not skyrocketing) and that the tools are gaining traction, especially with criminal attorneys. But when we examine similar research into prospective criminal law clients, an interesting tension reveals itself.
In another recent survey, shared by Robin AI and conducted by Perspectus Global, 69% of US and UK citizens said they would prefer “a traditional lawyer”—which is an interesting way of saying one who doesn’t use AI. 27% said they would rather have an attorney using AI tools for support, and just 4% said they would trust AI alone.
That tells us clients are starting to come around on AI, but they overwhelmingly want a “human in the loop.” As in the case of the Federal Bar Association survey, though, Robin AI’s results reveal that client trust in AI varies widely across practice areas. And here, criminal law stands out again—in the wrong way.
According to the survey, criminal defense was the legal task people trusted AI to handle least. Only 11% said they would be open to AI defending them in a criminal matter, and 61% said they wouldn’t even trust it to assist.
What can criminal attorneys do?
Here, a serious problem emerges. Without adopting AI tools, attorneys risk falling behind prosecutors who are already using advanced AI to get an edge earlier in the discovery process. So, how can criminal attorneys get their clients’ buy-in on AI in the face of such a severe trust disparity?
Closing criminal law’s AI trust gap is a communication challenge that must be confronted directly. Here are a few strategies criminal attorneys can adopt to set AI-skeptic clients at ease.
1. Establish a track record
It may seem obvious, but clients can’t fully appreciate AI tools if they don’t know what they’re capable of. It’s important to keep track of any AI-assisted wins your firm secures and to convey the role AI played in those achievements to your clientele.
Showcasing positive results enabled by AI can be tricky: confidentiality may keep specific case details off the table. And unless your firm is open about its use of AI tools, or that use becomes material in court, any AI-assisted wins will remain invisible to prospective clients.
The witness testimony discrepancy you caught in real time, the hours of jail calls you plucked a crucial quote from, the case strategy you made time to map out—if AI helped make such triumphs possible for your firm, then your clients need to know it. Legal AI use cases, grounded in detailed real-world outcomes, can accomplish far more than empty rhetoric alone.
Your clients hired you, not your tools, so it’s your job to demonstrate how using AI helps you protect their rights and build their defense.
2. Embrace AI education
The same data that illustrated criminal law’s AI trust gap also reveals another key to closing it: AI education. Remember, Robin AI’s survey showed that 27% of prospective clients are open to attorneys using AI tools for support. That’s a solid foundation to build on, and by exhibiting AI literacy, you can help convince skeptical clients.
But the key number here is 82: the percentage of respondents who said they wanted attorneys to take a safety or compliance course before using AI tools. In other words, the overwhelming majority of prospective clients are open to AI, but they need safety assurances before they're willing to act on that openness.
Don’t just say you know AI—invest in learning how to use it well, and help your clients understand it, too. Earning your clients’ trust means delivering results they can see. Building your team’s knowledge and showing clients how these tools strengthen their defense are practical steps that foster confidence.
3. Emphasize human oversight
The numbers are even clearer when it comes to keeping a human in the loop. Remember, just 4% of respondents to the Robin AI survey said they would trust an AI acting alone. That number speaks volumes about the importance of human oversight to legal AI adoption and should heavily inform how criminal firms talk to clients about AI.
As long as your firm’s AI-assisted workflows have a human in the loop, your clients don’t have to put their full trust in AI. They just have to put their trust in you. Your firm’s attorneys and support staff are the ones who leverage AI tools—and who are held responsible for their outputs.
AI tools can be unpredictable, but an experienced human trained to act as a failsafe mitigates that risk. Emphasizing this informed oversight also means being honest about the shortcomings of specific AI solutions and showing how your firm accounts for them, always putting your clients' protection first.
4. Communicate clearly
Clear communication is paramount to any successful attorney-client relationship, and that’s especially true when legal pros are moving to adopt AI tools. The more transparency you can provide around your firm’s AI-augmented workflows, the easier it will be to secure your clients’ buy-in. It’s just that simple.
The legal world is so packed with jargon that the dictionary has a dedicated word for it: legalese. Avoid AI-related legalese wherever you can; talking over your clients' heads doesn't accomplish anything. Explain in plain language what AI brings to the table for your firm and how your clients stand to benefit.
Setting clear expectations is one thing—delivering on them is another. Don’t overpromise what AI tools can deliver in terms of efficiency, reduced costs, or legal outcomes, but don’t sell the tech short, either. And be as proactive as you can in clearing up any confusion. If you feel an AI question around the corner, go ahead and address it.
The future of criminal law
AI tools have immense potential to reshape the criminal justice system for the better, from solving the digital evidence bottleneck to ensuring more just outcomes for all. But before that better future can be realized, criminal attorneys and their clients must be willing and able to take the AI adoption leap—and trained to do so responsibly.
New AI tools for attorneys are launching every day, while others like Rev have been around for years. Regardless of which solutions your firm chooses to adopt, building trust in their use requires a clear, informed approach. It’s up to you to guide your clients through these changes while keeping their defense your top priority.