How Robin AI Is Making AI Safe For Lawyers
Richard Robinson, cofounder and CEO of Robin AI, has a fascinating resume: he was a corporate lawyer for high-profile firms in London before founding Robin in 2019 to bring AI tools to the legal profession. Built on a mix of human legal expertise and software automation, Robin predates the big generative AI boom that kicked off when ChatGPT launched in 2022.
While the tools his company initially built were based on what we would have called “machine learning” a few years ago, the recent explosion of powerful models has allowed Robin AI to expand its ambitions. The company is moving beyond using AI to parse legal contracts and toward what Robinson envisions as a full AI-powered legal services business.
However, AI can be unreliable, and in the legal world, unreliability is a critical failure. We've all seen headlines about lawyers misusing ChatGPT, citing nonexistent cases in their filings. These attorneys have faced scathing rebukes from judges and even fines and sanctions.
In this conversation, Robinson discusses the problem of AI hallucinations, how his professional debate background informs his work, and how new technologies are forcing us to reevaluate the difference between facts and truth.
This interview has been lightly edited for length and clarity.
Introducing Robin AI: The AI Lawyer
Interviewer: Richard Robinson, founder and CEO of Robin AI. Tell me, what is Robin AI? What’s the latest?
Richard Robinson: We’re building an AI lawyer, and we’re starting by helping solve problems for businesses. Our goal is essentially to help businesses grow, because one of the biggest impediments to business growth isn’t revenue or managing your costs; it’s legal complexity. Legal problems can actually slow businesses down. So, we exist to solve those problems.
We’ve built a system that helps a business understand all of the laws and regulations that apply to them, along with all the commitments they’ve made, their rights, their obligations, and their policies. We use AI to make that information easy to understand, easy to use, and easy to ask questions about, so it can be put to work solving legal problems. We call it legal intelligence. We’re taking the latest AI technologies to law school, and we’re giving them to the world’s biggest businesses to help them grow.
Interviewer: A year and a half ago, your description was a lot heavier on contracts. It sounds like you’re more firmly in that direction now.
Richard Robinson: Yeah, that’s correct. We’ve always been limited by the technology that’s available. Before ChatGPT, we had very traditional AI models. Today we have, as you know, much more performant models, and that’s just allowed us to expand our ambition. You’re completely right, it’s not just about contracts anymore. It’s about policies, it’s about regulations, it’s about the different laws that apply to a business. We want to help them understand their entire legal landscape.
How Robin AI Delivers Legal Intelligence
Interviewer: Give me a scenario here, a case study. Recently, Robin amped up its presence on AWS Marketplace. How is that kind of hyperscaler cloud platform potentially going to open up the possibilities for you?
Richard Robinson: We help solve concrete legal problems. A good example is that every day, people at our customers’ organizations want to know whether they’re doing something that’s compliant with their company policies. Those policies are uploaded to our platform, and anybody can just ask a question that historically would’ve gone to the legal or compliance teams. They can say, “I’ve been offered tickets to the Rangers game. Am I allowed to go under the company policy?” And we can use AI to intelligently answer that question.
Every day, businesses are signing contracts. That’s how they record pretty much all of their commercial transactions. Now, they can use AI to look back at their previous contracts, and it can help them answer questions about the new contract they’re being asked to sign.
Interviewer: Are you taking away the work of the junior lawyers? How is it changing the work of the entry-level law student or intern?
Richard Robinson: With AI, in-house legal teams can handle more work themselves, so they don’t have to send as much to their law firms as they used to. You’re right, the work is shifting, no doubt about it. For the most part, AI can’t replicate a whole job yet. It’s part of a job. So, we’re not seeing anybody cut headcount from using our technologies, but we do think they have a much more efficient way to scale, and they’re reducing dependence on their law firms over time.
AI goes first, basically, and that’s a big transformation. Their hands are still on the steering wheel. They have AI go first, and then people are used to check. We make it easy for people to check our work with pretty much everything we do. We include pinpoint citations and references, and we explain where we got our answers from. So, the role of the junior or senior lawyer is now to say, “Use Robin first.” Then, their job is to make sure the work was done correctly.
Tackling the AI Hallucination Problem in Law
Interviewer: How are you avoiding the hallucination issue? We’ve seen these mentions in the news of lawyers submitting briefs to a judge that include stuff that is completely made up.
Richard Robinson: Yeah, it’s a real issue. It’s the number one question our customers ask. I do think it’s a big part of why you need specialist models for the legal domain. To answer your question directly, we include citations with very clear links to everything the model does. So, every time we give an answer, you can quickly validate the underlying source material.
The second thing is that we are working very hard to only rely on external, valid, authoritative data sources. We connect the model to specific sources of information that are legally verified, so that we know we’re referencing things you can rely on.
The third is that we’re educating our customers and reminding them that they’re still lawyers. It doesn’t matter which tool you use to get there. It’s on you as a legal professional to validate your sources before you send them to a judge or even before you send them to your client. Some of this is about personal responsibility because AI is a tool.
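The grounding approach Robinson describes maps onto a familiar pattern: retrieve only from verified sources, answer only from what was retrieved, and attach a citation so a human can check the work. The sketch below is a minimal illustration of that pattern, not Robin AI's actual system; the clause store, the keyword-overlap retrieval, and the answer format are all hypothetical placeholders.

```python
# A minimal, illustrative sketch of citation-grounded Q&A over verified policy
# text. This is not Robin AI's implementation; the clause store, the retrieval
# scoring, and the answer format are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Clause:
    source: str  # pinpoint citation, e.g. "Gifts & Hospitality Policy, s.3.2"
    text: str

# Stand-in for a store of legally verified source material.
VERIFIED_CLAUSES = [
    Clause("Gifts & Hospitality Policy, s.3.2",
           "Employees may accept event tickets valued under $200 with written manager approval."),
    Clause("Gifts & Hospitality Policy, s.3.4",
           "Gifts from current or prospective vendors must be declared to the compliance team."),
]

def retrieve(question: str, clauses: list[Clause], top_k: int = 2) -> list[Clause]:
    """Rank clauses by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    return sorted(
        clauses,
        key=lambda c: len(q_words & set(c.text.lower().split())),
        reverse=True,
    )[:top_k]

def answer_with_citations(question: str) -> str:
    """Answer only from retrieved clauses, attaching a citation to each one."""
    hits = retrieve(question, VERIFIED_CLAUSES)
    if not hits:
        return "No verified source found; escalate to the legal team."
    # A production system would have a language model draft prose from `hits`;
    # here we simply surface the clauses so a human can validate them directly.
    return "Relevant verified clauses:\n" + "\n".join(
        f"- {c.text} [{c.source}]" for c in hits
    )

if __name__ == "__main__":
    print(answer_with_citations(
        "I've been offered tickets to the Rangers game. Am I allowed to accept?"))
```

However a real system drafts its prose, the constraint is the one Robinson names: the model answers only from verified sources, and every answer carries a pinpoint citation a lawyer can check before relying on it.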
Interviewer: Is a major AI use case combing through old contracts to understand rights and wiggle room, especially as global relationships and tariffs change?
Richard Robinson: That’s exactly right. Any type of change in the world triggers people to want to look back at what they’ve signed up for. The most topical is tariff reform, which is affecting every global business. People want to know, “Can I get out of this deal?” That’s very similar to what we saw during COVID. We’re seeing the same thing now, but this time we have AI to help us. We are absolutely seeing global business customers trying to understand what the regulatory landscape means for them. That’s going to happen every time there’s regulatory change.
Beyond Facts: AI and the Search for Truth
Interviewer: Let's get a little philosophical. It seems to me that the two steps at the core of this are how do we figure out what’s true, and how do we figure out what’s fair?
Richard Robinson: I think that’s right. It’s increasingly difficult because there are so many competing facts and so many communities where people will selectively choose their facts. But you’re right, you need to establish the reality and the core facts before you can really start making decisions. I do think AI helps with all of these things, but it can also make it more difficult. It’s not obvious to me that we’re going to get closer to establishing the truth now that we have AI.
Interviewer: I think you’re touching on something interesting right off the bat, the difference between facts and truth.
Richard Robinson: Yes, that’s right. It’s very difficult to really get to the truth. Facts can be selectively chosen. I’ve seen spreadsheets and graphs that technically are factual, but they don’t really tell the truth. So, there’s a big gap there.
Interviewer: How do we deal with that at a time when these models are designed to be convincing, regardless of whether they’re creating truth or something else?
Richard Robinson: I think that you observe confirmation bias throughout society with or without AI. People are searching for facts that confirm their prior beliefs. AI is going to make it much easier for people who are looking for facts that back them up. It’s going to give you the world’s most efficient mechanism for delivering information of the type that you choose.
I don’t think all is lost because I also think that we have a new tool in our armory for people who are trying to provide truth. My hope is that the right side wins, that people in search of truth can be more compelling now that they’ve got a host of new tools available to them, but only if they learn how to use them.
Interviewer: How does a product like Robin AI lead all of this in a better direction?
Richard Robinson: I think a lot of this comes down to validation. The algorithms that power most of our social media platforms are what AI practitioners call “misaligned AI at scale.” These are systems where the AI models are not actually helping achieve goals that are good for humanity. They’ve been optimized to get our attention. I think you need platforms that find ways to combat that. In our context, we use citations. We’re saying don’t trust the model, test it. It’s going to give you an answer, but it’s also going to give you an easy way to check for yourself if we’re right or wrong.
The Art of Debate in the Age of AI
Interviewer: To me, debate is gamified truth search. Do we need a new model of debate in the AI era?
Richard Robinson: I think that’s what we should be doing. What we’ve observed over the last five or six years is less debate actually. People are in their communities, real or digital, and are getting their own facts. They’re actually not engaging with the other side. We need these systems to do a really robust job of surfacing all of the information that’s relevant and characterizing both sides. We now have AI systems that could give you a live fact check or a live alternative perspective during a debate. Wouldn’t that be great for society?
Interviewer: Tell me about the debate environment you grew up in and what that did for you intellectually.
Richard Robinson: My family was arguing all the time. It really helped me to develop a level of independent thinking because there was no credit for just agreeing with someone else. You really had to have your own perspective. It made me value debate as a way to change minds as well, to help you find the right answer, to come to a conversation wanting to know the truth and not just wanting to win the argument. For me, those are all skills that you observe in the law. Law is ambiguous. I think people think of the legal industry as being black and white, but the truth is almost all of the law is heavily debated.
The New Challenges Created by AI
Interviewer: What problems are we creating with the solutions that we’re bringing to bear?
Richard Robinson: We’re definitely creating new problems. I’d point to three things with AI. Number one, we are creating more text, and a lot of it is not that useful. People may just read less because it’s harder to sift through the noise to find the signal.
The second thing I’ve observed is that people are losing writing skills because you don’t really have to write anymore. What I observe is that people’s ability to sit down and write something coherent is actually getting worse because of their dependence on these external systems. To me, writing is deeply linked to thinking.
The final thing I would point to is that we are creating a crisis of validation. When I see something extraordinary online, by default, I don’t necessarily believe it. I assume things aren’t true, and that’s pretty bad, actually.
Interviewer: Richard Robinson, founder and CEO of Robin AI, thank you for joining me.
Richard Robinson: Thank you very, very much for having me.