First Published: December 12, 2024
Last Revised: NA
<aside>
🔉
Prefer listening to reading?
Implementing AI in Access to Justice Organizations.wav
AI-generated podcast courtesy of NotebookLM. May contain inaccuracies.
</aside>
Legal services organizations in the U.S. are exploring large language models (LLMs) as a transformative tool to bridge the justice gap. These AI models can streamline routine tasks and amplify the capacity of nonprofits, clinics, and pro bono programs. In fact, the Legal Services Corporation (LSC) reports that the roughly 50 million low-income Americans eligible for civil legal aid get no or not enough legal help for 92% of their civil legal problems, a gap AI presents a “generational opportunity” to help close by improving efficiency and productivity. This guide offers strategic insights for adopting LLMs, focusing on practical, high-impact use cases and adoption best practices for U.S. access-to-justice organizations.
Prioritizing Use Cases
Start with the problem, not the technology. Begin by identifying pressing pain points or bottlenecks in your organization’s work. Common candidates are tasks that are resource-intensive, repetitive, or prone to delay, which makes them ripe for automation. For example, many legal aid groups struggle with client intake backlogs, where overwhelmed staff and long wait times delay services. If intake is a major bottleneck, a viable use case might be an AI-assisted intake chatbot that handles basic inquiries, collects information, and routes eligible cases for review. The key is to choose one well-scoped use case that will yield quick wins: “low-hanging fruit” tasks that can be improved with minimal risk.
High-Impact LLM Applications in Legal Aid: When brainstorming use cases, consider areas where LLMs have already shown promise in legal services:
- Document Automation: LLMs can draft and fill documents based on prompts or data, reducing the time attorneys spend on forms, letters, and pleadings. For instance, generative AI can review hundreds of pages of contracts in minutes and extract key information for a first draft. This means automating routine paperwork (e.g. assembly of court forms or standard motions) so staff can focus on higher-level advocacy (the first sketch after this list shows a minimal version of this kind of drafting).
- Client Communication (Chatbots & Email Drafts): AI chatbots can answer common questions, provide legal information, and even help users complete guided interviews in plain language. For example, Legal Aid of North Carolina’s “LIA” chatbot (built with a legal tech partner) helps people navigate issues like domestic violence and landlord-tenant disputes by providing resources and next steps. LLMs can also draft personalized emails or letters to clients, translating “legalese” into understandable terms. These uses improve responsiveness and accessibility for clients who might otherwise wait days for an answer.
- Legal Triage and Intake: LLMs excel at analyzing narratives and spotting issues, making them useful for triaging client applications. An AI can parse a client’s description of their problem and suggest the likely legal issue and urgency, helping staff decide who qualifies for services. In one project, a hybrid intake system in Missouri combined a scripted interview with an LLM that analyzed free-text answers and generated follow-up questions in real time. This AI-assisted triage predicted with about 84% accuracy which cases would meet a legal aid program’s criteria (with GPT-4 performing best). Importantly, it rarely misclassified eligible clients as ineligible, instead asking additional questions when unsure. Such results suggest LLMs can expedite intake without sacrificing accuracy, directing limited staff attention to the most urgent cases first (the second sketch after this list illustrates the pattern).
- Legal Research Assistance: An LLM-powered research assistant can save attorneys hours by quickly summarizing case law, statutes, or documents. For example, Lone Star Legal Aid developed Juris, a document-reading chatbot that lets staff query a library of legal documents and get answers or summaries, streamlining their research tasks (the third sketch after this list shows the core pattern). Likewise, some innocence projects use AI to sift through trial transcripts and evidence, helping identify useful facts for appeals in a fraction of the time. (One organization noted that an AI tool for reviewing records could have freed a wrongfully convicted client a decade earlier had it existed at the time.) These tools act like a junior researcher, speed-reading through materials and highlighting relevant points, but their output still requires attorney review for accuracy.
- Summarizing and Explaining Documents: LLMs can generate concise summaries of complex legal texts for both attorneys and clients. This might mean summarizing a lengthy lease or court order into key bullet points, or translating dense legal language into a reader-friendly explanation. By automating summaries, attorneys can more quickly digest new case documents, and clients receive information in clearer terms. For instance, an AI tool in New York helps tenants understand their rights to repairs by walking through housing code provisions and explaining next steps. In practice, a lawyer could use an LLM to summarize an evidence packet or client story before a meeting, ensuring nothing important is overlooked.
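To make the document-automation idea concrete, here is a minimal sketch of LLM-assisted drafting. It assumes the OpenAI Python SDK; the model name, form-letter template, and intake record are hypothetical placeholders, and any real deployment would route every draft through attorney review.

```python
# Minimal sketch: drafting a plain-language client letter from intake data.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. The template and intake record below are hypothetical.
from openai import OpenAI

client = OpenAI()

TEMPLATE = """Dear {client_name},
Our records show your landlord has not completed the repairs you reported.
[Explain the tenant's next steps in plain language, using only the facts provided.]
Sincerely,
{advocate_name}"""

intake = {
    "client_name": "Jane Doe",
    "advocate_name": "Legal Aid Staff",
    "facts": "Tenant reported a broken heater on Nov 1; landlord has not responded in 30 days.",
}

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model your vendor agreement covers
    temperature=0,   # keep drafts deterministic and conservative
    messages=[
        {"role": "system",
         "content": "You draft plain-language letters for a legal aid office. "
                    "Fill in the bracketed section of the template using only the facts given. "
                    "Do not invent facts or cite specific statutes."},
        {"role": "user",
         "content": "Template:\n" + TEMPLATE.format(**intake)
                    + "\n\nFacts:\n" + intake["facts"]},
    ],
)
print(response.choices[0].message.content)  # a human reviews before anything is sent
```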
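The hybrid triage pattern described above can also be approximated in a few lines. This second sketch is an illustration under stated assumptions, not the Missouri project’s actual code: the eligibility criteria, JSON fields, and model choice are placeholders, and it deliberately errs toward asking follow-up questions rather than screening anyone out, mirroring the behavior reported above.

```python
# Minimal sketch: LLM-assisted intake triage, loosely modeled on the hybrid
# scripted-interview-plus-LLM approach described above. Placeholders throughout.
import json
from openai import OpenAI

client = OpenAI()

CRITERIA = """Eligible matters: eviction defense, domestic violence protection orders,
public benefits denials. Household income must fall under the program's threshold."""

SYSTEM_MSG = (
    "You screen legal aid intake narratives against these criteria:\n"
    + CRITERIA
    + '\nRespond as JSON: {"issue": str, "urgency": "low"|"medium"|"high", '
      '"likely_eligible": bool, "follow_up_questions": [str]}. '
      "When unsure, mark likely_eligible true and ask follow-up questions "
      "rather than screening someone out."
)

def triage(narrative: str) -> dict:
    """Classify a free-text intake narrative and propose follow-up questions."""
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        response_format={"type": "json_object"},  # ask the API for strict JSON
        messages=[
            {"role": "system", "content": SYSTEM_MSG},
            {"role": "user", "content": narrative},
        ],
    )
    return json.loads(response.choices[0].message.content)

result = triage("My landlord taped a 10-day eviction notice to my door yesterday. "
                "I get SSI and have two kids at home.")
print(result["issue"], result["urgency"], result["follow_up_questions"])
```

A human screener would still make the final eligibility call; the model’s output only orders the queue and suggests what to ask next.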
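Finally, a bare-bones version of a document-reading assistant can be sketched by placing the document text directly in the model’s context. This third sketch is not how Juris is actually built; a production tool would add retrieval across a whole library, chunking for long files, access controls, and source citations. The file name and prompts are hypothetical.

```python
# Minimal sketch: question-answering and summarization over a single legal
# document. Production tools like the document-reading chatbots described
# above layer retrieval, chunking, and citation checks on top of this idea.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def ask_document(path: str, question: str) -> str:
    text = Path(path).read_text()  # assumes a plain-text export of the document
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Answer only from the document provided. If the document "
                        "does not answer the question, say so instead of guessing."},
            {"role": "user", "content": "Document:\n" + text + "\n\nQuestion: " + question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: summarize a lease before a client meeting.
print(ask_document("lease.txt", "Summarize the tenant's repair rights in plain language."))
```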
Prioritize the use case that offers the best mix of impact and feasibility. Weigh factors like expected time saved, improvement in client service, and technical complexity. It’s often wise to start with internal-facing applications (e.g. automating document drafts or research memos) before client-facing ones, so you can refine the technology in-house. Also consider where existing data or content is available to train or prompt the model – for example, do you have a trove of form letters, intake transcripts, or FAQs the AI can learn from? If so, those domains are good starting points. As one expert advises, target a workflow that “is resource intensive, repetitive or causing bottlenecks, and that you’d prefer to automate”. By focusing on a well-defined use case, you set a clear goal and avoid trying to “boil the ocean” with AI all at once.
Evaluating Tools and Partners
Not all AI solutions are created equal. Evaluate LLM tools and vendors carefully to find a fit for your needs while protecting client data and ensuring reliability. Start by deciding whether to build or buy:
- Off-the-shelf vs. custom: If you have tech capacity, you might integrate an open-source LLM or use a platform (like Docassemble with an AI plugin) to build a tailored solution. For many legal nonprofits, though, partnering with an established vendor or using a ready-made tool can jump-start the project. For example, Legal Aid of North Carolina partnered with the legal tech company LawDroid to build its LIA chatbot, and Housing Court Answers teamed with the platform Josef to create a tenant self-help tool. These partnerships brought in technical know-how and pre-built AI frameworks, saving the nonprofits from reinventing the wheel.
- Quality and domain relevance: Choose professional-grade AI tools that have been tested for accuracy with legal content. An LLM that’s “good enough” for casual use may not be reliable for legal work. Evaluate whether the model has access to authoritative legal sources or if it can be fine-tuned on laws and content relevant to your practice areas. Many general models (like GPT-4) perform impressively, but you may get better results with a version or add-on trained on legal texts. Always ask vendors for validation results or case studies in the legal domain.
- Privacy and confidentiality: Client confidentiality is paramount, so you must ensure any tool complies with privacy standards. Check where data is stored and who has access. For cloud-based models, prefer offerings that don’t retain or use your data for training. Many vendors now offer “no retention” guarantees or on-premise solutions for sensitive data. If using a public service (e.g. a free AI chatbot), avoid inputting real client facts unless you have explicit client consent, as doing so could violate ethical duties. It’s often worth paying for an enterprise plan or engaging a partner that signs a confidentiality agreement. Also verify the tool’s security measures, especially if it will integrate with case management systems or client databases.
- Cost and sustainability: Consider the budget and ongoing costs. Some LLM services charge per use or require subscriptions. Factor in not just initial development but maintenance (model updates, bug fixes, etc.). Seek out grants or pilot funding (LSC’s Technology Initiative Grants or private foundations) to support AI projects. Also explore free or discounted offerings for nonprofits – several AI companies and consultancies offer pro bono assistance to legal aid groups. If partnering with a vendor, clarify pricing for scaling up usage after a pilot.