5 Jan 2026

AI, law firms, and legal spend: The questions clients should ask in 2026


In 2025, many firms recorded their best financial performance in years and are now operating with clearer commercial discipline. Pricing, staffing, and technology decisions are being made earlier in the workflow, long before clients see the effect.

AI is now part of that early decision-making. It accelerates initial drafting, changes when review happens, and brings pricing questions forward. These choices all influence legal spend, yet they sit inside processes that remain largely out of sight for in-house teams.

For legal leaders, this adds to the tension in how spend is understood. Speed in one area does not always reduce effort in another. Tools need supervision. Review still anchors quality. And when the first view of spend appears at billing, it becomes harder to steer conversations about value. Leaders are also seeing rising expectations from CFOs: legal must anticipate costs earlier rather than justify them later. 

This article draws on our recent webinar with senior leaders from two major firms to outline the questions legal teams should prioritize in 2026 as AI becomes more embedded in legal work.

🎧 Watch the full webinar here: Across the Aisle: AI and the Price of Progress


What this article covers
  • How firms organize their AI programs and what that means for cost
  • Whether lawyers are equipped to use AI reliably, and how firms manage training and review
  • Whether AI actually reduces cost, and how firms define efficiency in practice
  • The level of visibility clients should expect around AI use and oversight
  • Whether AI will change which work stays in-house and how teams should prepare
  • The questions legal leaders should prioritize in 2026 when assessing AI legal spend


Q1. How are firms structuring their AI programs, and what should clients ask about them?

Large firms are now embedding AI to automate research, document review, and more. The appeal is obvious: greater efficiency and greater speed.

But the decisions that shape AI legal spend, from approved use cases and supervision requirements to risk controls and training, depend on how firms organize their internal programs. These structures influence cost early, long before billing shows the effect, and they differ from firm to firm. So, how are external law firms managing AI?

In our recent conversation with senior leaders from two major firms, both described formal governance models designed to guide these decisions with discipline. This mirrors what many GCs now expect: AI cannot sit in isolated experiments; it needs defined owners, clear rules, and visible accountability.

Each firm shared how it keeps AI use in line with best practice:

Taft, a full-service US firm with a national footprint, introduced a coordinated approach early. Lindsay Capeder explained why the firm needed a structured model rather than dispersed experimentation:

“A holistic approach was absolutely necessary for how we were looking at GenAI, and what our strategy was going to be.”

AI touches too many areas for decisions to sit within individual practice groups. Security, training, client expectations, and ethical oversight all require alignment.

Debevoise & Plimpton, an international firm known for transactional and disputes work, set up a similar model. William Sadd described the breadth of stakeholders involved:

“You want the general counsel’s office represented. You want to have IT and infosec. You want to make sure that partners who are leading AI advisory efforts are represented. And you want your executive stakeholders represented so that they can be stewards among the partnership.”

Both speakers noted that these teams meet often. Questions about tool use, review needs, and client-specific requirements develop quickly, and the governance structure must adapt with them. That agility, the ability to adjust controls as tools evolve, is becoming a differentiator for firms.

For in-house teams, this has a direct impact. Governance determines how AI is used, how supervision works, and which assumptions shape pricing.

Understanding a firm’s internal approach gives clients a more grounded view of how AI-related decisions influence spend before billing arrives.


Q2. Are lawyers equipped to use AI correctly, and what should clients ask about training and review?

The reliability of AI output depends heavily on how lawyers use the tools and how firms supervise the work. Even when AI accelerates early drafting or analysis, the quality of the output varies. That inconsistency creates new training needs and new demands on reviewers, and it directly influences AI legal spend.

Inside firms, expectations pull in different directions. Lawyers must follow strict client requirements while also responding to growing interest in new tools. Capeder described this tension:

“We constantly see the pendulum swinging… client expectations and guidelines… or what I call the shiny object syndrome.”

Her point was that enthusiasm for AI does not remove the need for careful supervision. The tools behave unpredictably. Two users can ask the same question and receive different answers, which means lawyers must learn both how to prompt and how to assess whether the output can be trusted.

Basic technical training is not enough; lawyers need to understand where AI can help, where it introduces new risk, and how much review is required before anything is sent externally.

From the reviewer’s perspective, the challenge is just as significant. Sadd explained that AI can produce polished text while still missing something important:

“Artificial intelligence can do a 10-page brief that’s absolutely perfect, and then make a mistake on page 11 that’s absolutely baffling.”

This is why firms are strengthening their review frameworks and training senior lawyers to identify where AI is reliable and where closer scrutiny is needed. Review remains essential. The output can look complete long before it has been appropriately evaluated. 

For in-house teams, the implication is tied directly to spend. AI may speed up early work, but it does not remove the need for thorough review. Without strong training and oversight inside firms, the risk of rework increases, and cost follows that pattern.


Q3. Does AI actually reduce cost, and what should clients ask about value?

For many in-house teams, the expectation is simple: if early work moves faster, a drop in spend should follow. Inside firms, the reality is more complicated. AI can speed up drafting or research, but it also introduces new review needs that influence cost. The output may arrive sooner, but it often requires more oversight, and that effort appears later in the workflow.

Clients regularly start from the assumption that faster early work should produce savings. As Capeder explained:

“We have clients who are looking at AI as an opportunity for cost savings and want to understand what we're utilizing, how we're utilizing it, how it's going to save them money.”

Her point was that even when AI reduces effort in one stage, firms still have responsibilities that cannot be bypassed. Lawyers must check the full output, validate reasoning, and confirm accuracy. Review becomes more important, not less, which affects how AI legal spend develops over time.

Sadd addressed the same issue from an operational perspective. Firms can see efficiency gains, but translating those gains into pricing is still an open question:

“It’s a little premature… a lot of firms and clients are studying this very closely. I think we are all trying to understand it.”

Efficiency gains do exist, but they appear unevenly across the workflow. Some phases compress; others expand to absorb new review needs. And because those changes look different across matter types, teams, and tools, firms and clients are still working out which ones actually influence cost. Until both sides agree on how to measure those changes, value will remain a conversation rather than a simple calculation.

So, AI may improve parts of the workflow, but it does not automatically reduce cost. Value depends on how the work is organized, how review is handled, and how openly firms explain where efficiency appears and where more oversight is needed.

Q4. What visibility should clients expect, and how openly are firms explaining their use of AI?

Visibility around AI use is becoming a core expectation for in-house teams. They want to understand how firms choose tools, how decisions are supervised, and how AI influences the development of spend. The problem is that these decisions happen inside the workflow, long before billing shows the effect.

In the discussion, both speakers noted that clients are now testing firms more directly. They ask about approved tools, how the firm controls risk, and how supervision works when AI is introduced. Sadd described what many clients are trying to uncover:

“I think they want to try to get a better understanding of which firms are taking this seriously… are you setting aside these sorts of high-risk use cases that impose new risks to the matter and don't add value, and are you seeking out those sorts of valuable opportunities to work with us?”

Capeder flagged how client policies influence what firms can share. Alignment between a firm’s internal approach and a client’s requirements is no longer optional:

“We want to make sure that our applications, our processes for evaluating these tools, onboarding them, are in alignment with what our clients are requiring.”

This creates a practical challenge for in-house teams evaluating value and cost. AI affects how early work is produced, how review time develops, and how decisions shape cost weeks before an invoice appears. Without structured visibility, clients cannot see how early choices influence spend. This is why more GCs are asking for clearer explanations upfront, simply to understand how cost is forming while the work is underway.

The strongest message from the conversation was that transparency is now part of value. Clients do not need every internal detail, but they do need clarity about how AI is supervised, how review standards work, and how the firm ensures reliability before anything is delivered.


Q5. What should clients ask next, and how can firms demonstrate responsible use of AI?

As AI becomes part of routine legal delivery, clients are starting to probe how their firms manage the practical risks behind it. The questions coming through RFPs and ongoing reviews now go beyond tool lists. They focus on supervision, quality control, and the extent to which AI influences legal spend.

From the firm's side, Sadd returned to what many clients are trying to assess:

“They want to try to get a better understanding of which firms are taking this seriously… and are seeking out valuable opportunities to work with us.”

And the emphasis is moving toward governance: who is overseeing decisions, how review works, and how firms ensure the output meets the standard the client expects. Capeder again flagged that this relies on alignment between a firm’s internal approach and each client’s requirements:

“…making sure that our applications, our processes for evaluating these tools, onboarding them are in alignment with our clients' requirements.”

For in-house teams, this points to three areas worth prioritizing in 2026:

  • Decision-making: Who approves the use of AI, and on what basis?
  • Supervision: How do senior reviewers assess AI-supported work before it is shared externally?
  • Accuracy: How does the firm validate output and document the process?

Across the discussion, one theme kept resurfacing: responsible use of AI becomes visible through a firm’s ability to explain how people, process, and review come together. Clients do not need every internal detail, but they do need to know that decisions are deliberate, review is active, and the approach will stand up when costs or outcomes are scrutinized.


Q6. Will AI change what work stays in-house, and how should clients prepare?

AI is prompting many legal departments to reconsider which parts of their workload could start internally. Faster drafting, cheaper experimentation, and wider access to tools have encouraged some teams to test whether early-stage work can begin in-house before external counsel becomes involved. The question is whether these early gains translate into genuine efficiencies or simply push the effort into later review.

Against that backdrop, both speakers acknowledged that some clients are already using AI to generate starting drafts. Capeder noted that these experiments often reveal the limits of the tools involved:

“We have had instances where clients have utilized GenAI to draft a starting point… and it has been an eye-opening experience for the client and for our attorneys because of some of the errors that were created in the tools the clients have used.”

Rather than reducing reliance on outside counsel, this development creates a new need for coordination. Firms must understand how a client’s internal tools work, what checks were applied, and how the starting materials should be interpreted. Without that clarity, early AI-assisted output can lead to additional rework or risk that only becomes visible later.

On the other hand, Sadd suggested that, in some cases, AI may create new opportunities:

“There might even be opportunities to deepen your relationship with your strategic client… stuff that we might have done in the past that was too expensive… this may actually represent an opportunity to grow the relationship.”

One thing is for sure: AI may influence how early tasks are created, but it does not replace the need for specialist oversight. Instead, it requires a clearer operating model around when internal tools are used, how external counsel engages with that output, and how spend is managed when early work accelerates but review work intensifies.


What this means for in-house teams in 2026

AI is introducing new ways of producing early legal work, but it is also changing how cost develops inside firms. Decisions around staffing, tool use, review time, and quality control now take shape much earlier, long before any billing shows the effect. For in-house teams, this creates a practical requirement: understanding cost while the work is still in motion (not when the invoice lands).

Firms are investing in governance, training, and supervision, yet many of the decisions that influence spend remain inside their internal workflows. AI may accelerate some steps, but review effort, corrections, and coordination often increase elsewhere. Without visibility into how this unfolds, legal leaders are left interpreting invoices without the context behind them.

This is where structured engagement and a live view of legal spend become a must-have. PERSUIT sets expectations early through clear scope and pricing, while Apperio provides a continuous view of work as it progresses. Together, they give in-house teams a way to see how cost builds, how staffing and review decisions take shape, and where efficiency appears or requires further checking.

As AI becomes more embedded in legal work, this level of visibility is becoming essential. Legal teams need confidence that efficiency is supporting the outcomes they expect, and a way to understand the work as it develops rather than after the spend has accrued.

If you want clearer insight into how continuous spend visibility and structured engagement can support your approach to outside counsel, we can help. Book a demo.

Author:

Dominic Aelberry

CEO