When AI isn’t the bottleneck, knowledge is

Inside Practice • February 3, 2026

Most firms will arrive in 2026 with no shortage of AI tooling. The harder problem is what sits underneath it: whether your firm’s knowledge foundations, governance, and operating model are strong enough to make AI reliable, scalable, and defensible in practice.


That’s the premise of AI x KM 2026: positioning Knowledge Management as infrastructure, not support, and treating the “knowledge backbone” as the real differentiator in the next phase of legal transformation.

Hosted by Inside Practice as a one-day, in-person forum, AI x KM 2026 is built for senior KM, innovation, and operational leaders who are done with abstract AI talk and ready to compare notes on what it actually takes to move from experimentation to sustained capability.


The conversations to watch


  1. Adoption isn’t a rollout. It’s alignment.
    Featuring Barbara Taylor and Elisabeth Cappuyns of DLA Piper, our day opens with a direct attack on the #1 execution risk: tools that don’t stick because workflows, roles, and incentives never changed. “Driving Adoption in Law Firms: Aligning Knowledge, Workflows, and AI” is framed around selection, implementation, feedback loops, and change management, so AI becomes a usable part of practice, not another tab attorneys ignore.
  2. KM “lifecycle” models are being replaced by living systems.
    Moderated by Katya Linossi of Clearpeople, “From Lifecycle to Living System: Reimagining Knowledge in the AI Era” pushes on a key tension: the moment AI reshapes how knowledge is created and reused, static models break. Expect a discussion centered on capture, governance, reuse, culture, and continuous transformation.
  3. The vendor question is really a taxonomy and curation question.
    AI success hinges on what you feed it, how you structure it, and who decides. “KM, Lawyers, and Vendors Working Together: Partnering with AI to Power Law Firm Expertise” is oriented around building effective taxonomies, deciding which assets become AI-ready, and translating outputs into lawyer-ready playbooks and workflow artifacts. Done well, this is where KM becomes a multiplier. Panelists come from Ballard Spahr.
  4. Pilot design is strategy, because vendor evaluation is now a core capability.
    “Designing Pilots with Purpose: Turning AI Vendor Evaluations into Scalable KM Solutions” is a case study led by Tom Trujillo of McGuireWoods LLP, built around the unglamorous work: vetting, selecting, proving value, planning financially, and avoiding adoption theater.
  5. Future-proofing means planning for tool depreciation and governance debt.
    AI stacks won’t stand still. The “Future-Proofing Your AI Strategy” session, led by Morgan Llewellyn of HIKE2, tackles total cost of ownership, the speed of tooling change, and why strong knowledge/data foundations are the only sustainable hedge.


The afternoon then turns to where KM strategy becomes visible to users, with sessions on search, UX, and matter-level intelligence:


  • “Aligning Knowledge Strategy with AI” with Michael Korn of Paul Hastings LLP
  • “Reimagining Enterprise Search” with Nicole Bradick of Factor Law
  • “Transforming Legal Knowledge into AI-Driven Case Intelligence” with Scott Kaiser of Mayer Brown
  • (And yes, there’s a networking reception to close.)


Key details at a glance


Date: Wednesday, April 29, 2026

Time: 9:00 AM – 5:30 PM (ET)

Location: New York

Early bird: Confirm by February 13, 2026 to save $300 (code AIKMNYC)


Bottom line: If your firm’s AI story is starting to sound like “we bought tools,” AI x KM 2026 is designed to pull the conversation back to what actually determines outcomes: knowledge design, governance, adoption, cost, and the operating model required to make AI trustworthy in practice. 
