Charting Legal AI's Future: Key Takeaways from the Legal AI Pathfinder's Assembly NYC

Inside Practice • December 12, 2023

The Legal AI Pathfinder’s Assembly – New York took place last week at the offices of Simpson Thacher on December 7th.

Developed as a think tank and peer-to-peer exchange for law firms charting their respective AI roadmaps, the program took a deep dive into the evolving role and impact of generative AI within law firms – examining how several firms are leaning in, proactively experimenting, and investing in the strategic integration of AI into both the practice and business of law.


Conference chair Damien Riehl, VP, Solutions Champion at vLex, welcomed attendees to the program with a retrospective look at the major AI milestones and events of the past year.


In the opening session, Making Sense of 2023: Trajectory and the Speed of Change, Daniel Katz (Chicago-Kent College of Law, 273 Ventures) was interviewed by the co-hosts of The Geek in Review podcast, Greg Lambert (Jackson Walker LLP) and Marlene Gebauer (Mayer Brown).

 

The discussion was anchored by two independent arcs of maturity – one focused on the underlying models themselves, the other on additional engineering enhancements. Professor Katz guided attendees along this arc, from out-of-the-box "zero-shot" capabilities through basic functionality to more advanced, integrated applications incorporating increasingly complex elements.

 

Dan likened this progression to a slow, systematic climb – as opposed to summiting Mount Everest overnight, as much of the hype would have us believe. Clearly, a great deal of fine-tuning still lies ahead: these models require an exceptional "information diet," balanced against ethical walls and information security.

 

Next, we heard from Joe Green, who shared Gunderson Dettmer's story of building its own internal generative AI chat application, "ChatGD" – including why the firm decided to build it, how it's used, and what the firm has learned in the process.

 

Joe emphasized the importance of setting realistic expectations and goals for AI adoption in law firms, focusing on flexibility and value rather than perfect replacement of human attorneys – perhaps minimizing the type of work that clients don't particularly value while reframing (in the client’s mind) what it is that they are happy to pay law firms for.

 

Following Joe, we heard from the IncuBaker team at BakerHostetler – featuring Katherine Lowry, Leigh Zeiser, and Elaine Dick. This comprehensive deep dive into BakerHostetler's AI journey offered valuable insights into the strategic, disciplined approach necessary for successful AI integration in law firms – emphasizing the importance of building a strong data foundation and not rushing into AI without proper assessment and evaluation.


The IncuBaker team covered an incredible amount of ground within their allotted time, sharing a number of guiding principles that help to assess use case feasibility and business value and more broadly navigate the complexities of AI adoption.

 

Troutman Pepper's Jason Lichter and Andrew P. Medeiros were up next, delving into the development and strategic implementation of the firm's AI Task Force – emphasizing the importance of identifying AI aptitude across practice areas, forming a cohesive strategy for AI integration, and ensuring widespread expertise within the firm.

 

The Troutman team also shared a live demo of “Athena,” the firm’s ChatGPT application developed to help attorneys and business professionals safely and effectively use generative AI to improve workflows and enhance overall user and client experiences. Andrew and Jason fielded a number of audience questions and led a very lively discussion.

 

After lunch, we heard from the team at Gibson Dunn, as Meredith Williams-Range, Lawrence Baxter, and Jeff Saper led the audience through a discussion centered on the critical role of data in driving AI transformation – with a particular focus on governance, security, and privacy.

 

This session highlighted the interconnectedness of data collection, governance, and analytics as foundational pillars for effective AI implementation – emphasizing the importance of cross-functional teams, the collective ownership of data, and the role of stewardship in managing and maintaining data.

 

“Unintentional data breaches can be just as dangerous as intentional ones.”

 

Up next, we had Philip Bryce, Amol Bargaje, and Marlene Gebauer from Mayer Brown leading attendees through a discussion centered around the firm’s journey through this year’s wild AI hype cycle.

 

The team at Mayer Brown traced their journey from the initial exploration of emerging GenAI tools and tech – and the accompanying need for adaptability and strategic planning – to addressing client inquiries about AI's impact.

 

Mayer Brown's approach to AI adoption is focused on creating a balance between innovation and responsible use, emphasizing the importance of setting expectations and ensuring that AI tools are evaluated for their feasibility, business value, and ethical implications.

 

This was a fantastic conclusion to the program's "firm spotlight sessions," leading us to the closing Futurecasting panel – featuring Kevin Walker, CEO of Centari; James Ding, Co-founder & CEO of DraftWise; and Emily Colbert, Senior Vice President, Product Management at Thomson Reuters – moderated by the day's conference chair, Damien Riehl.

 

This closing discussion aspired to break AI's capabilities down into discrete terms – illuminating the opportunities that sit between pain points and what the technology can do, where these applications fit within the value chain, and, more broadly, what these visions of the future mean for law firms today in terms of tech strategy, data readiness, business models, talent, and training.

 

Each of our esteemed panelists shared their own foresight and projections, sparking some very compelling dialogue with attendees. These projections were largely limited to the next 12-to-24 months, which seemed a reasonable time horizon given all that has transpired in the past year.

 

  • Will there be a shift from experimentation with GenAI to practical use within the next year?
  • Should we expect AI to become a regular part of lawyers' workflows, with less reliance on manual review and more focus on using AI for edge and efficiency?


 

Exercising human judgment will likely always be part of the equation. While AI can and will expedite certain workflow components, most in attendance felt that a combination of human expertise and relevant data will still produce the best results for clients.

 

At the same time, clients are expecting law firms to keep up with the pace of change and evolve and improve in step with emerging tech capabilities. And therein lies the challenge.

 

Strategic and practical innovation increasingly demands refreshed skillsets, proper training, and education – enabling us to work with and alongside emerging tools and technologies to the point where we are not merely adapting to change but actively driving it, taking a hands-on, developmental approach to shaping the tools that will no doubt reshape our profession.

 

Last week’s conference proved to be a fascinating and comprehensive exploration of an emerging blueprint for data-driven transformation that balances practical and realistic near-term expectations with longer-term aspirational goals—emphasizing the importance of education, experimentation, innovation, collaboration, and data literacy.

 

A huge thanks to Oz Benamram, Stephanie Cano, and Simpson Thacher & Bartlett LLP for hosting, and to event partners DraftWise, Centari, Thomson Reuters, and vLex for helping to bring this event to life. And of course, a tremendous amount of gratitude to all of our speakers and spotlight firms!

 

We hope you will consider joining us as we continue the discussion next year at the Legal AI Pathfinder’s Assembly—Chicago this March 6-7.


More details can be found here: Legal AI Pathfinder’s Assembly (insidepractice.com)