Inflection Point: The Legal AI Revolution – Part 2

Inside Practice • September 18, 2023

Bold predictions, cautionary advice, and guidance

This past Thursday, September 14, Inside Practice LLC hosted a full-day virtual conference – Inflection Point: The Legal AI Revolution – Part 2 (following up on a June 20 conference of the same name) – where more than 150 attendees tuned in for an in-depth exploration of how law firms are navigating opportunity and risk, and of the implications of rapidly maturing AI tools for the business and practice of law.

Attendees heard from more than 20 legal industry leaders and luminaries as they addressed themes ranging from justifying capital investment to unifying talent and toolsets, developing use policies and governance, and weighing client perspectives, among many other strategic considerations.


The conference began with a table-setting keynote as Danielle Benecke (Baker McKenzie) helped to frame where we are within the AI hype cycle while busting some myths and unpacking the transformational impact of AI on the profession. Danielle focused on strategy, capability, and risk as she highlighted her own firm's story arc and technology progression – one that led the firm to place bets years ago on a coming paradigm shift that is impossible to ignore today.


From there, the conference pivoted to lively panel discussions, beginning with a deeper dive into AI's business implications. Led by Ken Crutchfield (Wolters Kluwer) and featuring a distinguished panel comprising David Cunningham (Reed Smith LLP), David Wang (Wilson Sonsini Goodrich & Rosati), and Nicole Bradick (Theory and Principle), the discussion tackled the timely topic of justifying investments in AI, emphasizing the precarious balance firms must strike between seizing AI's potential and managing the many inherent risks for early adopters.


Next up, attendees heard from members of Troutman Pepper's Generative AI Task Force – Alison Grounds, Christopher Forstner, Jessica Kozlov Davis, and Andrew Medeiros – who shared the firm's approach to consolidating and leveraging the collective skills and expertise of its attorneys and allied professionals, unifying talent and resources to help pilot, vet, guide, and educate around both the internal, operational and the external, client-facing and advisory challenges and opportunities that generative AI brings to bear.


AI's trustworthiness was another central theme of the conference as a panel of legal luminaries including David Perez (Perkins Coie LLP), Jacqueline Schafer (Clearbrief), Scott Rechtschaffen (Littler), Natalie Pierce (Gunderson Dettmer), and Christian Lang (Lega) discussed building robust policies and frameworks to harness the power of generative AI safely, bridging the gap between risk, security, and tech innovation.

The conference also offered attendees a glimpse of two standout practical product demos. James Ding (DraftWise) demonstrated how AI is being used to power data-driven decisions – giving rise to a new era of contracts and negotiations. Samuel Smolkin (Office & Dragons) showcased an innovative solution for managing revisions to hundreds of documents seamlessly.


The focus then shifted to corporate legal departments as Ed Walters (vLex) moderated a candid and insightful discussion, debunking myths and exploring real-world applications of AI within in-house legal teams with a lively panel featuring Jason Barnwell (Microsoft), Jenn McCarron (Netflix and CLOC, the Corporate Legal Operations Consortium), and Joël Roy (Novartis).


The conference concluded with a fascinating discussion and debate examining the long-term viability of the traditional law firm model amidst the rapid emergence and proliferation of generative AI technologies and the concurrent rise of new players in the legal services market.

Led by the day's conference chairperson, Anand Upadhye (Modern Lawyer Strategies), this closing discussion energized everyone in attendance as Jae Um (Six Parsecs), Jennifer Leonard (Creative Lawyers) and Jordan Furlong (Law21) delved deep into the challenges and opportunities that await traditional law firm models.


Inflection Point: The Legal AI Revolution – Part 2 was another fantastic series of discussions with bold predictions, cautionary advice, and guidance. Hearing from our esteemed speakers and panelists made it evident that the merging of law and AI presents both remarkable opportunities and daunting challenges.


Recurring themes surfaced throughout the day: the necessity of bridging technological prowess with human intuition, the importance of setting ethical and operational guidelines, and the impending evolution of traditional law firm models in an AI-driven future – all underscoring the need for adaptability, forward-thinking leadership, and informed decision-making.

This event provided attendees with a clearer understanding of the emerging AI landscape and the tools and frameworks necessary to navigate inevitable change with confidence and strategic foresight. 


The full event replay is available on demand. 


Next, we turn our attention to Legal AI Pathfinder’s Assembly: New York.


The Legal AI Pathfinder's Assembly is a peer-to-peer forum and platform for law firms to share progress, experience, observations, and experimentation in an intimate setting as we collectively grapple with the development of strategic roadmaps that will help guide proper investment, education, and deployment of emerging AI tools and tech. 


Hosted at the New York offices of Simpson Thacher, the Legal AI Pathfinder's Assembly will showcase several law firms that are leaning in, proactively experimenting, and investing in the strategic integration of AI into both the firm's practice and business – giving attendees an opportunity to assess their own firm's readiness. At the same time, our peers will offer an introspective look into the demanding yet profoundly rewarding process of identifying their own North Star and charting their strategic course forward.


Seating is limited. Reserve your spot today! Early bird: confirm your place by October 11 and save $200.
