Blog

  • Confidentiality and AI Tools: Can You Upload Client Contracts to AI?

    Forty-four percent of legal tasks could be automated by AI, according to a Goldman Sachs analysis. But before you upload your first client contract to an AI tool, there is a question you need to answer: does doing so violate your duty of confidentiality under Model Rule 1.6?

    This is not a hypothetical concern. It is the single biggest practical barrier preventing solo and small firm lawyers from adopting AI contract review. The answer depends entirely on which tool you use, how that tool handles your data, and whether you have done your homework before hitting “upload.”

    This article gives you the exact framework to evaluate any AI tool’s data handling practices, a comparison of how the major platforms stack up, and actionable steps to protect client confidentiality while still benefiting from AI-assisted review. Try Clause Labs Free to see how a purpose-built legal AI handles data security.

    What Model Rule 1.6 Actually Requires

    ABA Model Rule 1.6(a) states that “a lawyer shall not reveal information relating to the representation of a client” unless the client gives informed consent. Rule 1.6(c) adds a second obligation: “a lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

    When you upload a client’s contract to an AI tool, you are sharing client information with a third-party service. That is an act of disclosure. Whether that disclosure is permissible depends on whether your “efforts to prevent” unauthorized access are “reasonable.”

    The critical word is “reasonable.” You are not required to guarantee absolute security. You are required to exercise the same diligence you would when choosing any technology vendor that handles client data, such as cloud storage, email, or practice management software.

    What ABA Formal Opinion 512 Says

    In July 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, the first comprehensive ethics guidance on lawyers using generative AI tools. The opinion addresses confidentiality directly and is worth reading in full.

    Key requirements from Opinion 512:

    • Know how the tool uses data. You must understand whether the AI tool retains your inputs, uses them for model training, or shares them with third parties. Ignorance is not a defense.
    • Implement adequate safeguards. You must ensure data processed by the AI tool is secure and not susceptible to unwitting or unauthorized disclosure.
    • Get informed consent for self-learning tools. If the AI tool trains on your inputs (meaning your client’s data improves the vendor’s AI), you need the client’s informed consent before using it. Boilerplate consent in engagement letters is not sufficient.
    • Evaluate the vendor. Your obligation to vet third-party contractors extends to AI tool vendors. Investigate reliability, security measures, and policies.

    The practical takeaway: you can use AI tools for client contracts, but you must do your due diligence first. The standard is similar to what you would apply when evaluating a cloud-based practice management system or document storage provider.

    The Data Handling Spectrum: Not All AI Tools Are Equal

    AI tools handle client data on a spectrum from dangerous to acceptable. Before you upload anything, place the tool on this scale.

    Dangerous: Do Not Use for Client Data

    Tools at this end of the spectrum share some or all of these characteristics:

    • The tool trains on user-uploaded data, meaning your client’s contract improves their AI model
    • No clear data retention policy, or data is retained indefinitely
    • No encryption at rest
    • Terms of service allow sharing data with third parties
    • No data processing agreement available

    The most common example: free-tier consumer AI chatbots with default training-on-inputs enabled. According to OpenAI’s own policies, free-tier ChatGPT conversations may be used to improve their models unless the user explicitly opts out. That means your client’s confidential contract language could end up influencing outputs for other users.

    Caution: Review Carefully Before Use

    These tools have better security but require careful configuration:

    • Data retained for a limited period (30-90 days)
    • Encryption in transit but unclear at-rest encryption
    • Training opt-out available, but training on your inputs is enabled by default
    • Privacy policy exists but is vague or difficult to interpret
    • No SOC 2 certification

    Many general-purpose AI platforms with “business” tiers fall here. They may be acceptable with proper configuration, but you need to verify the settings and understand the defaults.

    Acceptable: Designed for Confidential Data

    Tools built for regulated industries typically offer:

    • Zero data retention or user-configurable retention periods
    • Explicit commitment to never train on user-uploaded documents
    • Encryption in transit (TLS 1.2+) and at rest (AES-256)
    • SOC 2 Type II certification or equivalent
    • Clear, detailed privacy policy written for professional users
    • Data processing agreement available on request
    • Breach notification commitments

    Purpose-built legal AI tools like Clause Labs and enterprise-tier offerings from major AI providers typically meet these standards.

    How Specific AI Tools Handle Client Data

    Here is how the most commonly used AI tools compare on the factors that matter for confidentiality compliance.

    Factor | Free-Tier ChatGPT | ChatGPT Enterprise/API | Claude (Anthropic) API | Purpose-Built Legal AI (e.g., Clause Labs)
    Trains on inputs? | Default: Yes (opt-out available) | No | No (API) | No
    Data retention | Conversations stored | Configurable (min 90 days) | Configurable | Minimal / configurable
    Encryption at rest | AES-256 | AES-256 | Yes | AES-256
    Encryption in transit | TLS 1.2+ | TLS 1.2+ | TLS 1.2+ | TLS 1.3
    SOC 2 certified | No (consumer tier) | Yes | Yes (API) | On roadmap
    DPA available | No (consumer tier) | Yes | Yes | Yes
    Zero data retention option | No | Yes (ZDR API) | Yes | Yes
    Suitable for client data? | No | Yes, with configuration | Yes, with configuration | Yes

    The ChatGPT Problem

    Many lawyers use free-tier ChatGPT for contract-related tasks without understanding the implications. By default, OpenAI may use conversations to improve their models. You can opt out through settings, but the consumer product was not designed for handling confidential client data.

    ChatGPT Enterprise and the API are different. OpenAI explicitly states that they do not train on Enterprise or API inputs. But the Enterprise tier costs significantly more, and you still need to configure data retention settings appropriately.

    What to Look for in Any Tool

    The tool’s marketing page is not sufficient. Read the actual terms of service, privacy policy, and data processing agreement. If the vendor cannot clearly answer how they handle your data, that is your answer.

    The 8-Point Data Security Checklist

    Before uploading any client document to an AI tool, verify these eight factors. If the tool cannot answer all eight clearly, do not use it for client data.

    1. Data Retention: Does the tool store your documents? For how long? Can you delete them on demand?

    2. Training Data Policy: Does the tool use your uploads to train or improve its AI models? Is the default opt-in or opt-out?

    3. Encryption: Is data encrypted in transit (minimum TLS 1.2) and at rest (minimum AES-256)?

    4. Access Controls: Who at the AI company can access your uploaded data? Under what circumstances?

    5. Security Certification: Has the tool been independently audited? SOC 2 Type II is the standard for SaaS products handling sensitive data.

    6. Data Processing Agreement: Will the vendor sign a DPA? This is standard for any tool handling regulated data.

    7. Sub-Processors: Does the vendor route your data through third-party processors? If so, which ones, and what are their security standards?

    8. Breach Notification: Will the vendor notify you of a data breach? Within what timeframe? (72 hours is the standard under most regulatory frameworks.)

    Print this checklist. Run every AI tool through it before use. Document your findings. That documentation is your evidence of “reasonable efforts” under Rule 1.6(c) if questions ever arise.
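    The eight factors above reduce to a simple go/no-go record. Here is a minimal sketch in Python; the field names and the example vendor answers are hypothetical stand-ins, so substitute whatever you actually find in the vendor's terms of service, privacy policy, and DPA:

```python
# Hypothetical field names for the eight checklist factors. An answer is
# True only when the vendor's documentation clearly satisfies the factor;
# an unknown or missing answer counts as a failure.
CHECKLIST = [
    "data_retention_policy_clear",
    "no_training_on_uploads",
    "encryption_in_transit_and_at_rest",
    "access_controls_documented",
    "independent_security_audit",      # e.g., SOC 2 Type II
    "dpa_available",
    "sub_processors_disclosed",
    "breach_notification_commitment",  # ideally 72 hours or less
]

def suitable_for_client_data(vendor_answers: dict) -> bool:
    """All eight factors must be clearly satisfied; one unknown means 'no'."""
    return all(vendor_answers.get(item) is True for item in CHECKLIST)

# Hypothetical example: a vendor that cannot name its sub-processors fails.
example = {item: True for item in CHECKLIST}
example["sub_processors_disclosed"] = False
print(suitable_for_client_data(example))  # False
```

    The strict `all(...)` test mirrors the rule stated above: if the tool cannot answer all eight clearly, do not use it for client data.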

    Practical Steps to Protect Confidentiality

    Choosing a secure tool is necessary but not sufficient. These additional practices reduce risk further.

    Anonymize When Possible

    Before uploading, consider whether you can remove or replace identifying information. Replace party names with “Party A” and “Party B.” Remove specific addresses, dollar amounts, or other details that are not relevant to the clause-level analysis you need. Most AI contract review tools analyze clause structure and risk patterns. They do not need to know who the parties are to identify a one-sided indemnification clause.

    This is not always practical, especially for full-contract risk analysis. But for targeted clause review, anonymization adds a layer of protection at minimal cost.
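    For targeted clause review, the find-and-replace step can be scripted. A minimal sketch, assuming you supply the party-name map yourself; the dollar-amount pattern is illustrative only, and no script replaces a careful manual pass before upload:

```python
import re

def anonymize(text: str, party_map: dict) -> str:
    # Replace known party names first, longest names first, so a full name
    # like "Acme Corp" is replaced before a shorter substring could match.
    for name in sorted(party_map, key=len, reverse=True):
        text = re.sub(re.escape(name), party_map[name], text)
    # Mask dollar amounts like $1,500,000 or $25,000.00.
    text = re.sub(r"\$[\d,]+(?:\.\d{2})?", "[AMOUNT]", text)
    return text

clause = "Acme Corp shall indemnify Widget LLC up to $1,500,000."
print(anonymize(clause, {"Acme Corp": "Party A", "Widget LLC": "Party B"}))
# Party A shall indemnify Party B up to [AMOUNT].
```

    The clause-level risk analysis survives intact: the tool still sees the indemnification structure without learning who the parties are.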

    Check Your Engagement Letter

    Does your standard engagement letter address AI tool usage? If not, it should. ABA Formal Opinion 512 recommends that lawyers obtain informed consent before using client data in AI tools, and notes that boilerplate provisions are inadequate for self-learning tools.

    For tools that do not train on inputs, a clear disclosure in your engagement letter is sufficient in most jurisdictions. For tools that do train on inputs, you need explicit, informed consent that explains the risk. For a deeper look at how disclosure requirements vary by state, see our state-by-state guide to AI disclosure requirements.

    Review Your Malpractice Insurance

    Does your professional liability policy cover AI-related data incidents? Most standard policies cover technology-related errors, but the intersection of AI tools and confidentiality is new enough that coverage may be uncertain. Contact your carrier and get clarity in writing.

    Document Your Due Diligence

    Keep a record of your AI tool evaluation process. Save the vendor’s privacy policy, terms of service, and DPA. Note the date you reviewed them. This documentation demonstrates your compliance with Rule 1.6(c)’s “reasonable efforts” standard.

    What to Tell Clients About Data Security

    Proactive communication about your AI data handling builds trust and reduces the risk of complaints.

    Sample Engagement Letter Language

    Standard disclosure (for tools that do not train on inputs):

    “Our firm uses AI-powered contract review tools as part of our quality assurance process. These tools assist with clause identification, risk analysis, and missing provision detection. All AI-generated analysis is reviewed, verified, and supplemented by attorney judgment before inclusion in any client deliverable. Our AI tools use encryption at rest and in transit, do not train on client data, and comply with industry security standards.”

    Detailed disclosure (for firms wanting maximum protection):

    “Our firm uses [Tool Name], an AI-assisted contract review platform, to enhance the quality and efficiency of our contract review services. This tool analyzes contract language to identify clauses, assess risk levels, and detect missing provisions. Your documents are encrypted during transmission and storage. The tool does not retain your documents after analysis is complete and does not use your data to train its AI models. A licensed attorney reviews all AI-generated analysis before it is included in any work product delivered to you. You may request that we not use AI tools in your matter at any time.”

    For guidance on how to ethically integrate AI into your practice more broadly, see our guide on how to use AI without risking your license.

    Some practitioners go beyond disclosure and seek explicit client consent for AI use. This approach offers maximum protection but adds friction.

    When explicit consent is appropriate:

    • Matters involving trade secrets or highly sensitive IP
    • Clients in regulated industries (healthcare, financial services)
    • Jurisdictions that mandate AI disclosure (check your state’s requirements)
    • Engagement letters that specifically restrict technology use

    When standard disclosure is sufficient:

    • Routine contract review using tools that do not train on inputs
    • Tools with zero data retention policies
    • Jurisdictions with no specific AI disclosure requirements
    • Standard commercial agreements without heightened sensitivity

    The trend is toward more disclosure, not less. As of 2026, more state bars are issuing guidance that favors transparency about AI use. According to a Justia survey of all 50 states, the number of states with specific AI ethics guidance has increased significantly since 2023.

    Attorney-Client Privilege and AI

    A separate but related question: does uploading a client document to an AI tool waive attorney-client privilege?

    The short answer: probably not, if the tool is properly secured. Courts have generally held that sharing privileged information with a service provider does not waive the privilege, provided the disclosure is necessary for the service and the provider maintains confidentiality. This is sometimes called the “Kovel doctrine” (after United States v. Kovel, 296 F.2d 918 (2d Cir. 1961)), which protects communications shared with agents necessary to facilitate legal representation.

    AI tools are analogous to other technology vendors, such as e-discovery platforms, cloud storage, and document management systems, that routinely handle privileged materials without waiving privilege. The key is ensuring the vendor has appropriate confidentiality protections in place.

    However, this area of law is evolving. If you are working with exceptionally sensitive privileged materials, consult with a legal ethics specialist in your jurisdiction before proceeding. Reviewing your approach against established contract red flag frameworks can also help you develop a consistent, defensible process.

    Frequently Asked Questions

    Can I use ChatGPT for client contracts?

    Not the free consumer version, at least not without significant caveats. Free-tier ChatGPT may train on your inputs by default, lacks SOC 2 certification for the consumer product, and does not offer a data processing agreement. ChatGPT Enterprise and API tiers are different: they do not train on inputs and offer configurable data retention. If you use the Enterprise or API tier with appropriate settings, it can be acceptable. But a purpose-built legal AI tool like Clause Labs is designed from the ground up for handling confidential legal documents.

    Is uploading to AI the same as uploading to cloud storage?

    The analysis under Rule 1.6 is similar. Both involve sharing client data with a third-party service provider. The key differences: cloud storage typically stores data without processing it, while AI tools process the content and may use it for training. The “reasonable efforts” standard applies to both, but AI tools require additional diligence around training data policies and model improvement practices.

    What if my client’s contract contains trade secrets?

    Apply heightened scrutiny. Consider whether the AI tool’s data handling is sufficient for the sensitivity level. Anonymize where possible. Use tools with zero data retention. Get explicit informed consent from the client. Document everything. For trade secrets specifically, any inadvertent disclosure could destroy the trade secret status entirely, so the stakes are higher than for ordinary confidential information.

    Does attorney-client privilege protect AI-processed documents?

    Most likely, yes, provided the AI tool vendor maintains appropriate confidentiality protections. The principle is the same as with other technology service providers. But this area of law is still developing, and no court has issued a definitive ruling on AI tools specifically. Maintain strong vendor confidentiality agreements as a safeguard.

    What if the AI tool has a data breach?

    Your obligation under Rule 1.6(c) is to take “reasonable efforts” to prevent unauthorized disclosure, not to guarantee it never happens. If you chose a reputable tool with appropriate security measures and documented your evaluation process, you have met the standard even if a breach occurs. However, you should have a response plan: notify affected clients promptly, assess the scope of exposure, and consult your malpractice carrier. A vendor’s breach notification timeline (ideally 72 hours or less) gives you the information you need to respond.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

  • How to Supervise AI Outputs: A Practical Framework for Contract Lawyers

    Every ethics article, bar opinion, and CLE presentation on legal AI says the same thing: “Lawyers must review and supervise AI output.” But almost none of them explain how.

    How do you review a 12-page risk analysis that an AI generated in 30 seconds? Which parts do you spot-check? How do you catch the errors AI is most likely to make? How much time should supervision add to each review? When is a quick scan sufficient, and when do you need a deep dive?

    This article answers those questions with a concrete, repeatable framework — the VERIFY protocol — that turns the abstract obligation into a 10-15 minute daily habit. Whether you use Clause Labs, Spellbook, LegalOn, or any other AI contract review tool, this framework keeps you compliant with ABA Formal Opinion 512 and Model Rule 5.3 — and more importantly, it keeps your clients protected.

    Why “Review AI Output” Isn’t Enough Guidance

    The obligation is clear. Rule 5.3 of the ABA Model Rules requires lawyers to supervise non-lawyer assistants. Formal Opinion 512 explicitly extends this to AI tools: lawyers must independently verify AI-generated content before using it in client work. “Uncritical reliance on content created by a GAI tool is risky and almost certainly malpractice.”

    But the guidance stops there. It tells you that you must supervise, not how you should supervise. The result is predictable: some lawyers spend 2 hours re-reviewing what the AI analyzed in 60 seconds (defeating the efficiency purpose), while others glance at the summary and call it supervised (defeating the quality purpose).

    Neither approach works. What you need is a structured protocol calibrated to the complexity of the contract and the risk level of the output — one that takes 10-15 minutes for a standard agreement and protects you in a malpractice or bar inquiry.

    According to the Thomson Reuters 2025 Future of Professionals Report, only 40% of law firms provide any form of AI training to staff, and just 20% measure return on investment for AI tools. The ABA’s 2024 TechReport reinforces the concern: accuracy (74.7%) and reliability (56.3%) are the top two worries among lawyers who have considered AI. A defined supervision protocol addresses both gaps: it is training encoded into workflow, and it is the quality control that justifies the investment.

    The VERIFY Framework for AI Output Supervision

    VERIFY is a six-step protocol designed for daily use. Each letter corresponds to a specific supervision task. The full framework takes 10-15 minutes per standard contract — a fraction of the time saved by using AI in the first place.

    V — Validate the Source Document

    Before evaluating what the AI found, confirm it analyzed the right thing.

    Check these items:

    • Correct document analyzed. This sounds obvious, but when you’re uploading multiple contracts in a day, version mix-ups happen. Verify the parties, date, and title match the matter you’re working on.
    • Complete document analyzed. Check page count. Did the AI process all pages, including exhibits, schedules, and attachments? Many AI tools process the main body but skip exhibits — which often contain the most consequential terms (pricing schedules, SLAs, data processing addenda).
    • Correct contract type identified. If you uploaded an MSA and the AI classified it as a consulting agreement, every downstream analysis will be skewed. Check the classification in the first 30 seconds.
    • Quick coherence check. Does the AI’s summary match what you see when you skim the first two pages? If the summary mentions parties or terms that don’t appear in the document, something went wrong in processing.

    Time required: 1-2 minutes.

    E — Evaluate Clause Identification

    AI contract review tools identify and categorize every clause in the document. This is usually their strongest capability — but it’s not infallible.

    Spot-check 3-5 clause identifications:

    • Pick the 3 most important clauses for this contract type (for an NDA: confidential information definition, exclusions, term; for an MSA: liability cap, indemnification, termination; for an employment agreement: non-compete, IP assignment, severance)
    • Read the actual contract text the AI identified for each clause
    • Confirm the classification is correct. Is what the AI labeled “indemnification” actually an indemnification clause, or is it a warranty provision with indemnification-like language?
    • Check clause boundaries. Did the AI capture the complete clause, or did it cut it off? Did it incorrectly combine two separate provisions?

    Scan for completeness:

    • Quickly scroll through the AI’s clause list. Do you see all the major sections you’d expect for this contract type?
    • If the AI identified 15 clauses in a 30-page MSA, something is likely missing — a typical MSA has 25-40 distinct provisions.

    Time required: 3-4 minutes.
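    The completeness scan can be phrased as a rough clause-density heuristic. A sketch with assumed per-type ranges: the MSA range comes from the 25-40 figure above, while the NDA and employment ranges are illustrative placeholders you should replace with counts from your own forms:

```python
# Expected clause-count ranges per contract type. Only the MSA range is
# taken from the text above (25-40 distinct provisions); the others are
# assumptions for illustration.
EXPECTED_CLAUSE_RANGE = {
    "nda": (8, 15),
    "msa": (25, 40),
    "employment": (15, 25),
}

def completeness_flag(contract_type: str, clauses_found: int) -> str:
    low, high = EXPECTED_CLAUSE_RANGE[contract_type]
    if clauses_found < low:
        return "investigate: fewer clauses than expected; check exhibits"
    if clauses_found > high:
        return "investigate: more clauses than expected; check boundaries"
    return "within expected range"

print(completeness_flag("msa", 15))
# investigate: fewer clauses than expected; check exhibits
```

    A flag here is not a conclusion, just a prompt to look harder before moving to Step R.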

    R — Review Risk Assessments

    This is where your professional judgment matters most. AI can identify that a clause exists and rate its risk level. Only you can determine whether that risk rating is right for this client, in this deal.

    For each flagged risk (Critical and High priority):

    • Read the actual contract language — not just the AI’s summary. Verify the AI characterized the provision accurately.
    • Evaluate the risk level. Do you agree with Critical/High/Medium/Low? AI tools tend toward conservative ratings (flagging standard market provisions as “Medium” risk). A risk that’s “High” in the abstract may be “Low” for a well-capitalized client with strong bargaining position.
    • Check the explanation. Is the AI’s plain-English description of the risk accurate? Does it correctly identify what makes the clause problematic?
    • Look for deal context the AI doesn’t have: What’s the client’s risk tolerance? What’s the relationship between the parties? Is this a renewal or a first-time deal? What’s the deal value relative to the risk?

    For lower-priority findings:

    • Scan Medium and Low findings for any that should be elevated based on deal context
    • Verify the AI hasn’t missed any risks you’d flag based on your experience

    Time required: 3-5 minutes (scales with contract complexity).

    I — Inspect Missing Clause Findings

    Missing clause detection is one of AI’s most valuable capabilities — and one of its most error-prone. A good AI tool will flag provisions that should be in the contract but aren’t. Your job is to verify the findings.

    For each “missing clause” flag:

    • Confirm it’s actually missing. The clause might exist in a different section, under a different heading, or in an exhibit the AI didn’t process. Check before flagging it in your report.
    • Confirm it’s relevant. Not every standard clause is needed in every contract. A missing data processing addendum is critical for a SaaS agreement but irrelevant for a simple NDA. Apply contract-type context.
    • Check the reverse. Are there provisions that you know should be present (based on the contract type and your practice experience) that the AI didn’t flag as missing? No tool catches everything.

    Time required: 2-3 minutes.

    F — Filter Through Deal Context

    This step is what separates AI-assisted review from AI-dependent review. It’s the application of professional judgment that no tool can replicate.

    Apply business context the AI doesn’t have:

    • Client’s risk tolerance: A risk-aggressive startup will accept terms that a risk-conservative manufacturer won’t. The AI doesn’t know your client’s profile.
    • Party relationship dynamics: A contract with a long-term vendor you trust is different from a first-time engagement with an unknown counterparty — even if the language is identical.
    • Deal economics: A $10,000 vendor agreement warrants different risk tolerance than a $2 million SaaS commitment. The AI doesn’t weigh materiality.
    • Jurisdiction-specific factors: Is this non-compete enforceable in the employee’s state? Does the governing law choice create practical problems? The AI may flag the clause but not evaluate it against your jurisdiction’s standards.
    • Strategic priorities: What does your client care about most? The AI gives you a comprehensive risk map. You need to tell the client which risks matter and which can be accepted.

    Time required: 2-3 minutes (but this is the most valuable 2-3 minutes of the entire review).

    Y — Your Professional Judgment Is Final

    The AI’s output is input to your analysis. It’s not the analysis itself.

    Finalize your review:

    • Add your recommendations: accept, negotiate, reject — for each significant finding
    • Draft (or customize) the client memo, using AI output as a starting point but adding your strategic analysis
    • Sign off on the final work product as your work product
    • Note any areas where you disagree with the AI’s assessment (this is valuable for your own quality tracking)

    Time required: Integrated into your deliverable preparation.

    The Quick-Reference Supervision Checklist

    Print this. Use it for every contract.

    • Correct document analyzed (parties, date, title match)
    • Complete document processed (page count, exhibits included)
    • Contract type correctly identified
    • 3-5 clause identifications spot-checked against source text
    • All Critical/High risk findings reviewed against actual contract language
    • Missing clause findings verified (actually missing, actually relevant)
    • Deal-specific context applied (client profile, relationship, economics, jurisdiction)
    • Professional judgment added (accept/negotiate/reject recommendations)
    • Client-ready deliverable prepared
    • Supervision documented (date, tool used, what was reviewed, what was changed)

    Total time per standard contract: 10-15 minutes (on top of reading the AI report itself).
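    To make the checklist concrete, the six VERIFY steps can be modeled as a supervision record that only counts as complete when every step has produced a checkable artifact. A minimal sketch with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerifyRecord:
    matter: str
    tool: str
    review_date: date
    validated_source: bool = False    # V: right document, complete, right type
    evaluated_clauses: int = 0        # E: number of spot-checked clauses
    reviewed_risks: bool = False      # R: Critical/High read against source text
    inspected_missing: bool = False   # I: missing-clause flags verified
    filtered_context: bool = False    # F: deal context applied
    judgment_notes: str = ""          # Y: disagreements and recommendations

    def complete(self) -> bool:
        # At least 3 spot-checks, per Step E's 3-5 clause guideline.
        return (self.validated_source and self.evaluated_clauses >= 3
                and self.reviewed_risks and self.inspected_missing
                and self.filtered_context and bool(self.judgment_notes))

record = VerifyRecord(matter="M-1042", tool="Clause Labs",
                      review_date=date(2026, 2, 3))
print(record.complete())  # False
```

    The point is not the code; it is that "supervised" becomes a defined state you can document rather than an unrecorded glance.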

    Common AI Errors to Watch For

    Knowing where AI contract review tools tend to fail makes your supervision faster and more targeted.

    Misclassification. The AI labels a clause as one type when it’s actually another. Example: labeling a warranty disclaimer as a limitation of liability. This happens most often with clauses that overlap conceptually (warranties vs. representations, indemnification vs. hold harmless, assignment vs. delegation). A Stanford CodeX analysis of AI contract review tools found that misclassification rates vary significantly by clause type, with complex risk-allocation provisions (indemnification, insurance, liability) being the most frequently misclassified.

    How to catch it: The spot-check in Step E. If the clause label doesn’t match the language, the downstream analysis is unreliable.

    Scope confusion. The AI analyzes only part of a clause, missing qualifiers, exceptions, or carve-outs. Example: flagging an indemnification clause as “one-sided” when there’s a mutual indemnification in the following paragraph.

    How to catch it: Read the full clause text, not just the excerpt the AI highlights. Check the surrounding paragraphs for related provisions.

    Context blindness. The AI flags a risk that’s actually addressed elsewhere in the contract. Example: flagging “no limitation of liability” when there’s a separate Limitation of Liability article two sections later.

    How to catch it: Cross-reference flagged risks against related clauses. If the AI flags missing indemnification, scan the contract for indemnification language that may appear under a different heading.

    False positives. The AI flags standard, market-reasonable provisions as risks. Example: rating a mutual 30-day termination for convenience clause as “Medium Risk” when it’s entirely market-standard.

    How to catch it: Apply your experience and deal context (Step F). If you’ve seen the same provision in 100 contracts and it’s never been an issue, the AI’s risk rating needs adjustment.

    False negatives. The AI misses unusual risks because the language doesn’t match its training patterns. Example: failing to flag a cleverly drafted non-compete buried in a “Restrictive Covenants” section with unusual formatting. According to the National Law Review’s 2026 AI predictions, false negatives remain the most dangerous AI error category because they create a false sense of security.

    How to catch it: The completeness check in Step E. If you expect a provision to be flagged and it’s not, investigate.

    Exhibit blindness. The AI doesn’t analyze attachments, schedules, or incorporated documents. Example: the main agreement looks clean, but the pricing exhibit contains auto-renewal traps and uncapped escalation clauses.

    How to catch it: Validate in Step V that exhibits were processed. If not, review exhibits manually or upload them separately.

    For a broader view of what AI catches versus what it misses, see our guide on how to review contracts for red flags — the manual checklist complements AI-assisted review. And for a comparison of which AI tools produce the most structured (and therefore most supervisable) output, see our AI contract review tools comparison.

    Supervision by Contract Complexity

    Not every contract needs the same level of scrutiny. Calibrate your supervision to the risk.

    Simple Contracts (NDAs, Short Service Agreements)

    • VERIFY time: 5-7 minutes
    • Spot-check: 2-3 clauses
    • Focus areas: Definitions, scope, duration, exclusions
    • Risk level: Low. Standard forms with limited variation.
    • Supervision depth: Quick pass unless AI flags something unusual

    Standard Contracts (Employment Agreements, Vendor Contracts, Consulting Agreements)

    • VERIFY time: 10-15 minutes
    • Spot-check: 5-7 clauses
    • Focus areas: Restrictive covenants, liability allocation, termination provisions, IP ownership
    • Risk level: Medium. More variation, more negotiable terms, more deal-specific context needed.
    • Supervision depth: Standard — review all flagged risks against source text

    Complex Contracts (MSAs, SaaS Agreements, M&A Documents, Commercial Leases)

    • VERIFY time: 20-30 minutes
    • Spot-check: All flagged risks in detail
    • Focus areas: Clause interactions (indemnification + liability cap + insurance), missing provisions, unusual terms, exhibit contents
    • Risk level: High. Significant financial exposure, multiple interdependent provisions.
    • Supervision depth: Deep — cross-reference related clauses, verify exhibit processing, apply extensive deal context

    Documenting Your Supervision

    Documentation serves three purposes: malpractice protection, bar compliance demonstration, and personal quality tracking. As the ABA’s practical checklist for responsible AI use emphasizes, documentation of human oversight is the cornerstone of a defensible AI workflow.

    Why it matters:

    • If a client claims you missed something, your documentation shows what you checked and when
    • If a bar inquiry asks about your AI supervision process, you have a contemporaneous record
    • Over time, your notes reveal patterns — where AI is reliable and where it consistently needs correction

    What to document for each review:

    • Date and time of review
    • AI tool used and version
    • Contract type, parties, and matter identifier
    • Summary of AI findings (major risks, missing clauses, risk score)
    • Your supervision notes: what you spot-checked, what you verified, what you changed
    • Any disagreements with AI output (and your reasoning)
    • Your final recommendations
    • Time spent on supervision

    Template format: A simple spreadsheet or log works. Columns: Date | Matter | Tool | Contract Type | Key Findings | My Changes | Time Spent | Notes. If you’re using Clause Labs at the Professional tier ($149/month), the activity feed and comments features create a built-in audit trail.
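A hedged sketch of that log as a tiny script instead of a spreadsheet — the column names mirror the template above, while the file name, matter number, and sample entry are all hypothetical:

```python
import csv
from pathlib import Path

# Columns mirror the template: Date | Matter | Tool | Contract Type |
# Key Findings | My Changes | Time Spent | Notes
LOG_COLUMNS = ["Date", "Matter", "Tool", "Contract Type",
               "Key Findings", "My Changes", "Time Spent", "Notes"]

def append_review(log_path, entry):
    """Append one supervision record, writing the header row on first use."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_COLUMNS)
        if is_new:
            writer.writeheader()
        writer.writerow(entry)

# Hypothetical entry for illustration only
append_review("supervision_log.csv", {
    "Date": "2026-02-10",
    "Matter": "2026-014",
    "Tool": "Clause Labs v2",
    "Contract Type": "Vendor agreement",
    "Key Findings": "Uncapped indemnity flagged; auto-renewal in pricing exhibit",
    "My Changes": "Recalibrated indemnity risk from High to Medium",
    "Time Spent": "14 min",
    "Notes": "Exhibit B processed; spot-checked 6 clauses",
})
```

Because each row is appended contemporaneously, the file doubles as the dated, matter-by-matter audit trail described above.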

    Training Your Team to Supervise AI

    If you have associates or paralegals, the VERIFY framework scales.

    Training sequence:

    1. Teach the framework. Walk through VERIFY step by step with a real contract. Time: 30 minutes.
    2. Start with simple contracts. Have team members apply VERIFY to NDAs and short agreements. Review their supervision notes initially.
    3. Progress to standard contracts. Expand to employment agreements and vendor contracts once their work on simple documents is consistently accurate.
    4. Review their supervision. For the first month, review your team’s VERIFY notes the same way you’d review their legal work. Are they catching what they should catch? Are they spending appropriate time?
    5. Monthly calibration. Once a month, have the team review the same contract independently — compare AI output, supervision notes, and final recommendations. Identify discrepancies and discuss.

    Key principle: Under Rule 5.1 (supervisory responsibilities), you’re supervising both the AI and the people who supervise the AI. Document the training, and document your oversight of their supervision process. For more on the ethical framework, see our guide on ABA guidelines for AI in legal practice. And for the cautionary tale of what happens when supervision fails entirely, see our analysis of the Mata v. Avianca case and AI hallucination risks.

    Disclosure note: Some jurisdictions require disclosure of AI use to clients — which means your team members need to know when and how to flag AI-assisted work product. See our state-by-state AI disclosure guide for current requirements.

    From Supervision to Competitive Advantage

    Here’s what most ethics-focused articles miss: a well-designed supervision process doesn’t just keep you compliant — it makes you better.

    When you systematically compare your judgment against AI analysis across dozens of contracts, patterns emerge. You learn where AI is consistently right (clause identification, missing provision detection) and where it consistently overreacts or underreacts (risk calibration for specific industries, jurisdiction-specific issues). That pattern recognition compounds over time.

    Firms with the best AI supervision processes will produce faster, more consistent, and higher-quality contract reviews than firms that either avoid AI or use it without supervision. According to Clio’s 2025 report, firms with wide AI adoption are nearly 3x more likely to report revenue growth — and supervision quality is a key differentiator.

    Clause Labs’s structured output — clause-by-clause breakdowns, risk levels, confidence scores, and source text references — is designed specifically to make the VERIFY framework efficient. Start free with 3 reviews per month and apply the framework to your first contract today.

    Want to see what well-structured AI output looks like? Upload any contract to Clause Labs free and walk through the VERIFY framework on a real analysis — 3 free reviews per month, no credit card.

    Frequently Asked Questions

    How much time should supervision add to each review?

    For a standard contract (employment agreement, vendor contract): 10-15 minutes on top of reading the AI report. For simple contracts (NDAs): 5-7 minutes. For complex agreements (MSAs, M&A documents): 20-30 minutes. Even at the high end, total AI-assisted review time (AI processing + human supervision) is a fraction of fully manual review.

    Can a paralegal supervise AI output?

    A paralegal can perform the mechanical steps of the VERIFY framework (document validation, clause spot-checking, missing provision verification). But the professional judgment steps — risk assessment calibration, deal context application, final recommendations — must be performed or directly supervised by a licensed attorney. Under Rule 5.3, you remain responsible for the final work product regardless of who performs the initial supervision.

    What if I disagree with the AI’s assessment?

    Trust your judgment. The AI is an input, not an authority. Document your disagreement and your reasoning — this is actually valuable evidence that you’re exercising supervision rather than rubber-stamping AI output. If you find yourself disagreeing frequently on the same type of issue, it may indicate the AI tool needs calibration for your practice area, or that you’ve identified a genuine limitation of the tool.

    How do I know if I’m supervising enough?

    Two indicators. First, the process test: are you following the VERIFY steps for every contract? If you’re skipping steps, you’re likely under-supervising. Second, the outcome test: when you compare your final deliverable to what the AI produced, are there meaningful differences? If your deliverable is identical to the raw AI output with no additions, changes, or contextual analysis, you’re not adding sufficient professional judgment.

    Does supervision protect me from malpractice?

    A documented supervision process significantly strengthens your defense in a malpractice claim. It demonstrates that you exercised the standard of care expected of a competent attorney — you used technology appropriately, verified its output, applied professional judgment, and documented your process. No process eliminates malpractice risk entirely, but documented supervision under a structured framework like VERIFY puts you in the strongest possible position.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

  • ABA Guidelines on AI in Legal Practice: What Solo Lawyers Need to Know

    ABA Guidelines on AI in Legal Practice: What Solo Lawyers Need to Know

    The ABA isn’t telling you not to use AI. It’s telling you how to use it without risking your license.

    On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility released Formal Opinion 512 — the first comprehensive ethics guidance on lawyers’ use of generative AI. The opinion runs 17 pages and touches six Model Rules, but the practical takeaway for solo and small firm lawyers fits on an index card: understand your tools, protect client data, verify everything, bill honestly, and document your process.

    That sounds straightforward. The details matter, though, and many solo practitioners are either overcautious (avoiding AI entirely because of Mata v. Avianca fears) or undercautious (using ChatGPT on client matters without evaluating data handling). According to the ABA’s 2024 TechReport, solo practitioners have the lowest AI adoption rate at 17.7% — well below the 30.2% average across all firm sizes. Meanwhile, Clio’s 2025 data shows firms with AI adoption are nearly 3x more likely to report revenue growth.

    This article distills what the ABA has actually said into practical, daily-use guidance for solo lawyers. Try Clause Labs free — it’s designed from the ground up for ABA-compliant contract review.

    Timeline: What the ABA Has Said About AI

    The ABA’s engagement with legal technology didn’t start with ChatGPT. Understanding the timeline helps you see where the guidance is heading.

    2012 — Comment [8] to Model Rule 1.1. The ABA added technology competence to the duty of competence: lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” This amendment — adopted by 40+ states — is the foundation for every AI ethics obligation that followed.

    2019 — ABA Resolution 112. Addressed AI and access to justice. Urged courts and practitioners to consider AI’s potential to improve legal service delivery while maintaining ethical standards.

    2023 — ABA Resolution 604. Adopted at the Midyear Meeting, Resolution 604 called on organizations designing AI systems to ensure human authority, oversight, and control; accountability for consequences; and transparency in design and risk documentation.

    July 2024 — Formal Opinion 512. The main event. First comprehensive ethics guidance on lawyers’ use of generative AI. Addresses competence, confidentiality, communication, fees, candor, and supervision. This is the document you need to know.

    December 2025 — ABA AI Task Force Year 2 Report. The Task Force on Law and Artificial Intelligence released its final report, concluding that AI has “moved from experiment to infrastructure” for the legal profession. The report catalogs dozens of state bar opinions, court rules, and emerging best practices issued since Formal Opinion 512.

    The 5 Model Rules That Apply to Your AI Use

    Formal Opinion 512 organizes its guidance around six areas of ethical concern. For solo transactional lawyers — who rarely file court documents but regularly handle client confidential information — five rules are directly relevant to daily practice.

    Rule 1.1 — Competence: You Must Understand Your Tools

    What it requires: Comment [8] to Rule 1.1 mandates that lawyers understand the “benefits and risks associated with relevant technology.” Formal Opinion 512 extends this to AI: you must have a “reasonable understanding of the capabilities and limitations” of any generative AI tool you use.

    What “reasonable understanding” means for solo lawyers:

    You don’t need to understand transformer architecture or how large language models generate text. You do need to know:

    • What the tool does and what it doesn’t do (contract review vs. legal research vs. drafting)
    • How accurate it is for your use case (and where it tends to fail)
    • How it handles data you upload (retention, training, encryption)
    • The type of output it generates (structured analysis vs. free-text responses)
    • What its limitations are (jurisdiction awareness, clause identification accuracy, exhibit handling)

    Practical compliance steps:

    1. Before using any AI tool on client matters, use it on a non-client document first. Run a contract you’ve already reviewed manually through the AI and compare results.
    2. Read the tool’s documentation, privacy policy, and terms of service.
    3. Take at least one CLE on AI in legal practice annually. New York now requires two AI-specific CLE credits — expect other states to follow.
    4. Subscribe to at least one source covering AI in legal practice. LawNext by Bob Ambrogi and the ABA Law Technology Today are free and excellent.

    Rule 1.4 — Communication: Tell Your Clients

    What it requires: Keep clients reasonably informed about “the means by which the client’s objectives are to be accomplished.” When AI is one of those means, communication obligations are triggered.

    Formal Opinion 512’s critical clarification: Boilerplate consent in engagement letters is not adequate when it comes to sharing client confidential information with third-party AI tools. You need informed, specific consent that tells clients what data you’re sharing, with what tool, and why.

    Practical compliance steps:

    1. Add an AI disclosure section to your standard engagement letter. (See our state-by-state disclosure guide for templates scaled to your jurisdiction’s requirements.)
    2. Be specific about which tools you use and what data they access.
    3. If you change your AI toolset mid-engagement, notify affected clients.
    4. Provide clients the option to opt out of AI-assisted work (and explain the cost/time implications of opting out).

    Rule 1.5 — Fees: Bill Honestly for AI-Assisted Work

    What it requires: Charge reasonable fees. Formal Opinion 512 addresses two specific AI billing issues.

    You may not bill for general AI learning time. If you spend 10 hours learning to use an AI contract review tool, that cost is your overhead — not billable to any specific client. The exception: if a client specifically requests you use a particular AI tool for their matter, learning time for that specific tool may be billable.

    Adjust your fee structure to reflect efficiency gains. If AI reduces your contract review time from 3 hours to 45 minutes, billing 3 hours of work is ethically problematic. This doesn’t mean you must reduce your fees proportionally — value-based pricing, flat fees, and portfolio pricing are all legitimate approaches. But billing by the hour for AI-assisted work that took a fraction of the pre-AI time raises Rule 1.5 concerns.

    The opportunity: AI enables flat-fee contract review that’s profitable for you and predictable for clients. A flat fee of $350-750 per contract review (depending on complexity), where AI does the first pass and you provide the judgment and client communication, can be more profitable than hourly billing at $350/hour — and clients prefer the predictability.
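The arithmetic behind that claim can be checked directly. The figures below are the article's own illustrative numbers (a $550 flat fee is an assumed midpoint of the $350-750 range), not real pricing data:

```python
# Illustrative figures from the discussion above -- not real pricing data.
hourly_rate = 350          # $/hour
ai_assisted_hours = 0.75   # 45 minutes: AI first pass plus attorney judgment
flat_fee = 550             # assumed midpoint of the $350-750 range

hourly_revenue = hourly_rate * ai_assisted_hours   # honest hourly billing
effective_rate = flat_fee / ai_assisted_hours      # flat fee per hour worked

print(f"Hourly billing for the AI-assisted review: ${hourly_revenue:.2f}")
print(f"Flat-fee effective rate: ${effective_rate:.0f}/hour")
# A $550 flat fee for 45 minutes of work yields an effective rate of
# roughly $733/hour -- well above billing the same review at $350/hour.
```

The same review billed honestly by the hour would bring in $262.50, so the flat fee is both more profitable for the lawyer and more predictable for the client.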

    Rule 1.6 — Confidentiality: Protect Client Data in AI Tools

    What it requires: Rule 1.6(c) mandates “reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to” client information. Uploading a client contract to a third-party AI tool is sharing client information with a third party.

    The data handling evaluation: Before using any AI tool on client documents, verify:

    • Data retention: Does the tool store your documents? For how long? Can you delete them?
    • Training policy: Does the tool train its AI models on your uploads?
    • Encryption: Data encrypted in transit (TLS) AND at rest (AES-256)?
    • Access controls: Who at the AI company can see your data?
    • SOC 2 certification: Has the tool been independently audited?
    • Sub-processors: Does the vendor share your data with third parties?

    For a detailed evaluation framework and tool-by-tool comparison, see our guide on client confidentiality and AI tools.

    The practical difference between tools: Free ChatGPT may train on your inputs by default. Enterprise ChatGPT and API access do not. Purpose-built legal AI tools like Clause Labs are designed with no-retention policies and legal-specific security standards. The tool you choose determines whether you’re compliant.

    Rule 5.3 — Supervision: AI Is Your Non-Lawyer Assistant

    What it requires: Supervise AI like you’d supervise a paralegal whose work you’re responsible for. The work product is yours. If the AI makes an error that harms a client, you bear the responsibility — not the AI vendor.

    What supervision looks like in practice:

    • Review every output before using it in a client deliverable
    • Spot-check clause identifications against the actual contract language
    • Verify risk assessments by reading the flagged provisions yourself
    • Apply deal context that AI doesn’t have (client’s risk tolerance, relationship dynamics, business objectives)
    • Document your review process (date, what you checked, what you changed)

    For a complete, repeatable supervision protocol, see our VERIFY framework for supervising AI outputs.

    ABA Formal Opinion 512: The Key Provisions

    Beyond the Model Rules framework, Formal Opinion 512 makes several specific pronouncements worth flagging.

    On ongoing competence as AI evolves: Because AI tools change frequently, the competence obligation is ongoing. Lawyers must “periodically update” their understanding of tools they use. A competence evaluation from six months ago may be outdated.

    On candor toward the tribunal (Rules 3.1 and 3.3): While less relevant for transactional lawyers, this section addresses the Mata v. Avianca scenario directly. Lawyers must verify all AI-generated legal citations and arguments. Submitting AI-generated content without verification violates candor obligations.

    On the distinction between AI types: The opinion acknowledges that not all AI tools pose the same risks. General-purpose chatbots (ChatGPT, Claude) present different risk profiles than purpose-built legal tools. The competence and supervision obligations scale with the risk level of the specific tool.

    What the ABA AI Task Force Recommends

    The ABA’s Task Force on Law and Artificial Intelligence released its Year 2 report in December 2025, assessing how AI is reshaping the profession. Key recommendations relevant to solo practitioners:

    Shift from “whether” to “how.” The Task Force concludes that the question is no longer whether lawyers will use AI but how they’ll govern and integrate it. Firms that don’t develop AI policies will fall behind competitively and ethically.

    Develop firm-level AI policies. Even solo practitioners should have a written AI policy covering: which tools are approved, what data can be uploaded, what supervision steps are required, and how AI use is documented. The ABA published a practical checklist for responsible AI use as a starting point.

    Engage in AI-specific CLE. The Task Force supports mandatory AI competence requirements for lawyers using AI tools. Several states have already implemented CLE requirements.

    Monitor evolving state guidance. Since Formal Opinion 512, dozens of state bars have issued their own opinions. Many align with the ABA framework, but some add state-specific requirements. Keep current with your state.

    The Solo Lawyer’s ABA Compliance Checklist

    Here’s your practical, print-it-and-tape-it-to-your-monitor checklist.

    Before You Start Using Any AI Tool:
    – Understand how the tool works, what it does, and its limitations (Rule 1.1)
    – Evaluate the tool’s data handling: retention, training, encryption, certifications (Rule 1.6)
    – Test the tool on non-client work to assess accuracy and output quality (Rule 1.1)
    – Add AI disclosure language to your standard engagement letter (Rule 1.4)

    For Every Client Matter:
    – Confirm your engagement letter covers AI use for this client (Rule 1.4)
    – Use only approved tools with verified data security (Rule 1.6)
    – Review and verify all AI output before including in client deliverables (Rule 5.3)
    – Apply your professional judgment — client context, deal dynamics, jurisdiction (Rule 5.3)
    – Document your AI use and supervision steps (all rules)

    Ongoing:
    – Take at least one AI-focused CLE per year (Rule 1.1)
    – Review and update your AI tool evaluations quarterly (Rule 1.1)
    – Update engagement letter AI language when your toolset changes (Rule 1.4)
    – Monitor your state bar for new AI guidance (all rules)
    – Review your fee structures to reflect AI efficiency gains (Rule 1.5)

    How Clause Labs Aligns with ABA Requirements

    Clause Labs is purpose-built for ABA-compliant contract review.

    Rule 1.6 compliance: No data retention after analysis. Encryption in transit and at rest. No training on user-uploaded documents.

    Rule 5.3 compliance: Structured, clause-by-clause output that’s designed for efficient human review. Every finding includes the source text, risk level, and plain-English explanation — making supervision straightforward rather than a burden. For more on how structured AI output supports supervision, see our article on the VERIFY framework.

    Rule 1.1 compliance: Transparent methodology. The system identifies clauses, scores risks, and explains its reasoning — you can see what it’s doing and why, which is the “reasonable understanding” that Formal Opinion 512 requires.

    Rule 1.5 alignment: At $49/month for 25 reviews (Solo tier), Clause Labs enables flat-fee contract review that’s more profitable and more transparent than hourly billing. Start free with 3 reviews per month — no credit card required.

    Ready to put these guidelines into practice? Upload your first contract to Clause Labs free — see exactly how structured AI output makes ABA compliance straightforward, not burdensome.

    Frequently Asked Questions

    Does the ABA prohibit AI use in legal practice?

    No. Formal Opinion 512 explicitly permits AI use. The opinion is about responsible use — with competence, confidentiality, transparency, and supervision guardrails. The ABA’s Task Force report goes further, stating AI has become “infrastructure” for the legal profession.

    Are ABA guidelines binding?

    The ABA Model Rules themselves are not binding — they’re a model. But nearly every state has adopted rules based on the Model Rules, and 40+ states have adopted the technology competence amendment to Comment [8] of Rule 1.1. Formal opinions like 512 carry significant persuasive authority and influence state bar decisions. Check your state’s specific rules — the Justia 50-State Survey tracks which states have adopted which provisions.

    How do ABA guidelines interact with state bar rules?

    ABA guidelines provide the framework. State bars adopt, modify, or supplement. When your state has specific AI guidance (like Florida’s Opinion 24-1 or Texas’s Opinion 705), follow your state’s rules — they’re binding. Where your state hasn’t issued guidance, the ABA Model Rules and Formal Opinion 512 are your best reference. For a state-by-state breakdown, see our AI disclosure requirements guide.

    Does the ABA require AI disclosure to clients?

    Formal Opinion 512 doesn’t mandate universal disclosure in all circumstances. But it strongly implies disclosure is necessary when AI use involves sharing client data with a third party (Rule 1.6 trigger) or when AI materially affects the representation. The safest practice: disclose AI use in your engagement letter for all matters.

    Where can I find the latest ABA guidance on AI?

    Start with the ABA’s ethics and professional responsibility publications, the Law Practice Division’s TechReport, and the Task Force on Law and AI reports. For ongoing coverage, LawNext provides the best real-time reporting on ABA AI developments.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation. ABA guidelines are a model framework — verify your state’s specific rules and requirements.

  • State-by-State Guide to AI Disclosure Requirements for Lawyers (2026)

    State-by-State Guide to AI Disclosure Requirements for Lawyers (2026)

    Fifty-three percent of law firms have no AI policy — yet 79% of legal professionals are already using AI tools daily. That gap between adoption and governance is where malpractice claims, bar discipline, and client trust problems live.

    If you’re a solo or small firm lawyer using AI for contract review, legal research, or document drafting, you face a practical question that no single source answers well: What exactly do I have to disclose, to whom, and when? The answer depends on your state, your court, and the type of work you’re doing. This guide consolidates every major disclosure requirement into one reference.

    Whether you use Clause Labs, ChatGPT, or any other AI tool, this article gives you the compliance roadmap. Start free with 3 contract reviews per month — no credit card required.

    The AI Disclosure Landscape in 2026

    There is no federal standard for AI disclosure in legal practice. What exists is a patchwork: state bar ethics opinions, individual court standing orders, and the ABA’s Formal Opinion 512, issued July 29, 2024, which provides a national framework but isn’t binding on any state.

    The trend line is clear. According to the ABA’s 2024 TechReport, AI adoption among lawyers nearly tripled from 11% in 2023 to 30.2% in 2024. The Clio 2025 Legal Trends Report puts that number at 79% when you count all AI-adjacent tools. State bars are responding with guidance at an accelerating pace — more than 30 states have now issued ethics opinions, practical guides, or formal rules addressing AI in legal practice.

    The disclosure obligations fall into three categories: what you must tell your clients, what you must tell courts, and what your state bar recommends or requires as a matter of professional responsibility.

    The Master Reference: Key States and Their Requirements

    No article can cover all 50 states plus DC in granular detail and remain current for more than a few weeks. What follows is the most consequential guidance from the states where most transactional lawyers practice, organized by the type of obligation imposed.

    Florida — Opinion 24-1 (2024)

    Florida’s Opinion 24-1 is one of the most detailed state bar pronouncements on AI. Key requirements:

    • Lawyers may use AI but must prioritize client confidentiality
    • Disclosure is mandatory when AI use impacts billing or costs
    • Lawyers must ensure accuracy and competence when relying on AI outputs
    • AI-generated work must be reviewed and verified before delivery

    Practical impact: If you’re a Florida lawyer using AI for contract review and billing fewer hours as a result, you need to address that in your fee arrangement. If you’re uploading client documents to a third-party AI tool, your confidentiality obligations under Rule 4-1.6 are triggered.

    California — Practical Guide on AI (2024–2025)

    The State Bar of California published a practical guide emphasizing that attorney competence under Rule 1.1 requires an understanding of large language models before using them — including hallucination risks and data privacy implications. While California hasn’t issued a formal opinion mandating disclosure in all cases, the competence standard effectively requires:

    • Understanding how any AI tool you use works
    • Evaluating data privacy implications before uploading client data
    • Maintaining supervisory control over all AI outputs

    Texas — Opinion 705 (February 2025)

    Texas Opinion 705 clarifies that human oversight of AI-generated work product is mandatory. The opinion specifically addresses the risk of fabricated citations (the Mata v. Avianca problem) and requires:

    • Independent verification of all AI-generated content
    • Human supervision of AI as a non-lawyer assistant under Rule 5.03
    • Competence in understanding the AI tool’s limitations

    New York

    New York has been aggressive on AI governance. The state requires at least two annual CLE credits in practical AI competency, with enforcement beginning in 2025. Multiple court systems within New York have adopted AI disclosure rules for court filings, and the NYC Bar Association has published detailed guidance on ethical AI use.

    States with Formal Guidance (Advisory but Influential)

    Oregon — Formal Opinion 2025-205

    Oregon’s Formal Opinion 2025-205 is a thorough treatment of AI ethics obligations. It addresses competence, confidentiality, supervision, and disclosure, closely tracking ABA Formal Opinion 512.

    North Carolina

    The North Carolina Bar Association published a 2026 guide arguing that law firms need realistic AI policies rather than outright bans. The guidance emphasizes documentation and policy-based governance.

    Pennsylvania

    Pennsylvania mandates explicit disclosure of AI use in all court submissions. Transparency is a filing requirement in state courts.

    Illinois, Massachusetts, Colorado, Georgia, Washington

    Each of these states has addressed AI use through bar opinions, CLE requirements, or court rules. The details vary but converge on three themes: competence, confidentiality, and verification.

    States with No Guidance (As of February 2026)

    Roughly 15-20 states have not yet issued formal AI guidance. If you practice in one of these states, the ABA Model Rules and Formal Opinion 512 are your best framework. The Justia 50-State Survey maintains a current tracker — bookmark it.

    For a comprehensive and regularly updated listing of every state’s position, the Clio AI Ethics Opinions guide provides state-by-state detail with links to primary sources.

    Federal Court AI Disclosure Requirements

    Federal courts have moved faster than state bars. Since Judge Brantley Starr of the Northern District of Texas issued the first standing order requiring AI disclosure in court filings in 2023, over 300 federal judges have adopted similar orders.

    These standing orders typically require one or more of:

    • Disclosure of AI tool use in drafting or researching any filing
    • Certification that all citations have been independently verified
    • Identification of which specific AI tools were used

    The requirements are not uniform. Some judges require a standalone certification. Others require a footnote. Some apply only to generative AI (ChatGPT, Claude) while others cover all AI-assisted research tools.

    Practical advice: Before filing in any federal court, check the assigned judge’s standing orders. Bloomberg Law’s tracker and Law360’s AI tracker maintain current databases.

    Note: contract review work rarely involves court filings directly. But if your contract review feeds into litigation — a breach of contract claim, for example — the disclosure requirement kicks in when the AI-assisted analysis becomes part of a filing.

    The ABA Framework: Formal Opinion 512

    ABA Formal Opinion 512, released July 29, 2024, provides the most comprehensive national framework. It addresses six Model Rules and their application to generative AI.

    Rule 1.1 (Competence): Lawyers must understand the capabilities and limitations of any AI tool they use. You don’t need to be a technologist, but you need a “reasonable understanding” — enough to evaluate whether the tool is appropriate for the task. For a deeper analysis, see our guide on ABA guidelines for AI in legal practice.

    Rule 1.4 (Communication): Inform clients about AI use when it’s relevant to their representation. Notably, Formal Opinion 512 states that boilerplate consent in engagement letters is not adequate for confidentiality purposes — you need informed, specific consent when uploading client data to third-party AI tools.

    Rule 1.5 (Fees): You may not bill clients for time spent learning to use AI tools generally. If a client specifically requests a particular AI tool, learning costs may be billable. The bigger implication: if AI reduces your review time from 3 hours to 30 minutes, your fee arrangement should reflect that.

    Rule 1.6 (Confidentiality): Before uploading client data to any AI tool, evaluate the tool’s data handling practices. This includes data retention, training policies, encryption, and sub-processor arrangements. For detailed guidance on this issue, see our article on confidentiality and AI contract tools.

    Rule 5.1/5.3 (Supervision): Supervise AI output the same way you’d supervise a junior associate. Review everything. Verify everything. For a practical framework on exactly how to do this, see our guide on supervising AI legal outputs.

    Types of Disclosure: Client, Court, and Bar

    Client Disclosure

    Client disclosure addresses what you tell your clients about using AI in their matters.

    When it’s required:
    – When uploading client data to a third-party AI tool (confidentiality trigger)
    – When AI use materially affects your fees or billing (fee disclosure trigger)
    – When your state bar has issued specific guidance requiring disclosure

    When it’s recommended but not strictly required:
    – For all AI-assisted contract review (best practice regardless of state)
    – When clients are likely to have concerns about AI use
    – When the matter involves sensitive or confidential business information

    Where to disclose:
    – Engagement letter (standard practice — add an AI use section)
    – Separate AI disclosure addendum (for sensitive matters)
    – Ongoing client communication (for new tools or changed practices)

    Court Disclosure

    Court disclosure is more straightforward: check the standing orders of the court and judge where you’re filing. If a standing order requires AI disclosure, comply. If no order exists, Rule 11 certification already requires you to verify the accuracy of everything in your filing — AI-assisted or not.

    Bar Compliance

    Your state bar’s guidance governs your ongoing professional responsibility. Even where no formal disclosure rule exists, the underlying Model Rules (competence, confidentiality, communication, supervision) apply to AI use. Document your compliance.

    Engagement Letter AI Disclosure Templates

    Three templates, scaled to your jurisdiction’s requirements.

    Minimal Disclosure (States with No Specific Requirements)

    Our firm may use AI-assisted tools to enhance the efficiency of legal services, including contract analysis, legal research, and document review. All AI-generated analysis is reviewed and verified by a licensed attorney before inclusion in any client deliverable. Our firm remains fully responsible for all work product.

Standard Disclosure (States with AI-Specific Guidance)

    Our firm uses AI-powered contract review and analysis tools as part of our quality assurance process. These tools assist with clause identification, risk analysis, and missing provision detection. All AI-generated analysis is independently reviewed, verified, and supplemented by attorney judgment before delivery. Our AI tools employ encryption for data in transit and at rest, do not retain client documents after analysis, and do not use client data to train AI models. Attorney [Name] maintains supervisory responsibility for all work product.

    Comprehensive Disclosure (States with Mandatory Disclosure)

    Our firm uses the following AI tools in providing legal services: [Tool Names]. These tools are used for: [specific tasks — e.g., contract clause identification, risk scoring, missing provision detection]. Data handling: client documents are processed via encrypted connections, are not retained after analysis, and are not used to train AI models. [Tool Name] maintains [SOC 2 / relevant certification] compliance. Human review: all AI-generated analysis is independently reviewed and verified by [Attorney Name], who exercises professional judgment on all findings before inclusion in client deliverables. You have the right to request that we not use AI tools on your matter. If you choose to opt out of AI-assisted review, please notify us in writing, and we will adjust our review process accordingly. This may affect the timeline and cost of services.

    The Penalty Landscape: What Happens If You Don’t Disclose

    The most prominent sanction case remains Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023), where attorneys Steven Schwartz and Peter LoDuca were fined $5,000 for submitting AI-fabricated citations. But Mata involved affirmative misrepresentation to the court — not mere failure to disclose AI use. For more on the Mata case and its implications, see our analysis of AI hallucination risks in legal practice.

    As of early 2026, no lawyer has been disciplined solely for failing to disclose AI use in transactional contract review. But the trajectory is clear: 300+ federal judges have standing orders, state bars are issuing guidance at an accelerating rate, and over 700 documented incidents of AI-fabricated content in court filings have made courts and bars aggressive about enforcement.

    The risks of non-disclosure include:

    • Court sanctions for non-compliance with standing orders
    • Bar discipline for violating competence, confidentiality, or communication rules
    • Malpractice exposure if AI errors cause client harm and your use wasn’t disclosed
    • Client trust damage that’s harder to repair than any formal sanction

    The calculus is simple: disclosure costs you nothing. Non-disclosure can cost you your practice.

    The Universal Compliance Framework: 6 Steps That Work Everywhere

    Regardless of your state, these six practices keep you compliant with current and likely future requirements.

    1. Add AI disclosure to your standard engagement letter. Use the templates above. Update annually or when your toolset changes.

    2. Maintain an AI tool inventory. List every AI tool your firm uses, what it’s used for, what data it accesses, and its security certifications. Review quarterly.

    3. Verify all AI output before use. This isn’t optional anywhere. Review every clause identification, risk assessment, and suggested edit against the source document. Our VERIFY framework for supervising AI outputs gives you a repeatable protocol.

    4. Document your AI use and human review process. Date, tool, matter, what was reviewed, what was changed. This is your audit trail for any bar inquiry or malpractice claim.
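For firms that want this audit trail in a form that is easy to search and produce on request, the record can be kept as structured data rather than scattered notes. A minimal sketch in Python — the field names, file name, and example values are illustrative, not a prescribed format:

```python
import csv
from datetime import date
from pathlib import Path

# Illustrative audit-trail fields: when the tool was used, which tool,
# on what matter, what the AI reviewed, and what the attorney changed.
FIELDS = ["date", "tool", "matter", "documents_reviewed",
          "attorney_changes", "reviewer"]

def log_ai_use(path, **entry):
    """Append one AI-use record, writing a header row if the file is new."""
    is_new = not Path(path).exists()
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({k: entry.get(k, "") for k in FIELDS})

# Hypothetical example entry.
log_ai_use(
    "ai_use_log.csv",
    date=date.today().isoformat(),
    tool="Clause Labs",
    matter="Acme MSA review",
    documents_reviewed="MSA v3 (52 pp.)",
    attorney_changes="Re-scored indemnity clause; confirmed missing-clause findings",
    reviewer="J. Smith",
)
```

One row per matter the AI touched, reviewed quarterly alongside the tool inventory, gives you a dated, attributable record for any bar inquiry or malpractice claim. A shared spreadsheet works just as well; the point is that every field is filled in at the time of the work, not reconstructed later.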

    5. Stay current on your state’s requirements. The Justia 50-State Survey and Clio’s ethics opinions guide are the best free trackers. Check quarterly.

    6. When in doubt, disclose. Overcompliance beats undercompliance every time. No lawyer has ever been disciplined for disclosing too much about their technology use.

    Clause Labs is built for compliant AI use: structured output that’s easy to verify, no data retention after analysis, and encryption for all document processing. Start free with 3 reviews per month — no credit card required.

    Over 500 lawyers already use Clause Labs for AI-assisted contract review with ABA-compliant data handling. Join them — start free today.

    Frequently Asked Questions

    Do I need to disclose if I just use ChatGPT to brainstorm contract language?

    It depends on your jurisdiction and what you do with the output. If you use ChatGPT to brainstorm and then independently draft the language yourself, most jurisdictions wouldn’t require disclosure. But if AI-generated language appears substantially in a client deliverable, disclosure is prudent. Under ABA Formal Opinion 512, you must also consider whether you’ve uploaded any client confidential information in the process — even pasting a client’s contract clause into ChatGPT may trigger Rule 1.6 obligations.

    Do I need to disclose AI use to opposing counsel?

    Generally, no. No state currently requires disclosure to opposing counsel in transactional practice. The exceptions are narrow: collaborative law settings, some mediation contexts, and situations where a specific court order applies. In litigation, some federal standing orders require disclosure in filings — which opposing counsel will see.

    Can my client refuse to let me use AI?

    Yes. If a client requests that you not use AI tools, you must honor that request. Include an opt-out provision in your engagement letter (see the comprehensive template above). Be transparent about how opting out may affect timelines and costs.

    Is disclosure required for contract review, or only litigation?

    The ABA Model Rules and most state guidance apply to all areas of practice, not just litigation. Rule 1.6 (confidentiality) applies whenever you share client information with a third-party tool — whether you’re reviewing a contract or drafting a brief. The court-specific standing orders only apply to litigation filings, but your ethical obligations to clients are practice-area agnostic.

    How often should I update my disclosure language?

    Review and update annually at minimum. Update immediately when you adopt new AI tools, when your state bar issues new guidance, or when there’s a material change in how your existing tools handle data.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation. AI disclosure requirements are evolving rapidly — verify current requirements in your jurisdiction before relying on this guide.

  • The Mata v. Avianca Problem: How to Use AI in Law Without Fabricated Citations


    A $5,000 fine, a public apology to six federal judges, and a name that every lawyer in America now associates with AI gone wrong. That’s the legacy of Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023) — the case where attorney Steven Schwartz submitted a brief citing six cases that didn’t exist, all fabricated by ChatGPT.

But here’s what most coverage of this case gets wrong: the problem wasn’t that a lawyer used AI. The problem was that a lawyer used the wrong kind of AI for the task, then skipped verification entirely. Understanding that distinction is the difference between using AI responsibly and becoming the next cautionary tale. And as of late 2025, Damien Charlotin’s AI Hallucination Cases Database documents over 300 incidents of AI-fabricated citations in court filings, a caseload that has grown from a handful in 2023 to a pace of two or three new cases per day.

    This article breaks down what actually happened, why it keeps happening, and — most critically — why contract review AI operates on a fundamentally different risk model than research AI. If you’ve been hesitant to adopt AI tools because of Mata v. Avianca, you may be avoiding the wrong thing.

    What Actually Happened in Mata v. Avianca

    The facts are straightforward and worth getting right.

    In 2022, Roberto Mata filed a personal injury lawsuit against Avianca Airlines, alleging a knee injury from a metal serving cart on an international flight. When Avianca moved to dismiss, Mata’s attorney Peter LoDuca filed an opposition brief. The brief was largely drafted by his colleague Steven Schwartz, who used ChatGPT to research supporting case law.

    ChatGPT generated citations to six cases that sounded real — complete with docket numbers, court names, and plausible holdings. Cases like Varghese v. China Southern Airlines, Shaboon v. Egyptair, and Petersen v. Iran Air. They had the structure, cadence, and citation format of genuine case law. They were entirely fabricated.

    When Avianca’s attorneys couldn’t locate the cited cases, Judge P. Kevin Castel ordered Schwartz to produce copies. Schwartz went back to ChatGPT and asked whether the cases were real. ChatGPT confirmed they were. He submitted that confirmation to the court.

    On June 22, 2023, Judge Castel issued sanctions — a $5,000 fine against Schwartz, LoDuca, and their firm Levidow, Levidow & Oberman. The court also required them to send individual letters to each of the six judges falsely identified as authors of the fabricated opinions, along with copies of the sanctions order.

    The case made international headlines. It became the most referenced AI-in-law case in history. And it terrified lawyers who were considering AI adoption.

    Why AI Hallucination Happens — in Terms Lawyers Understand

Hallucination isn’t a bug in the software. It’s an inherent consequence of how large language models work — and understanding the mechanism matters for assessing risk.

    Large language models like ChatGPT and Claude predict the most statistically likely next sequence of words given a prompt. They don’t retrieve facts from a database. They don’t look up cases in Westlaw. They generate text that sounds right based on patterns in their training data.

    Legal citations are especially vulnerable because:

    • Case names follow predictable patterns. A name like Petersen v. Iran Air sounds like a real aviation injury case because it matches thousands of real citation patterns the model has seen.
    • Legal writing is formulaic. Holdings, procedural histories, and citation formats follow rigid conventions. AI can mimic the form perfectly while fabricating the substance.
    • Lawyers are trained to trust citations. When you see a properly formatted citation — 678 F.Supp.3d 443 (S.D.N.Y. 2023) — your instinct is to trust it, not verify it. That trust is earned in normal legal practice. It’s exploited by AI hallucination.

    This isn’t unique to ChatGPT. Any general-purpose language model can hallucinate. Stanford research has documented that hallucination rates for legal citations range from 6% to over 30% depending on the model, the complexity of the question, and the jurisdiction.

    It Keeps Happening: Post-Mata Sanctions Cases

    Mata v. Avianca wasn’t an isolated incident. It was a preview.

    Noland v. Land of the Free, L.P. (2025) — A California appellate court found that “nearly all of the legal quotations in plaintiff’s opening brief, and many of the quotations in plaintiff’s reply brief, were fabricated.” The court imposed $10,000 in sanctions — double the Mata penalty.

    Johnson v. Dunn (N.D. Ala., July 2025) — The court went further than fines: it disqualified the attorneys from representing their client for the remainder of the case and directed the clerk to notify bar regulators in every state where the attorneys were licensed.

    Arizona Social Security Case (August 2025) — A judge found that 12 of 19 cited cases were fabricated, misleading, or unsupported, sanctioning the attorney whose brief was “replete with citation-related deficiencies consistent with artificial intelligence generated hallucinations.”

    And in a notable 2025 development, courts began sanctioning lawyers for failing to detect their opponent’s AI-fabricated citations — establishing that the verification duty runs both ways.

    The pattern across every sanctions case is identical: a lawyer used a general-purpose AI tool for legal research, submitted the output without verification, and fabricated citations ended up before a judge.

    The Critical Distinction: Research AI vs. Review AI

    This is the argument that most Mata coverage misses entirely, and it’s the one that should change how you think about AI risk.

    Research AI (High Hallucination Risk)

    General-purpose AI tools like ChatGPT, Claude, and Gemini are generative — they create text from scratch. When you ask them to find supporting case law, they don’t search a legal database. They generate text that looks like case law. Sometimes they get it right (because the case appeared in training data). Often they don’t.

    The hallucination risk profile:

    • Generates citations, case summaries, and legal analysis from scratch
    • No built-in source verification
    • Designed to produce plausible-sounding content
    • Outputs fabricated cases, misquoted holdings, and invented statutes
    • Confidence level of the output has no correlation with accuracy

    Contract Review AI (Fundamentally Different Risk)

    Purpose-built contract review tools operate on a completely different model. They don’t generate legal citations or case law. They analyze a specific document you provide as input.

    When a contract review AI examines your NDA, it:

    • Identifies clauses that exist in the document you uploaded
    • Categorizes those clauses by type (indemnification, termination, IP assignment)
    • Scores risk based on what’s present — and flags what’s missing
    • Generates structured output (risk scores, clause categories) not freeform legal analysis
    • Never cites case law, statutes, or legal authority it might fabricate

    There’s nothing to hallucinate when the task is “read this paragraph and tell me whether it contains a unilateral termination right.” Either the language is there or it isn’t. The AI is classifying existing text, not inventing new text.

    This doesn’t mean contract review AI is infallible — it can miscategorize a clause, miss a bespoke provision, or score risk differently than you would. But those are accuracy issues, not hallucination issues. And they’re the same types of errors a junior associate or paralegal might make, which is why human review remains non-negotiable.

    Building a Hallucination-Proof AI Workflow

    Whether you’re using AI for research, review, or drafting, these practices protect you.

    Before You Use Any AI Tool

    1. Match the tool to the task. If a purpose-built tool exists for what you need — contract review, document comparison, legal research with verified citations — use it instead of general-purpose AI. This is the single most effective risk reduction strategy.

    2. Understand the tool’s architecture. Does it generate text from scratch (high hallucination risk) or analyze documents you provide (lower risk)? Does it cite sources it retrieves from a database (CoCounsel, Lexis+ AI) or generate citations from training data (ChatGPT)? ABA Formal Opinion 512, issued July 2024, requires lawyers to understand how their AI tools work before relying on them.

    3. Test on known documents first. Before using any AI tool on client work, run it against a contract you’ve already reviewed manually. Compare the AI’s output to your own analysis. Where does it agree? Where does it diverge? Where is it wrong?

    During AI-Assisted Work

    4. Never submit AI output without line-by-line review. For legal research: verify every citation in Westlaw, Lexis, or Google Scholar. Read the actual opinion — don’t trust the AI’s summary of the holding. For contract review: check every flagged clause against the actual contract language. Verify that “missing clause” findings are actually absent from the document.

    5. Be skeptical of confidence. AI doesn’t express uncertainty the way humans do. A fabricated citation reads with the same confidence as a real one. Treat all AI output as a first draft requiring verification, regardless of how polished it appears.

    6. Document your review process. Keep a record of what tool you used, what it produced, and how you verified the output. This protects you against malpractice claims and bar complaints. It also satisfies the supervisory requirements under ABA Model Rule 5.3.

    After AI Review

    7. Apply professional judgment to every recommendation. AI doesn’t know your client’s business objectives, risk tolerance, negotiation leverage, or the relationship dynamics with the counterparty. These factors determine whether a flagged “risk” actually matters. Your judgment is what clients pay for — AI just gives you a faster starting point.

    8. Sign off on the final work product as your own. If it has your name on it, you own it. Period. AI-assisted work carries the same professional responsibility as any other work product, as Judge Castel emphasized in the Mata sanctions order.

    What Courts and Bar Associations Now Require

    The regulatory response to AI hallucination is accelerating. As of early 2026, over 300 federal judges have issued standing orders, local rules, or pretrial orders addressing AI use in court filings.

    Common requirements include:

    • Disclosure of AI use. Many courts require attorneys to identify which AI tools were used and which portions of a filing were AI-assisted.
    • Certification of accuracy. Several judges, including Judge Baylson in the Eastern District of Pennsylvania, require attorneys to certify that every citation has been verified for accuracy.
    • Identification of the specific tool. Some orders require naming the AI tool used, not just disclosing AI assistance generally.

    At the state level, bar associations across the country are issuing guidance. California emphasizes understanding LLM risks before use. Florida’s Opinion 24-1 mandates disclosure when AI impacts billing. Texas Opinion 705 requires human oversight of all AI-generated work product.

    The trend is clear: use AI, but verify and disclose. And the ABA’s checklist for responsible AI use published in early 2026 consolidates these requirements into a practical framework.

    The Lesson — and the Opportunity

    Mata v. Avianca wasn’t a failure of artificial intelligence. It was a failure of verification. Steven Schwartz didn’t get sanctioned for using ChatGPT. He got sanctioned for submitting fabricated citations without checking whether they were real. Every subsequent sanctions case follows the same pattern.

The lawyers who will thrive aren’t the ones avoiding AI — they’re the ones using it with the right tools and the right workflow. For contract review specifically, purpose-built AI tools that analyze documents rather than generate citations remove the hallucination vector that caused Mata. The overall risk isn’t zero — miscategorization and accuracy issues exist — but it’s a fundamentally different category of risk, one that standard attorney review practices are designed to catch.

    If you’ve been avoiding AI because of Mata v. Avianca, you may be solving the wrong problem. The question isn’t whether to use AI — it’s whether you’re using the right AI for the right task, with the right verification process in place.

    For lawyers ready to start with contract review AI that’s designed for verification rather than hallucination, Clause Labs’s free analyzer lets you upload any contract and see a structured risk analysis in under 60 seconds — no citations to fabricate, no case law to verify, just clause-by-clause analysis of the document you provide. Try it on your next contract and see how purpose-built AI differs from the general-purpose tools that created the Mata problem.

    For a deeper look at the ethical framework governing AI use in legal practice, read our guides on whether AI contract review is ethical and how ABA Rule 1.1 applies to technology competence.

    Frequently Asked Questions

    Can contract review AI hallucinate?

    Contract review AI can make errors — miscategorizing a clause, missing a bespoke provision, or misjudging risk severity. But it doesn’t hallucinate in the Mata sense because it doesn’t generate citations, case law, or legal authority. It analyzes the specific document you provide and produces structured output (risk scores, clause identification, missing provisions) rather than freeform legal text. The risk profile is accuracy, not fabrication.

How is Clause Labs different from using ChatGPT?

    ChatGPT is a general-purpose language model that generates text from scratch — including citations that may not exist. Clause Labs is a purpose-built contract review tool that analyzes the specific document you upload. It identifies clauses, scores risk, flags missing provisions, and suggests edits based on what’s actually in your contract. It never generates case citations or legal authority. The architecture eliminates the hallucination vector that caused Mata v. Avianca.

    What should I do if I suspect AI output contains hallucinated content?

    Stop, verify, and document. Check every citation against a verified legal database (Westlaw, Lexis, Google Scholar). If you find fabricated content, do not submit it. Remove it from your work product. If you’ve already submitted a filing containing unverified AI content, consider notifying the court proactively — courts have shown more leniency toward attorneys who self-report than those who are caught.

    Do I need to disclose that I used AI to review a contract?

    This varies by jurisdiction and context. For court filings, over 300 federal judges now require AI disclosure. For transactional work (contract review and negotiation), disclosure requirements are less defined, but ABA Formal Opinion 512 recommends transparency with clients about AI use, particularly regarding confidentiality and billing. Check your state bar’s guidance — several states now have specific AI disclosure requirements.

    Has any lawyer been sanctioned specifically for using AI contract review tools?

    As of early 2026, no. Every documented sanctions case involves general-purpose AI (primarily ChatGPT) used for legal research — specifically, the submission of fabricated citations. No sanctions case has involved a purpose-built contract review tool used within its designed parameters. The risk pattern is clear: sanctions arise from unverified AI-generated citations, not from AI-assisted document analysis.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

  • AI and Attorney Competence: What Rule 1.1 Means for Contract Review


    Forty-two U.S. jurisdictions now require lawyers to understand technology as part of their competence obligation. That number was zero before 2012. The shift started with a single sentence added to ABA Model Rule 1.1, Comment 8: lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” Fourteen years later, that sentence has become the legal foundation for AI adoption in the profession, and increasingly, the basis for arguing that ignoring AI may itself be a competence failure.

    This article explains what Rule 1.1 technology competence actually requires, how it applies specifically to AI contract review, and what practical steps you can take to demonstrate compliance. If you are a solo or small firm lawyer evaluating AI tools, this is the ethical framework you need.

    Try Clause Labs Free — start building your AI competence with a purpose-built contract review tool. 3 reviews per month, no credit card.

    The Rule That Changed Everything

    ABA Model Rule 1.1 states: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

    The rule itself has not changed since adoption. What changed in 2012 was Comment 8, which now explicitly includes technology within the scope of competence. The ABA’s amendment clarified that keeping “abreast of changes in the law and its practice” includes understanding “the benefits and risks associated with relevant technology.”

    Then in July 2024, ABA Formal Opinion 512 applied this principle directly to generative AI, stating that lawyers must “understand the capacity and limitations of GAI and periodically update that understanding.” This was the ABA’s first comprehensive ethical guidance on AI, and it specifically addressed competence, confidentiality, communication, candor, supervision, and fees.

    The trajectory is clear. Technology competence is no longer aspirational. It is a professional obligation with enforcement teeth.

    What “Technology Competence” Actually Means

    Technology competence does not mean you must use every new tool. It does not mean you need to become a technologist. And it does not mean that failure to adopt AI is automatically an ethics violation.

    What it does mean, based on the ABA’s framework and Formal Opinion 512, is a three-part obligation:

    1. Awareness

    You must know that AI contract review tools exist and understand, at a general level, what they can do. This does not require expertise. It requires the same level of awareness you would apply to any development in legal practice that affects how you serve clients.

    The analogy: you did not need to use email the day it was invented. But at some point, not understanding what email is and why it matters to your practice became a competence issue.

    2. Evaluation

    You must assess whether AI tools are appropriate for your practice. This means looking at the tools available, understanding their capabilities and limitations, evaluating their security posture, and making a reasoned judgment about whether they would benefit your clients.

    Critically, “we evaluated AI tools and decided they are not appropriate for our practice at this time” is a defensible position, as long as the evaluation actually occurred and was documented.

    3. Implementation (If You Adopt)

    If you do adopt AI tools, you must use them competently. This means understanding what the tool does, supervising its output, verifying its analysis, and maintaining your professional judgment as the final decision-maker.

    Formal Opinion 512 is explicit on this point: competence “requires the lawyer to have a reasonable understanding” of the technology, not just access to it.

    States That Have Adopted Technology Competence

    As tracked by LawNext’s Tech Competence Scoreboard, the adoption landscape as of early 2026:

    42 jurisdictions have adopted Comment 8 or equivalent language:

    This includes 40 states plus the District of Columbia and Puerto Rico. D.C. adopted its amendment in April 2025. Puerto Rico went further with Rule 1.19, effective January 2026, which creates a standalone technology competence requirement that exceeds the ABA Model Rules.

    States with Comment 8 PLUS AI-specific guidance:

    Several states have gone beyond Comment 8 to address AI specifically:

    • California: Published practical guidance on AI, requiring competence assessment before use and disclosure when materially affecting representation
    • Florida: Opinion 24-1 addresses AI use with specific requirements for confidentiality and billing
    • Texas: Opinion 705 (February 2025) requires human oversight of AI-generated work
    • North Carolina: 2024 Formal Ethics Opinion 1 provides detailed AI guidance
    • Oregon: Formal Opinion 2025-205 addresses AI tools specifically

    Remaining states without Comment 8:

    A small number of states have not formally adopted the amended comment. However, their existing competence rules are broad enough that technology competence may be implied. As a practical matter, the direction is uniform: technology competence is expected everywhere.

    For a comprehensive 50-state reference, see Justia’s AI and Attorney Ethics Rules survey.

    How Rule 1.1 Applies to AI Contract Review

    The technology competence framework maps directly onto the decision to use (or not use) AI contract review tools. Here is how each element of Rule 1.1 applies.

    The Knowledge Requirement

    You must understand what the AI tool does:

    • Clause identification: The tool reads the contract text and categorizes each provision (indemnification, limitation of liability, termination, etc.)
    • Risk scoring: The tool assigns risk levels based on standard practice for the contract type
    • Missing clause detection: The tool identifies provisions that are typically present in this contract type but absent from the document
    • Redline suggestions: The tool generates proposed edits to problematic provisions

    You must also understand what the tool does not do:

    • It does not understand your client’s business objectives or risk tolerance
    • It does not evaluate enforceability in a specific court before a specific judge
    • It does not account for the parties’ prior course of dealing
    • It does not replace your professional judgment on how to advise your client

    The Skill Requirement

    You must be able to evaluate AI output critically:

    • Can you tell when the AI’s clause categorization is wrong?
    • Can you assess whether a flagged risk is actually significant in the context of this deal?
    • Can you determine whether a “missing clause” finding is a genuine gap or just a different structural approach?
    • Can you apply the AI’s suggestions strategically, knowing which battles to fight in negotiation?

    This is where your legal expertise intersects with the AI tool. The AI provides the data. You provide the judgment. For a detailed framework on how to review AI-flagged issues, see our guide to reviewing contracts for red flags.

    The Thoroughness Requirement

    You must use AI as a supplement, not a substitute:

    • AI output must be reviewed before relying on it
    • AI analysis must be cross-referenced against the actual contract text
    • Client-specific context must be layered on top of AI findings
    • The final work product must reflect your professional judgment, not just the AI’s output

    The ABA’s 2024 Legal Technology Survey found that 75% of lawyers cite accuracy as their top concern about AI. That concern directly supports the thoroughness requirement: you must verify, not just trust.

    The Preparation Requirement

    You must learn the tool before using it on client matters:

    • Test the tool on contracts you have already reviewed manually (so you can compare results)
    • Understand the tool’s strengths and weaknesses by contract type
    • Know how the tool handles edge cases and unusual provisions
    • Document your testing process

    The Flip Side: Is NOT Using AI a Competence Issue?

    This is the question that makes the legal profession uncomfortable. The argument is straightforward:

    If AI can identify risks in a 50-page MSA that a manual review might miss… If AI can complete a risk analysis in 60 seconds that would take 3 hours manually… If the cost of AI review ($49/month) is trivial compared to the cost of missing an issue (malpractice claim, client loss, reputational damage)…

    Then ignoring AI tools entirely may itself raise competence questions.

    This is not hypothetical. The Redgrave LLP analysis of technology competence notes that the duty extends to “understanding what tools exist and evaluating them.” A lawyer who has never looked at AI contract review tools in 2026 has arguably failed the “awareness” prong of technology competence.

    Important qualifiers: Not using AI is not malpractice. No lawyer has been disciplined for declining to adopt AI tools. But the trajectory is clear. As AI tools become standard practice, the bar for reasonable competence will shift. The lawyers who evaluated AI, tested it, and made informed decisions — whether to adopt or not — will be in a stronger position than those who simply ignored it.

    Thomson Reuters’ 2025 report found that 78% of law firm respondents believe generative AI will become central to legal workflows within five years. If that prediction is even partially correct, the competence implications are significant.

    Case Studies: Where Competence and AI Intersect

    Scenario 1: The Missed Liability Cap

    A solo lawyer reviews a 50-page MSA manually for a client. Under time pressure, she misses a limitation of liability provision buried inside the definitions section. The cap is set at $10,000 for a $500,000 engagement. The client suffers $200,000 in damages from the vendor’s breach and can only recover $10,000.

    Competence analysis: If a readily available, affordable AI tool would have flagged the buried liability cap — and the lawyer never evaluated such tools — there is a credible argument under Comment 8 that the lawyer failed the awareness and evaluation prongs of technology competence. The lawyer’s strongest defense would be documented evidence that she evaluated AI tools and reasonably concluded they were not appropriate for her practice.

    Scenario 2: The Unsupervised AI Output

    A lawyer uses an AI contract review tool to analyze an employment agreement. The tool flags a non-compete clause as potentially unenforceable. Without checking state-specific law, the lawyer advises the client that the non-compete is void. The client relies on this advice, takes a job with a competitor, and is sued. The non-compete was actually enforceable in their jurisdiction.

    Competence analysis: The lawyer failed the thoroughness and skill requirements. The AI provided a general finding. The lawyer’s obligation was to apply jurisdiction-specific analysis — exactly the kind of contextual judgment that AI cannot provide. Using AI is not a defense when the lawyer failed to supervise the output. For more on the ethical framework for AI supervision, see our guide on using AI for contract review ethically.

    Scenario 3: The Refusal to Learn

    A client specifically asks their lawyer whether they should use AI tools to review the 15 vendor contracts their startup signs each quarter. The lawyer dismisses the question: “I don’t believe in AI for legal work.” The lawyer has never evaluated any AI legal tools, taken any CLE on AI, or read any bar guidance on AI.

    Competence analysis: Under Comment 8, the lawyer has a duty to understand the “benefits and risks associated with relevant technology.” Dismissing AI without evaluation is different from evaluating it and concluding it is not appropriate. The former may violate the awareness prong. The latter does not. For specific examples of how AI handles different contract types, see our analysis of common NDA mistakes.

    How to Demonstrate AI Competence: 7 Practical Steps

    Whether or not you choose to adopt AI tools, these steps demonstrate technology competence under Rule 1.1:

    1. Take a CLE course on AI in legal practice. Most state bars now offer AI-specific CLE programs. Complete at least one per year. Keep the certificates.

    2. Read your state bar’s AI guidance. Justia’s 50-state survey is a starting point. Check your specific state bar’s website for adopted opinions.

    3. Test AI tools on non-client work. Use sample contracts or your own engagement letters. Compare AI output to your manual review. This builds understanding without risking client interests. Clause Labs’s free tier provides 3 reviews per month for this purpose.

    4. Document your AI evaluation process. Write down which tools you evaluated, what you learned, and your conclusions. Even a one-page memo to your file demonstrates the awareness and evaluation prongs.

    5. Create an AI use policy for your practice. This does not need to be complex. Cover: which tools are approved, how output is verified, how client data is protected, and when AI is not appropriate.

    6. Review AI output systematically. If you adopt a tool, develop a consistent verification process. Check every flagged risk against the contract text. Apply your judgment to every recommendation.

    7. Stay current on AI developments. Follow LawSites/LawNext and the ABA’s technology resources. Review your AI use policy quarterly. AI is evolving faster than the ethics rules that govern it.

    How Purpose-Built Tools Support Rule 1.1 Compliance

    The right AI tool makes competence easier, not harder:

    Transparency: Purpose-built contract review tools provide structured output (clause-by-clause analysis, risk scores with explanations, confidence indicators). You can see exactly what the AI analyzed and why it flagged specific provisions. This supports the knowledge requirement.

    Verifiability: Structured output is easier to verify than freeform text. When a tool tells you “this is an indemnification clause rated High Risk because it is one-sided and uncapped,” you can check that assessment in seconds. This supports the thoroughness requirement.

    Human-in-the-loop design: Tools built for lawyers assume the lawyer makes the final decision. They present findings and suggestions, not conclusions. This supports the skill requirement.

    Testability: Free tiers and trial periods let you test the tool before using it on client matters. This supports the preparation requirement.

    The ABA’s 2024 Legal Technology Survey found that AI adoption among lawyers nearly tripled from 11% in 2023 to 30% in 2024. Among firms with 500+ lawyers, adoption hit 47.8%. The gap between firms using AI and those that are not is widening, and it maps directly onto the competence divide. For a comparison of the tools available, see our best AI contract review tools guide.

    If you are evaluating AI contract review tools for the first time, start with Clause Labs’s free analyzer — upload any contract and get a structured risk report in under 60 seconds. No signup required. It is the fastest way to see what AI contract review actually looks like.

    Frequently Asked Questions

    Can I be disciplined for using AI in my practice?

    You can be disciplined for using AI improperly — specifically, for submitting unverified AI output, violating client confidentiality, or failing to supervise AI work product. Using AI itself is not an ethics violation when done within the framework of Rules 1.1 (Competence), 1.6 (Confidentiality), and 5.3 (Supervision). Formal Opinion 512 addresses this comprehensively.

    Can I be disciplined for NOT using AI?

    Not yet. No lawyer has been disciplined solely for declining to adopt AI tools. However, the competence trajectory is toward expecting lawyers to at least evaluate available technology. The safest position is documented awareness and evaluation, regardless of whether you ultimately adopt.

    Do I need CLE credits specifically on AI?

    Most states do not yet require AI-specific CLE. However, several states are considering it, and many professional responsibility CLE programs now include AI components. Taking AI-specific CLE voluntarily demonstrates competence and provides documentation.

    How do I evaluate whether an AI tool is “competent”?

    Apply the same due diligence you would to hiring an associate: What is the tool’s accuracy on the contract types you review? How does it handle edge cases? What are its known limitations? What security certifications does it hold? How responsive is support? Test it on contracts where you already know the answer, and compare the AI’s findings to your own.

    What if my client objects to AI use?

    Respect the client’s wishes. Rule 1.4 requires communication about the means by which the client’s objectives are to be accomplished. If a client specifically directs you not to use AI, document that instruction and comply. The competence obligation does not override the client’s right to direct the representation.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.

  • Is AI Contract Review Ethical? What Every Bar Association Says in 2026


    Yes — and the more interesting question is whether not using AI is becoming the bigger ethical risk.

    ABA Model Rule 1.1, Comment 8 requires lawyers to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” As of 2026, more than 40 states have adopted this technology competence language or its equivalent. When AI tools can catch contract risks faster and more consistently than manual review — and when they cost less than one billable hour per month — the duty of competence starts to cut both ways.

    This article breaks down exactly what the ABA, state bars, and Model Rules say about using AI for contract review, gives you a practical ethics framework you can implement today, and addresses the specific concerns that keep lawyers from adopting tools that could meaningfully improve their practice. Try Clause Labs Free to see an ethically designed AI contract review workflow in action — purpose-built for lawyers who take their ethical obligations seriously.

    What the ABA Says: Formal Opinion 512

    On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 — the first comprehensive ABA guidance on lawyers’ use of generative AI tools. The opinion confirms that AI tools can be used in legal practice, provided lawyers fulfill their existing ethical obligations.

    The key takeaways:

    AI is a tool, not a shortcut. The opinion states that generative AI “can be a useful tool to increase efficiency in the practice of law” but that “attorneys utilizing GAI need to fully consider their applicable ethical obligations.” Translation: you can use AI, but you cannot outsource your professional judgment.

    Six ethical areas are implicated. Formal Opinion 512 analyzes AI use under competence (Rule 1.1), confidentiality (Rule 1.6), communication (Rule 1.4), candor toward the tribunal (Rule 3.3), supervisory responsibilities (Rules 5.1 and 5.3), and fees (Rule 1.5).

    Verification is mandatory. The opinion is unambiguous: “Attorneys should not rely on GAI outputs without independent verification or review.” This applies to all AI-assisted legal work, including contract review. You must check the AI’s work product before relying on it.

    Informed consent may be required. For confidentiality purposes, the opinion recommends that lawyers “secure clients’ informed consent before using client confidences in GAI tools” and warns that “boilerplate consent included in engagement letters will not be adequate.” The specificity of the consent must match the tool being used.

    Formal Opinion 512 is not a prohibition. It’s a permission structure with guardrails. Lawyers who follow its framework can use AI confidently and ethically.

    The 5 Model Rules That Matter for AI Contract Review

    Not every Model Rule applies equally to contract review. Here are the five that matter most, with specific guidance on compliance.

    Rule 1.1 — Competence

    What it says: “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”

    How it applies to AI contract review: You must understand how the AI tool works before using it on client matters. You must be able to evaluate its output critically. And you must stay current on developments in legal AI technology.

    How to comply:

    • Learn what the AI tool actually does. Contract review AI identifies clauses, scores risks, and flags missing provisions. It does not provide legal advice, generate case citations, or make strategic judgments.
    • Test the tool on contracts you’ve already reviewed manually. Compare the AI’s findings to your own. Understand where it’s strong (clause identification, missing provisions, pattern-based risks) and where it’s limited (business context, enforceability in specific courts, novel provisions).
    • Review every AI output before relying on it. The AI’s risk score and clause flagging are a starting point — not a conclusion.

    The flip side of Rule 1.1 is increasingly relevant: if AI tools can catch risks more consistently and faster than manual review, and if a lawyer’s failure to use available technology results in a missed issue, technology competence may require awareness of AI tools — even if it doesn’t yet require their adoption.

    Rule 1.4 — Communication

    What it says: A lawyer shall reasonably consult with the client about the means by which the client’s objectives are to be pursued.

    How it applies to AI contract review: In jurisdictions that require AI disclosure, you must inform your client that you’re using AI tools to assist with their contract review. Even in jurisdictions without explicit disclosure requirements, transparency about your workflow builds trust.

    How to comply:

    Add AI disclosure language to your engagement letter. Here’s sample language that satisfies most state requirements:

    “Our firm uses AI-assisted contract review tools to enhance the accuracy and efficiency of our analysis. These tools identify contract clauses, flag potential risks, and detect missing provisions. All AI-generated insights are reviewed and verified by a licensed attorney before being included in any client deliverable. Your confidential information is processed using enterprise-grade AI tools with encryption in transit and at rest, no data retention after analysis, and no use of your data for model training.”

    This disclosure is specific to the tool’s function and data handling — not the generic boilerplate that Formal Opinion 512 warns against.

    Rule 1.5 — Fees

    What it says: A lawyer shall not make an agreement for, charge, or collect an unreasonable fee.

    How it applies to AI contract review: If AI reduces the time required to review a contract from 90 minutes to 30 minutes, can you still charge for 90 minutes of work?

    How to comply: This is where many lawyers get anxious, but the ethical analysis is straightforward.

    Value billing: If you charge flat fees for contract review, AI doesn’t change the fee calculation. The client is paying for the outcome — a thoroughly reviewed contract with flagged risks and recommended changes — not for the hours it took. A faster, more thorough review at the same price is a better deal for the client, not a worse one.

    Hourly billing: If you bill hourly, bill for the time you actually spend. That includes time reviewing the AI output, applying professional judgment, and preparing the client deliverable. It does not include billing 90 minutes for a 30-minute AI-assisted review. According to Florida Bar Ethics Opinion 24-1, attorneys must ensure that fees and costs remain reasonable when using AI, and passing along the cost of AI tool subscriptions requires disclosure and client agreement.

    The honest answer: AI makes individual reviews faster, which means you can either reduce per-review pricing (competitive advantage) or handle more reviews in the same time (capacity advantage). Either approach is ethical. What’s not ethical is billing as if AI doesn’t exist.

    Rule 1.6 — Confidentiality

    What it says: A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent.

    How it applies to AI contract review: When you upload a client’s contract to an AI tool, you’re sharing confidential information with a third-party technology provider. This triggers the same analysis you’d apply to any third-party vendor — cloud storage, e-discovery platforms, or outside counsel.

    How to comply:

    Before uploading any client contract to any AI tool, verify:

    • Data encryption: Is client data encrypted in transit and at rest?
    • Data retention: Does the tool retain client data after analysis? For how long? Can you request deletion?
    • Training data: Is client data used to train AI models? Any tool that trains on your client’s contracts is a confidentiality risk.
    • Subprocessors: Who has access to the data? Are there subprocessors with their own data handling policies?
    • Compliance certifications: Does the tool have SOC 2, ISO 27001, or equivalent security certifications?

    Tools that typically pass this analysis: Purpose-built legal AI tools (Clause Labs, Spellbook, LegalOn) that are designed for lawyer workflows and understand confidentiality requirements. These tools typically offer no-retention policies, encryption, and explicit commitments about training data.

    Tools that require caution: General-purpose AI chatbots (ChatGPT, Claude, Gemini) when used in their default consumer configurations. OpenAI’s default terms allow data usage for model improvement unless you opt out or use the API. Enterprise tiers with data processing agreements may address this, but you must verify.

    Rule 5.3 — Supervision of Nonlawyer Assistance

    What it says: A lawyer who employs or retains nonlawyer assistants shall make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligations of the lawyer.

    How it applies to AI contract review: The ABA has analogized AI output to work product from a junior associate or paralegal — it must be supervised. You are responsible for what the AI produces, just as you’re responsible for what a first-year associate drafts.

    How to comply:

    Treat AI contract review output the way you’d treat a junior associate’s first draft:

    1. Read the AI’s clause identification against the actual contract. Did it categorize correctly?
    2. Review each flagged risk. Is the risk assessment reasonable given the contract type and business context?
    3. Check “missing clause” findings. Is the clause actually missing, or did the AI fail to identify it in a different section?
    4. Apply your professional judgment. The AI doesn’t know whether the client has strong negotiating leverage, whether this is a must-sign deal, or whether the counterparty will walk if you push too hard.
    5. Sign off on the final work product as your own. It’s your analysis. You’re responsible.

    Want to see what an ethically designed AI contract review workflow looks like? Upload any contract to Clause Labs — structured output, no hallucinated citations, full confidentiality protections. The tool is built around the exact framework Formal Opinion 512 requires.

    State-by-State Bar Positions on AI in 2026

    Beyond the ABA’s national guidance, individual state bars have issued their own opinions and rules. Here’s where the major jurisdictions stand.

    States with Specific AI Ethics Guidance

    • California — Practical Guidance (Nov 2023): Competence requires understanding LLMs before use; assess hallucination risks and data privacy. Source: State Bar Board of Trustees
    • Florida — Opinion 24-1 (Jan 2024): Mandatory disclosure when AI impacts billing or costs; reasonable oversight; confidentiality protections. Source: Florida Bar
    • Texas — Opinion 705 (Feb 2025): Human oversight of AI-generated work; prevent submission of fabricated citations. Source: Texas Ethics Committee
    • New York — NYSBA AI Task Force Report (2025): Phased roadmap for AI adoption; requires 2 annual CLE credits in AI competency. Source: NYSBA
    • Oregon — Formal Opinion 2025-205: Comprehensive coverage of competence, confidentiality, billing disclosure, court filings, and supervision. Source: Oregon State Bar
    • D.C. — Rule 1.1 Comment Amendment (Apr 2025): Adopted technology competence language matching ABA Model Rule 1.1 Comment 8. Source: D.C. Court of Appeals
    • Puerto Rico — Rule 1.19 (effective Jan 2026): Goes beyond the ABA Model Rules by making technological competence and diligence a standalone rule. Source: Supreme Court of Puerto Rico

    The Trend Across All States

    According to Justia’s 50-state survey of AI and attorney ethics rules, the trajectory is clear:

    • No state bar has prohibited AI use in legal practice
    • Multiple states require or are considering mandatory AI disclosure to clients
    • Florida is leading on billing transparency for AI-assisted work
    • New York is leading on CLE requirements for AI competence
    • Every state with published guidance emphasizes the same core principle: AI output must be verified by a licensed attorney

    If your state hasn’t published specific AI guidance, the ABA’s Formal Opinion 512 provides the baseline framework — and your state’s adoption of Model Rule 1.1 Comment 8 (technology competence) creates an independent obligation to understand AI tools.

    The Mata v. Avianca Problem — And Why It Doesn’t Apply to Contract Review

    Every conversation about legal AI eventually circles back to Mata v. Avianca, Inc., No. 22-cv-1461 (S.D.N.Y. 2023). Attorney Steven Schwartz used ChatGPT to research case law; ChatGPT fabricated six non-existent cases; Schwartz submitted the brief without verifying the citations; and the court imposed $5,000 in sanctions against Schwartz and his co-counsel for violating Federal Rule of Civil Procedure 11.

    Mata v. Avianca is the case that made lawyers afraid of AI. But the lesson isn’t “don’t use AI” — it’s “don’t submit AI output without verification.” Schwartz didn’t fail because he used ChatGPT. He failed because he trusted it blindly.

    More importantly, contract review AI operates in a fundamentally different risk category than legal research AI:

    Legal research AI generates content from scratch — case citations, legal arguments, rule interpretations. This is where hallucination risk is highest, because the AI is creating output that doesn’t exist in the input.

    Contract review AI analyzes a specific document you provide. It identifies clauses in text that exists. It flags risks based on what’s actually in the contract. It detects missing provisions by comparing against a framework. It doesn’t generate legal citations, invent case law, or fabricate rules.

    The hallucination risk in contract review is not zero — AI might miscategorize a clause, overstate a risk, or miss a nuance. But it’s categorically different from the kind of fabrication that led to sanctions in Mata v. Avianca. There are no citations to verify because the tool doesn’t generate citations. The output is a structured analysis of text you can see.

    For a complete analysis of the distinction between research AI and review AI, see our article on the Mata v. Avianca problem and how to avoid it. For a broader comparison of purpose-built tools versus general AI for contract review, see our Clause Labs vs ChatGPT analysis.

    Practical Ethics Framework for AI Contract Review

    Here’s a usable five-step framework you can implement today.

    Step 1: Before Adopting Any AI Tool

    Verify data security: Does the tool encrypt data in transit and at rest? Does it retain data after analysis? Does it train on user-uploaded contracts? Is it SOC 2 certified or equivalent?

    Understand how it works: What does the tool analyze? What’s its methodology? What are its known limitations? If you can’t explain it to a client, you’re not ready to use it.

    Check your state bar guidance: Review the table above and check your state bar’s website for published opinions on AI use. If no guidance exists, use ABA Formal Opinion 512 as your baseline.

    Step 2: Before Each Client Use

    Assess appropriateness: Is AI contract review suitable for this contract type? Purpose-built tools handle standard commercial agreements (NDAs, MSAs, employment agreements, SaaS agreements) well. Highly bespoke or novel agreements may need more human attention.

    Client consent: Does your engagement letter include AI disclosure? Does your jurisdiction require specific consent? When in doubt, disclose.

    Step 3: During AI Review

    Upload the contract to your approved AI tool.

    Review the output clause by clause against the actual contract text. Did the AI identify clauses correctly? Are the risk assessments reasonable?

    Verify “missing clause” findings. Is the clause actually missing, or is it addressed in a different section, exhibit, or referenced document?

    Step 4: After AI Review

    Apply professional judgment. Add business context, client-specific considerations, and negotiation strategy that the AI can’t know.

    Prepare the client deliverable. The work product is yours — signed, reviewed, and verified.

    Document your process. Keep records of which tools you used, how you reviewed the output, and what professional judgment you applied. This documentation protects you against malpractice claims and bar complaints.

    Step 5: Ongoing Compliance

    Stay current. Subscribe to your state bar’s updates on AI ethics. Follow the ABA’s professional responsibility publications for updates to Formal Opinion 512 and related guidance.

    Review your AI use policy quarterly. State rules, tool capabilities, and best practices are evolving rapidly.

    Invest in CLE. New York now requires AI competency credits. Other states are likely to follow. Getting ahead of mandatory CLE requirements is both smart practice and good ethics.

    The Ethics of NOT Using AI

    This section makes some lawyers uncomfortable, but the argument is increasingly supported by the profession.

    Model Rule 1.1 Comment 8 requires lawyers to stay current on technology that benefits their practice. When AI contract review tools can:

    • Identify clause types and risks in 30 seconds that manual review might miss after 90 minutes
    • Detect missing provisions that even experienced lawyers overlook when fatigued or rushed
    • Process 10 contracts in the time it takes to manually review one
    • Cost less than a single billable hour per month

    …the question shifts from “Is it ethical to use AI?” to “Is it ethical to not even evaluate it?”

    This doesn’t mean every lawyer must adopt AI tools today. But it means every lawyer should understand what AI contract review tools exist, what they can do, what their limitations are, and whether they might benefit client representation. Willful ignorance of available technology — when that technology could meaningfully improve client outcomes — sits uncomfortably with the duty of competence.

    As the National Association of Attorneys General has noted, the ethical duty of technology competence is not about being an early adopter. It’s about being an informed practitioner.

    Frequently Asked Questions

    Do I need to disclose AI use to clients?

    It depends on your jurisdiction. Florida (Opinion 24-1) requires disclosure when AI impacts billing or costs. New York’s Task Force recommends disclosure as best practice. California’s guidance emphasizes understanding the tools but doesn’t mandate specific disclosure language. The ABA’s Formal Opinion 512 recommends informed consent that goes beyond engagement letter boilerplate. When in doubt, disclose — transparency builds client trust, and over-disclosure is never an ethical violation.

    Can I use ChatGPT for client contracts?

    With significant caveats. ChatGPT’s default consumer tier may use your inputs for model training — a potential Rule 1.6 violation. ChatGPT’s output is inconsistent, unstructured, and prone to hallucination. And it lacks the legal framework, clause identification, and risk scoring that purpose-built tools provide. If you use ChatGPT for contract-related tasks, use the Enterprise or API tier with a data processing agreement, verify every output, and understand that you’re using a general tool for a specialized task. Purpose-built tools like Clause Labs’s free contract analyzer are designed specifically for this workflow — with structured risk output, clause-by-clause analysis, and the data handling safeguards that Rule 1.6 requires.

    What if the AI makes a mistake in its analysis?

    You’re responsible. Just as you’re responsible when a paralegal misreads a clause or a junior associate misidentifies a risk, you’re responsible for the final work product. This is why Rule 5.3 supervision is not optional. Review every AI output before relying on it. If you catch an error, correct it. If an error gets through because you didn’t review the output, the ethical failure is yours — not the AI’s.

    Is there malpractice coverage for AI-assisted work?

    Most malpractice policies don’t explicitly address AI — but they also don’t explicitly exclude it. The standard of care is the same: you must exercise the competence, diligence, and judgment expected of a reasonable lawyer. If AI helps you meet that standard (by catching issues you might have missed), it reduces malpractice risk. If you rely on AI without proper supervision and miss something, the malpractice exposure is the same as any other failure of competence. Best practice: notify your insurer that you use AI tools and get written confirmation that your coverage applies to AI-assisted work product.

    Can I charge full rates for AI-assisted contract review?

    If you bill flat fees: yes. The client is paying for the result, not the methodology. A thoroughly reviewed contract with flagged risks and recommended edits is worth the same to the client whether it took you 90 minutes or 30 minutes.

    If you bill hourly: bill for the time you actually spend. That includes AI tool time, output review, professional judgment application, and client deliverable preparation. Do not bill 90 minutes for 30 minutes of work. Under ABA Model Rule 1.5, fees must be reasonable.

    The broader trend in the profession is clear: AI-assisted efficiency should benefit both lawyer and client, and transparent billing for AI-assisted work builds trust and competitive advantage.

    Ready to see what ethical AI contract review looks like in practice? Try Clause Labs free — upload any contract and get a structured risk analysis in under 60 seconds. No data retention, no model training on your documents, full encryption. Built for lawyers who take their ethical obligations seriously. Start with 3 free reviews per month, no credit card required.


    This article is for informational purposes only and does not constitute legal advice. Ethics rules vary by jurisdiction, and the guidance in this article reflects the legal landscape as of February 2026. Consult your state bar’s ethics hotline or a legal ethics attorney for advice specific to your jurisdiction and practice.

  • Assignment and Change of Control Clauses: Protecting Clients in M&A

Assignment and Change of Control Clauses: Protecting Clients in M&A

    An overlooked assignment restriction in a $12 million asset purchase nearly killed the deal three days before closing. The seller’s largest customer contract — representing 40% of the business’s revenue — contained a standard anti-assignment clause requiring the customer’s “prior written consent in its sole discretion.” The customer refused consent. Without that contract, the buyer’s revenue projections collapsed, the lender pulled financing, and the deal died. The clause was four lines long. Nobody flagged it during due diligence until the buyer’s junior associate ran a final checklist.

    According to the Association of Corporate Counsel, assignment provisions are among the most frequently overlooked clauses in M&A due diligence — and among the most consequential when they surface at the wrong moment. Try Clause Labs Free to see how AI identifies assignment restrictions and change of control provisions across your contract portfolio in minutes instead of days.

    Why Assignment Clauses Matter More Than You Think

    Assignment clauses determine whether a contract can be transferred to a third party. They sound straightforward. In practice, they create three distinct problems:

    In M&A transactions: If key contracts can’t be assigned, the deal may collapse or require significant price reductions. Buyers conducting “contract portability” analysis routinely discover that 20-30% of a target’s material contracts have assignment restrictions that weren’t identified earlier in due diligence.

    In financing: Lenders often require assignment of contract rights (specifically, accounts receivable) as collateral. Under UCC Section 9-406, anti-assignment clauses are generally overridden for assignments of payment rights — but the analysis is more complex than most transactional lawyers realize, and other contract rights may still be restricted.

    In business reorganizations: Companies that can’t assign contracts to affiliates or subsidiaries face obstacles during internal restructuring, tax-driven entity changes, and corporate reorganizations.

    As our contract review red flags checklist notes, assignment provisions are one of the most commonly missed red flags in routine contract review because they appear in the “boilerplate” section that gets the least attorney attention.

    Types of Assignment Restrictions

    Not all anti-assignment provisions are created equal. The differences in language create dramatically different consequences.

    The most common formulation:

    Neither party may assign this Agreement without the prior written consent
    of the other party.
    

    The critical question is the consent standard. Two options dominate:

    “Not to be unreasonably withheld, conditioned, or delayed” — This is the assignor-friendly standard. The non-assigning party must have a legitimate business reason to refuse consent. Courts will evaluate whether the refusal was reasonable under the circumstances. If you’re the party who might need to assign the contract, fight for this language.

    “In its sole discretion” — This is effectively a veto. The non-assigning party can refuse for any reason or no reason. If you’re reviewing a contract that restricts assignment to “sole discretion” consent, understand that this clause may block a future sale of your client’s business entirely.

    The difference between these two standards has been described by practitioners as the difference between a speed bump and a brick wall. Negotiate accordingly.

    No Assignment At All

    This Agreement may not be assigned by either party under any circumstances.
    

    The strictest form. Even with consent, no assignment is permitted. In practice, courts sometimes refuse to enforce absolute prohibitions — particularly when the assignment occurs by operation of law (e.g., a merger). But drafting this language into a contract creates substantial risk for any party contemplating a future transaction.

    Assignment Permitted to Affiliates

    A standard carve-out:

    Either party may assign this Agreement to any affiliate without the other
    party's consent, provided that the assigning party remains liable for
    performance hereunder.
    

    This language permits internal reorganizations, subsidiary transfers, and corporate restructuring without triggering consent requirements. Two drafting issues to watch:

    1. Definition of “affiliate”: Does it require majority ownership (50%+)? Any ownership? Common control? The definition matters when a parent company reduces its stake in a subsidiary below the control threshold.

    2. Continuing liability: The “remains liable” language protects the non-assigning party if the affiliate fails to perform. Without it, the assignment could effectively transfer the obligation to an entity with fewer resources.

    Change of Control Triggers

    The provision that creates the most M&A friction:

    A change of control of either party shall be deemed an assignment for
    purposes of this Section, and shall require the prior written consent
    of the other party.
    

    This language captures transactions that don’t technically involve assignment — mergers, acquisitions, majority ownership changes — and subjects them to the same consent requirements. We’ll cover this in depth below.

    Assignment in M&A Transactions

    The treatment of assignment varies dramatically depending on the transaction structure.

    Asset Sales

    In an asset sale, the buyer acquires specific assets of the seller — including contracts. Each contract must be explicitly assigned from seller to buyer. Anti-assignment clauses are the primary obstacle.

    The due diligence imperative: Buyers should identify every material contract, categorize the assignment restriction in each, and determine whether consent is required. This “contract portability analysis” should happen early in due diligence — not three days before closing.

    Third-party consent process: For contracts requiring consent, the consent solicitation process typically involves:
    1. Identifying all contracts requiring consent
    2. Drafting consent request letters
    3. Sending requests with sufficient lead time before closing
    4. Negotiating with reluctant counterparties
    5. Determining which consents are conditions to closing

    Deal risk: If a material contract’s counterparty refuses consent, the deal structure may need to change. Options include converting to a stock sale (which avoids assignment), using an intermediary entity, or accepting the contract won’t transfer and adjusting the purchase price.
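The consent-tracking exercise above lends itself to a simple checklist structure. Here is a hypothetical sketch (the class and field names are illustrative assumptions, not any specific tool's schema) of how a due diligence team might track which consents are still outstanding as closing conditions:

```python
from dataclasses import dataclass

# Hypothetical sketch of a consent-tracking record for the asset-sale
# due diligence process described above. Names are illustrative only.

@dataclass
class ContractConsent:
    counterparty: str
    consent_standard: str       # e.g. "sole discretion", "not unreasonably withheld"
    consent_obtained: bool
    condition_to_closing: bool

def open_closing_conditions(contracts: list[ContractConsent]) -> list[str]:
    """Counterparties whose consent is a closing condition and still outstanding."""
    return [c.counterparty for c in contracts
            if c.condition_to_closing and not c.consent_obtained]

portfolio = [
    ContractConsent("Acme Supply Co.", "not unreasonably withheld", True, True),
    ContractConsent("BigCustomer Inc.", "sole discretion", False, True),
    ContractConsent("Minor Vendor LLC", "none required", True, False),
]
print(open_closing_conditions(portfolio))  # ['BigCustomer Inc.']
```

Even a spreadsheet with these four columns, reviewed weekly against the closing schedule, avoids the "three days before closing" scenario that opened this article.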

    Stock and Equity Sales

    In a stock or equity sale, the buyer acquires the equity of the target entity. The entity itself continues to exist — it just has a new owner. Since the contracts stay with the entity, no “assignment” occurs in the technical sense.

    But here’s the trap: change of control provisions may trigger consent requirements anyway. A well-drafted anti-assignment clause often includes “whether by merger, consolidation, change of control, operation of law, or otherwise.” This language captures stock sales, merging the assignment and change of control concepts.

    Buyers in stock transactions must review anti-assignment clauses just as carefully as in asset deals. The question isn’t “Is there an assignment?” — it’s “Does the clause define this transaction as an assignment?”

    Mergers

    In a merger, the target entity merges into the buyer (or a subsidiary), and the contracts transfer by operation of law. The general rule is that contracts survive a merger.

    However, as noted by contract drafting experts, many anti-assignment clauses specifically address mergers: “This Agreement may not be assigned, whether by merger, operation of law, or otherwise, without prior written consent.” If the clause includes “operation of law” or specifically references mergers, consent may be required even in a statutory merger.

    Courts are split on whether a standard anti-assignment clause (without the “operation of law” language) prohibits transfer by merger. The safest practice: assume it does, and plan accordingly.

    The Change of Control Clause

    Change of control provisions deserve separate analysis because they capture transactions that don’t technically involve assignment.

    What Triggers Change of Control

    A typical change of control definition includes:

    • Acquisition of majority voting power (50%+ of outstanding shares)
    • Merger or consolidation with another entity
    • Sale of all or substantially all assets (which overlaps with assignment)
    • Change in board composition (a majority of directors replaced)
    • IPO (sometimes included, though less common)

    The definition matters enormously. A narrow definition capturing only majority ownership changes leaves room for a 49% stake acquisition that gives effective control without triggering the clause. A broad definition capturing any “change in management or control” could be triggered by an executive departure.

    What Happens Upon Change of Control

    The consequences range from manageable to deal-breaking:

    | Consequence | Impact | Negotiability |
    | --- | --- | --- |
    | Nothing (no provision) | Contract continues undisturbed | N/A |
    | Notice required | Other party must be informed | Low friction |
    | Consent required | Other party can refuse | High friction |
    | Termination right | Other party can exit the contract | High risk |
    | Renegotiation right | Other party can demand new terms | Moderate risk |
    | Acceleration | Payments or obligations accelerate | Financial risk |

    In M&A, the termination right is the most dangerous outcome. If a material customer contract gives the customer the right to terminate upon change of control — with no compensation and no cure period — that contract’s value to the buyer drops substantially.

    Assignment and Change of Control Red Flags

    When reviewing contracts, flag these issues. For a comprehensive approach to identifying contract red flags, combine manual review with AI-assisted analysis.

    Red Flag 1: No Anti-Assignment Clause

    A contract with no assignment restriction can be freely assigned to anyone. This means your client’s counterparty could transfer the contract to a competitor, a company with fewer resources, or an entity in a less favorable jurisdiction. For service contracts where the identity of the performing party matters, this is a serious gap.

    Red Flag 2: “Sole Discretion” Consent Standard

    As discussed above, “sole discretion” is effectively a veto. If your client might ever sell their business, this clause blocks the transaction unless the counterparty cooperates — and the counterparty has no obligation to cooperate.

    Red Flag 3: Change of Control Treated as Assignment

    If the clause deems a change of control to be an assignment, every potential acquisition, merger, or significant equity investment requires the counterparty’s consent. This can make your client’s business unsaleable or significantly reduce its value.

    Red Flag 4: No Affiliate Carve-Out

    Without an affiliate assignment exception, your client can’t reorganize internally. Moving a contract from a parent to a subsidiary, from one subsidiary to another, or consolidating entities all require consent. For growing companies, this is an unnecessary friction point.

    Red Flag 5: “By Operation of Law” Language

    The phrase “whether by merger, operation of law, or otherwise” is designed to capture every possible form of transfer — including statutory mergers. If this language appears in a contract, there is no transaction structure that avoids the consent requirement.

    Red Flag 6: Termination Right Upon Change of Control with No Compensation

    Some contracts give the non-assigning party the right to terminate immediately upon change of control, with no cure period and no termination fee. This puts the counterparty in a position to extract concessions during M&A negotiations: “I’ll consent to the change of control, but I want a 20% price reduction on the contract.”

    Red Flag 7: One-Sided Assignment Rights

    Watch for contracts where only one party’s assignment is restricted. If the vendor can freely assign the contract (including to an entity that provides inferior service) but your client cannot, the provision is fundamentally unbalanced.

    Negotiation Strategies

    When you encounter problematic assignment provisions, here are the most effective negotiation approaches.

    Push for “consent not to be unreasonably withheld.” This is the single most valuable edit you can make to an anti-assignment clause. It converts a potential veto into a reasonableness standard that courts can evaluate.

    Carve out affiliate assignments and internal reorganizations. Most counterparties will agree that internal corporate restructuring shouldn’t require consent, especially if the assigning party remains liable for performance.

    Limit the change of control definition to actual third-party acquisitions. Exclude internal reorganizations, IPOs, and changes in board composition from the change of control trigger. Narrow it to: acquisition of 50%+ of voting power by a third party that is not an affiliate.

    Add cure periods. If a change of control triggers a consent requirement, include a cure period (30-60 days) during which the assigning party can obtain consent before any termination right arises.

    Negotiate a termination fee as an alternative to consent. If the counterparty insists on a termination right upon change of control, negotiate a termination fee that compensates your client for the early termination. This removes the counterparty’s incentive to use the termination right as negotiating leverage.

    Include assignment rights in connection with a sale of the business. Specifically permit assignment in connection with a sale of all or substantially all of the assigning party’s assets, or in connection with a merger or consolidation. This directly addresses the M&A scenario.

    For detailed guidance on limitation of liability provisions that interact with assignment clauses — particularly in the context of successor liability — see our deep-dive guide. And if you’re reviewing vendor agreements where assignment is just one of many risk areas, our vendor agreement red flags guide covers the full picture.

    Want to check assignment provisions across multiple contracts before a transaction? Upload your first contract to Clause Labs for a free AI risk analysis — the Solo plan at $49/month handles 25 reviews for ongoing due diligence needs.

    How Clause Labs Reviews Assignment Provisions

    Clause Labs’s AI identifies assignment restrictions, change of control provisions, and consent requirements across every contract you upload. Specifically, it:

    • Flags missing anti-assignment clauses (open-ended transferability risk)
    • Detects one-sided assignment rights
    • Identifies the consent standard (“sole discretion” vs. “not unreasonably withheld”)
    • Checks for change of control treatment and M&A implications
    • Flags “operation of law” language that captures mergers
    • Detects missing affiliate carve-outs
    • Identifies termination rights triggered by assignment or change of control
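As a rough illustration of the phrase-level screening such a tool might perform, here is a hypothetical sketch (not Clause Labs's actual implementation; the patterns and labels are assumptions for demonstration) that flags common red-flag language in an assignment clause:

```python
import re

# Hypothetical illustration only: a naive keyword screen for assignment
# red flags. Real tools use far more sophisticated analysis.

RED_FLAG_PATTERNS = {
    "sole_discretion_consent": r"sole\s+discretion",
    "operation_of_law": r"operation\s+of\s+law",
    "change_of_control_as_assignment": r"change\s+of\s+control.*deemed\s+an?\s+assignment",
    "absolute_prohibition": r"may\s+not\s+be\s+assigned\s+.*under\s+any\s+circumstances",
}

def screen_assignment_clause(clause_text: str) -> list[str]:
    """Return the red-flag labels whose patterns appear in the clause."""
    text = clause_text.lower()
    return [label for label, pattern in RED_FLAG_PATTERNS.items()
            if re.search(pattern, text)]

clause = ("Neither party may assign this Agreement, whether by merger, "
          "operation of law, or otherwise, without the prior written "
          "consent of the other party, in its sole discretion.")
print(screen_assignment_clause(clause))
# ['sole_discretion_consent', 'operation_of_law']
```

A keyword screen like this catches the obvious formulations; the value of a purpose-built tool is in the harder cases, where the restriction is phrased in language no checklist anticipated.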

    For M&A due diligence involving multiple contracts, the Professional plan ($149/month for up to 100 reviews) or the Team plan ($299/month with batch review for up to 10 contracts at once) can process an entire contract portfolio in hours instead of weeks.

    Frequently Asked Questions

    Can a contract be assigned without the other party’s consent?

    It depends on the contract language. If there’s no anti-assignment clause, contracts are generally freely assignable. If the clause says “no assignment without consent,” assignment without consent is a breach — but courts are split on whether the assignment is void (no effect) or merely voidable (effective until challenged). UCC Section 9-406 overrides anti-assignment clauses for assignments of accounts receivable and payment rights as security interests, regardless of what the contract says.

    Does a merger trigger anti-assignment clauses?

    It depends on the clause’s language. A standard “no assignment without consent” clause, without more, may or may not cover mergers — courts are divided. But clauses that include “whether by merger, operation of law, or otherwise” unambiguously capture mergers. OlenderFeldman LLP’s analysis of this issue notes that careful drafting is essential to resolve the ambiguity. Review every material contract’s specific language; don’t rely on general rules.

    What’s the difference between assignment and change of control?

    Assignment transfers the contract itself from one party to a new party. Change of control changes who owns or controls the contracting party, but the contracting party itself remains the same. A stock sale changes control; an asset sale requires assignment. Many contracts treat a change of control as if it were an assignment, but they are legally distinct concepts with different implications for SaaS agreements and other recurring-revenue contracts.

    Can I assign contract rights but not obligations?

    Technically yes — rights are assignable, obligations are delegable. You can assign the right to receive payment under a contract without delegating the obligation to perform. However, most anti-assignment clauses prohibit both assignment of rights and delegation of duties. And the non-assigning party is rarely willing to accept a new obligor without some say in the matter. The practical reality: most assignments involve both rights and obligations.

    How do assignment clauses affect M&A due diligence?

    They are a critical component of the “contract portability” analysis in any deal. Buyers should: (1) identify all material contracts, (2) categorize the assignment restriction in each, (3) determine which require consent and under what standard, (4) assess the likelihood of obtaining consent from each counterparty, and (5) build the consent timeline into the closing schedule. Contracts where consent is unlikely should be flagged as deal risks and reflected in the purchase price negotiation.


    This article is for informational purposes only and does not constitute legal advice. Assignment and change of control provisions have significant implications for M&A transactions, financing, and business operations. Consult a qualified attorney for advice specific to your situation.

    Need to review assignment provisions across a portfolio of contracts? Clause Labs’s batch review processes up to 10 contracts simultaneously, flagging assignment restrictions, change of control triggers, and consent requirements — so your due diligence team can focus on negotiation strategy instead of document review.

  • Governing Law and Jurisdiction Clauses: How to Choose (and Why It Matters)

Governing Law and Jurisdiction Clauses: How to Choose (and Why It Matters)

    Most lawyers copy-paste their governing law clause from the last deal they closed. According to Clio’s 2025 Legal Trends Report, solo and small firm attorneys average a utilization rate of just 38%, meaning most of a packed day goes to non-billable work, and the “boilerplate” sections of contracts get the least attention. The governing law clause is almost always one of them. That’s a mistake that can cost your client six figures in litigation expenses when a dispute lands in the wrong court, under the wrong state’s laws, with the wrong procedural rules.

    A governing law clause determines which state or country’s laws interpret the contract. A jurisdiction clause determines where disputes get litigated. These are different provisions that serve different functions, and getting either one wrong creates problems that compound the moment a dispute arises. Try Clause Labs Free to see how AI flags governing law mismatches and missing venue provisions in seconds.

    Two Clauses, Two Different Questions

    The governing law clause answers: “Which legal framework applies to this contract?” The jurisdiction (or venue) clause answers: “Where do the parties resolve disputes?”

    They often appear in the same paragraph, which leads many attorneys to treat them as a single provision. They aren’t. You can have New York governing law with California jurisdiction — a California court applying New York law. Whether that’s wise is a different question entirely.

    Governing law determines:
    – How courts interpret ambiguous contract language
    – Which default rules fill gaps the contract doesn’t address
    – What remedies are available for breach
    – Which statute of limitations applies
    – Whether specific performance is available

    Jurisdiction and venue determine:
    – Which court hears the dispute
    – What procedural rules apply
    – Whether a jury trial is available
    – How fast the case moves through litigation
    – What discovery rules apply

    Treating these as afterthought provisions is the contractual equivalent of letting your opponent choose the playing field and the rulebook.

    How to Choose Governing Law

    The choice of governing law should be a strategic decision, not a default. Here are the factors that should drive it.

    Where the Parties Are Located

    If both parties are in the same state, that state’s law is the natural choice. Courts are most comfortable applying their own law, and both sides’ attorneys presumably know it. The calculus gets more complex when parties are in different states — or different countries.

    Which State’s Laws Favor Your Client

    This is the factor most lawyers skip. Different states reach different conclusions on the same contractual issues. Non-compete enforceability, for example, varies dramatically: California Bus. & Prof. Code Section 16600 voids most non-competes, while Florida Statute Section 542.335 enforces them with specific requirements.

    Before defaulting to your home state, ask: On the issues most likely to be disputed in this contract, which state’s law produces the best outcome for my client?

    Industry Conventions

    Certain states have earned their role as default choices for specific transaction types:

    | State | Preferred For | Why |
    | --- | --- | --- |
    | Delaware | Corporate agreements, LLC operating agreements, M&A | Most developed body of corporate law; Court of Chancery specializes in business disputes; extensive case law on fiduciary duties |
    | New York | Financial contracts, complex commercial deals, licensing | Sophisticated commercial law; GOL Section 5-1401 honors choice of New York law for transactions of $250,000 or more even without a nexus to the state |
    | California | Technology contracts, employment agreements | Strong employee protections; developed IP and trade secret law; tech industry precedents |
    | Texas | Energy, oil and gas, natural resources | Favorable business law; developed body of energy contract precedent |
    | England & Wales | International commercial contracts | Well-developed common law; perceived as neutral for cross-border deals; extensive arbitration infrastructure in London |

    The UCC Reasonable Relation Test

    For contracts involving the sale of goods, UCC Section 1-301 requires a “reasonable relation” between the transaction and the chosen state’s law. A transaction has a reasonable relation to a state when a significant enough portion of the making or performance of the contract occurs there. Choosing a state with no connection to the deal may result in a court refusing to honor the choice.

    For non-UCC contracts, courts apply a similar but less codified analysis under the Restatement (Second) of Conflict of Laws. The general rule: choice of law clauses are enforceable unless they violate fundamental public policy of the state whose law would otherwise apply, or the chosen state has no substantial relationship to the parties or transaction.

    The New York exception: New York GOL Section 5-1401 allows parties to choose New York law for any commercial transaction valued at $250,000 or more, regardless of whether the transaction has any connection to New York. This is why New York law appears in so many financial contracts — the parties don’t need a nexus to the state.

    How to Choose Jurisdiction and Venue

    Jurisdiction and venue decisions involve a different set of considerations than governing law.

    Exclusive vs. Non-Exclusive Jurisdiction

    This is often the most consequential drafting choice in the entire provision.

    Exclusive jurisdiction: Disputes must be litigated in the designated forum. No other court will hear the case (assuming the clause is enforceable). This gives you predictability — you know exactly where you’ll litigate.

    Non-exclusive jurisdiction: Disputes may be litigated in the designated forum, but other forums are also available. This gives flexibility but less certainty.

    When to push for exclusive jurisdiction:
    – Your client is the likely defendant (you want to litigate at home)
    – You want to prevent forum shopping by the other party
    – The designated court has subject matter expertise (e.g., Delaware Chancery for corporate disputes)

    When to accept non-exclusive jurisdiction:
    – Your client might need to sue in the other party’s jurisdiction to reach their assets
    – You want to preserve the option of filing where the harm occurred
    – The other side won’t agree to exclusive jurisdiction in your forum

    Factors That Matter

    Court sophistication. Not all courts handle commercial disputes equally well. Delaware’s Court of Chancery, New York’s Commercial Division, and the federal courts in the Southern District of New York are known for judges with deep commercial experience.

    Speed of resolution. Some jurisdictions move faster than others. If your client needs a quick resolution (e.g., trade secret injunction), choose a forum known for efficient case management.

    Cost of litigation. Litigating in New York City or San Francisco is materially more expensive than litigating in smaller markets. Factor in travel costs, local counsel fees, and the general cost of doing business in that jurisdiction.

    Jury trial availability. Some contracts include jury trial waivers. If you can’t get a waiver, consider that some jurisdictions and judge pools are more favorable to plaintiffs or defendants.

    The Mismatch Problem

    Governing law and jurisdiction don’t have to match — but mismatches create real costs.

    Consider this scenario: A contract specifies New York governing law with venue in California state court. A dispute arises. Now you have a California judge applying New York law. The practical consequences:

    • Expert testimony costs: The California court may need expert testimony on New York law (yes, this happens — judges aren’t presumed to know other states’ law)
    • Interpretive risk: The California court may apply New York statutory language through a California jurisprudential lens, reaching a result that no New York court would reach
    • Appellate uncertainty: California appellate courts review New York law questions de novo, but may lack the institutional expertise to get it right

    The Holland & Knight guide on drafting choice of law provisions recommends aligning governing law and jurisdiction whenever possible. The only common exception: Delaware governing law with New York venue, which works because New York courts routinely apply Delaware corporate law and have deep familiarity with it.

    Practice tip: If your client insists on their home state for jurisdiction but the other side insists on a different state for governing law, the mismatch is usually not worth the compromise. Push to align them, or agree on a neutral third state for both.

    Governing Law Red Flags

    When reviewing contracts — especially those drafted by the other side — watch for these issues. If you want a fast first pass, upload the contract to Clause Labs and the AI will flag governing law and jurisdiction issues automatically.

    No Governing Law Clause At All

    Without a choice of law provision, courts apply their own conflict-of-laws rules to determine which state’s law governs. This is unpredictable, expensive to litigate, and gives the party who files first an advantage (they choose the forum, which influences the conflict-of-laws analysis). As our complete contract review checklist explains, a missing governing law clause is a red flag in any commercial agreement.

    Governing Law With No Connection to Either Party

    If neither party is located in the chosen state, no performance occurs there, and the state wasn’t chosen for its favorable legal framework, courts may refuse to honor the choice. Under the Restatement approach, there must be a “substantial relationship” between the parties/transaction and the chosen state, or another reasonable basis for the choice.

    Mandatory Local Laws That Override the Choice

    Certain laws can’t be contracted around, regardless of what the governing law clause says:

    • Employment laws: You can’t use a Texas governing law clause to avoid California wage and hour requirements for a California employee
    • Consumer protection statutes: Many states prohibit choice of law clauses in consumer contracts that would strip consumers of their home state protections
    • Insurance regulations: Some states require their own insurance laws to apply to policies issued or delivered in-state
    • Real property: The law of the state where real property is located generally governs real property transactions, regardless of contractual choice

    Multiple Conflicting Provisions

    In long contracts — especially those assembled from multiple precedents — governing law provisions sometimes appear in more than one place and contradict each other. Section 18 says “New York law,” but the arbitration clause in Section 22 says “the arbitration shall be governed by California law.” This ambiguity is litigated more often than you’d expect. If you’re reviewing vendor agreements for red flags, contradictory governing law provisions should be near the top of your checklist.
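To illustrate how contradictory provisions like this can be caught mechanically, here is a hypothetical sketch (the regex is deliberately simplified and would miss many real-world formulations) that scans contract text for governing-law statements and flags conflicts:

```python
import re

# Hypothetical sketch: find governing-law statements in a contract and
# flag contradictions. Illustrative only; not any tool's actual logic.

GOVERNING_LAW_RE = re.compile(
    r"governed\s+by\s+(?:the\s+)?laws?\s+of\s+"
    r"(?:the\s+State\s+of\s+)?([A-Z][a-zA-Z ]+?)(?=[,.;])"
)

def find_governing_law_conflicts(contract_text: str) -> set[str]:
    """Return the set of distinct jurisdictions named in governing-law language."""
    return {m.strip() for m in GOVERNING_LAW_RE.findall(contract_text)}

contract = (
    "Section 18. This Agreement shall be governed by the laws of the "
    "State of New York, without regard to conflict of laws principles. "
    "Section 22. Any arbitration shall be governed by the laws of "
    "California, and conducted in San Francisco."
)
states = find_governing_law_conflicts(contract)
if len(states) > 1:
    print(f"Conflicting governing law provisions: {sorted(states)}")
```

The point is not the regex itself but the discipline: in a long agreement, search every occurrence of “governed by” before signing, because precedent-assembled drafts contradict themselves more often than anyone expects.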

    Special Considerations by Contract Type

    International Contracts

    Cross-border deals raise additional layers of complexity.

    The CISG trap: If you choose “the laws of the State of New York” as governing law in an international sale of goods, you may inadvertently invoke the United Nations Convention on Contracts for the International Sale of Goods (CISG) — which is part of federal law in the United States and automatically applies to international goods sales unless explicitly excluded. A standard choice of law clause does not opt out of CISG. You must specifically state: “The parties agree that the CISG shall not apply to this agreement.”

    Arbitration in international disputes: For international contracts, arbitration is often preferable to litigation because of the New York Convention, which makes arbitral awards enforceable in 172 countries. Foreign court judgments, by contrast, have no equivalent enforcement mechanism.

    Seat of arbitration: In international arbitration, the “seat” determines the procedural law governing the arbitration and the courts with supervisory jurisdiction. Common seats include London, Singapore, Paris, and New York. The seat should be in a jurisdiction that supports arbitration and has acceded to the New York Convention.

    Employment Contracts

    Employment law creates some of the most rigid constraints on governing law choices.

    You generally cannot use a choice of law clause to avoid the mandatory employment protections of the state where the employee works. A Delaware governing law clause in an employment agreement for a California-based employee won’t prevent California courts from applying California’s prohibition on non-competes, its overtime rules, or its meal and rest break requirements.

    ABA Model Rule 1.1 requires competence in understanding these jurisdiction-specific limitations. A limitation of liability clause might be enforceable under the chosen governing law but unenforceable for certain employment claims under the employee’s local law.

    SaaS and Technology Agreements

    SaaS agreements commonly specify California or Delaware governing law, reflecting the vendor’s home jurisdiction. As discussed in our SaaS agreement review guide, the governing law choice in a SaaS contract can affect:

    • Whether SLA credits constitute an adequate remedy
    • Data breach notification requirements and timing
    • Whether automatic renewal provisions are enforceable
    • Which privacy and data protection laws apply to the vendor’s data processing

    Sample Governing Law and Jurisdiction Clauses

    Basic Governing Law with Exclusive Jurisdiction

    Governing Law. This Agreement shall be governed by and construed in accordance
    with the laws of the State of [State], without regard to its conflict of laws
    principles.
    
    Jurisdiction. Each party irrevocably submits to the exclusive jurisdiction of
    the state and federal courts located in [County], [State] for the purpose of
    any suit, action, or proceeding arising out of or relating to this Agreement.
    

    Note: The “without regard to conflict of laws principles” language is critical. Without it, a court might apply the chosen state’s conflict-of-laws rules, which could point to a different state’s substantive law — defeating the entire purpose of the clause.

    Governing Law with Jury Trial Waiver

    EACH PARTY HEREBY IRREVOCABLY WAIVES ALL RIGHT TO TRIAL BY JURY IN ANY
    ACTION, PROCEEDING, OR COUNTERCLAIM ARISING OUT OF OR RELATING TO THIS
    AGREEMENT.
    

    Jury trial waivers are enforceable in most (but not all) jurisdictions. They are standard in financial contracts and increasingly common in commercial agreements. Courts generally require that waivers be conspicuous — hence the all-caps formatting.

    International Contract with Arbitration

    Governing Law. This Agreement shall be governed by the laws of England and
    Wales. The parties expressly agree that the United Nations Convention on
    Contracts for the International Sale of Goods (CISG) shall not apply to
    this Agreement.
    
    Dispute Resolution. Any dispute arising out of or in connection with this
    Agreement shall be finally resolved by arbitration under the Rules of the
    London Court of International Arbitration (LCIA). The seat of arbitration
    shall be London. The language of the arbitration shall be English.
    

    How Clause Labs Reviews Governing Law and Jurisdiction Provisions

    Clause Labs’s AI analysis identifies governing law and jurisdiction provisions and checks for:

    • Missing governing law clause (flagged as a Critical risk)
    • Mismatch between governing law and jurisdiction
    • Exclusive vs. non-exclusive jurisdiction designation
    • Missing “without regard to conflict of laws principles” language
    • Jury trial waivers (flagged for awareness)
    • CISG opt-out in international sale of goods contracts
    • Multiple conflicting governing law provisions

    The AI doesn’t replace your judgment about which governing law is best for your client — that requires understanding the specific deal, the parties’ relative positions, and the substantive issues most likely to be disputed. But it catches the structural issues that many lawyers miss in the boilerplate sections — the same issues that create problems with assignment and change of control clauses when deals move to due diligence.

    Frequently Asked Questions

    Does governing law have to be where one of the parties is located?

    No. Parties can generally choose any state’s law, subject to limitations. Under UCC Section 1-301, the transaction must bear a “reasonable relation” to the chosen state for goods contracts. For non-UCC contracts, courts require a “substantial relationship” or “reasonable basis” for the choice. The major exception is New York, where General Obligations Law Section 5-1401 allows parties to choose New York law for any commercial transaction of $250,000 or more, regardless of nexus.

    What’s the difference between jurisdiction and venue?

    Jurisdiction refers to a court’s authority to hear a case — whether the court has power over the parties and the subject matter. Venue refers to the specific geographic location within a jurisdiction where the case is filed. A clause might specify “the federal courts located in the State of New York” (jurisdiction) and “the Southern District of New York” (venue). Both are important; specifying only jurisdiction leaves the question of which specific courthouse open to the filing party.

    Should I always push for my client’s home state?

    Not necessarily. Your client’s home state might have unfavorable law on the most contentious issues. A vendor in California selling SaaS to enterprise customers might prefer Delaware governing law because Delaware enforces limitation of liability provisions more predictably than California. Analyze the substantive issues first, then choose the governing law that best serves your client’s interests.

    Can governing law clauses be challenged?

    Yes. Courts may refuse to enforce a choice of law clause if: (1) the chosen state has no substantial relationship to the transaction and there’s no reasonable basis for the choice; (2) application of the chosen law would violate a fundamental policy of the state whose law would otherwise apply; or (3) mandatory local laws override the contractual choice (employment, consumer protection, insurance). The party challenging the clause bears the burden of showing why it shouldn’t be enforced.

    Can I choose different governing law for different parts of the contract?

    Yes, this is called “dépeçage.” You might specify Delaware law for corporate governance provisions and New York law for the commercial terms. It’s uncommon outside complex M&A transactions, and it creates interpretive challenges at the boundary between provisions. Unless there’s a compelling reason, stick with a single governing law for the entire agreement.


    This article is for informational purposes only and does not constitute legal advice. Governing law and jurisdiction choices have significant legal consequences. Consult a qualified attorney licensed in the relevant jurisdictions for advice specific to your situation.

    Ready to check your contracts for governing law and jurisdiction issues? Upload any contract to Clause Labs — free for 3 reviews per month, no credit card required — and get an AI-powered risk analysis in under 60 seconds.

  • Most Favored Nation Clauses: A Plain English Guide for Transactional Lawyers

    Most Favored Nation Clauses: A Plain English Guide for Transactional Lawyers

    A SaaS vendor signed an MFN clause with Client A guaranteeing “the most favorable pricing offered to any similarly situated customer.” Eighteen months later, the vendor offered Client B a 40% discount to close an end-of-quarter deal. Client A’s procurement team audited, found the discount, and demanded retroactive price matching across 18 months of invoices. The bill: $380,000 in credits. The vendor’s sales team had no idea the MFN clause existed when they approved Client B’s discount.

    Most Favored Nation clauses — also called “best pricing,” “price parity,” or “most favored customer” provisions — create obligations that ripple across every future deal. Yet they’re often treated as boilerplate, buried in a pricing appendix, and forgotten until an audit surfaces a trigger event. According to World Commerce & Contracting, contract-related value leakage costs businesses an average of 9% of annual revenue, and MFN clauses are a significant contributor because their implications compound as the business grows.

    This article explains how MFN clauses work, the different types, the antitrust risks, and the specific drafting choices that determine whether an MFN protects your client or constrains every future transaction. If you’re reviewing a contract with an MFN provision right now, upload it to Clause Labs’s free analyzer to flag scope issues, missing exclusions, and retroactive triggers in under 60 seconds.

    What Is a Most Favored Nation Clause?

    In plain terms: “If you give someone else a better deal, you have to give me the same deal.”

    The concept originates in international trade law — trade treaties where one nation guarantees another the best tariff rates offered to any trading partner. In commercial contracts, an MFN clause guarantees that one party (the beneficiary) will receive terms at least as favorable as those offered to any comparable customer, partner, or counterpart.

    A basic MFN provision looks like this:

    “Vendor shall not charge Customer fees in excess of the lowest fees charged by Vendor to any other customer for substantially similar services during the term of this Agreement.”

    Simple to read. Extraordinarily complex in practice.

    The core variables that determine how an MFN actually operates:

    • Scope: Does the MFN cover pricing only, or all terms (SLAs, payment terms, support levels)?
    • Comparator: What constitutes “similarly situated” or “comparable”? Same volume? Same contract length? Same industry?
    • Direction: One-way (protects only the beneficiary) or mutual?
    • Trigger: Automatic adjustment, adjustment upon request, or adjustment upon audit?
    • Timing: Retroactive (credits for past overcharges) or prospective (better terms going forward only)?

    Each variable dramatically changes the clause’s practical impact.
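To see how the scope and comparator variables interact, here is a minimal sketch. The `Deal` fields, the volume-tolerance band, and all numbers are our own invented assumptions, not drawn from any real clause — the point is that “similarly situated” only works if someone has pinned down these exact parameters:

```python
from dataclasses import dataclass

@dataclass
class Deal:
    """A hypothetical comparator deal, reduced to the variables an MFN cares about."""
    price_per_unit: float
    volume: int
    service_tier: str
    term_months: int

def mfn_triggered(beneficiary: Deal, comparator: Deal,
                  volume_tolerance: float = 0.5) -> bool:
    """Return True if the comparator deal would trigger a pricing MFN.

    Assumes "similarly situated" means same service tier, same term,
    and volume within +/- volume_tolerance of the beneficiary's volume --
    exactly the definitions a well-drafted clause must spell out.
    """
    comparable = (
        comparator.service_tier == beneficiary.service_tier
        and comparator.term_months == beneficiary.term_months
        and abs(comparator.volume - beneficiary.volume)
            <= volume_tolerance * beneficiary.volume
    )
    return comparable and comparator.price_per_unit < beneficiary.price_per_unit

client_a = Deal(price_per_unit=100.0, volume=100, service_tier="enterprise", term_months=36)
client_b = Deal(price_per_unit=60.0, volume=10_000, service_tier="enterprise", term_months=36)

# Client B's cheaper per-unit price falls outside the volume band, so no trigger
print(mfn_triggered(client_a, client_b))  # False
```

Change `volume_tolerance` (or delete the volume test entirely, as a clause with no volume carve-out effectively does) and the same discount becomes a trigger — which is why the definitional fights matter more than the headline obligation.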

    Types of MFN Clauses

    Pricing MFN

    The most common form. The vendor guarantees the beneficiary pricing that is at least as favorable as the best pricing offered to any comparable customer.

    Vendor represents that the fees set forth herein are no higher than the lowest
    fees charged by Vendor to any other customer for substantially similar services
    in similar quantities. If Vendor offers lower fees to any such customer during
    the term of this Agreement, Vendor shall reduce Customer's fees accordingly.
    

    Key issues: What constitutes “substantially similar services”? What does “similar quantities” mean? If Client A buys 100 licenses and Client B gets a discount for buying 10,000, does the MFN trigger? Without precise definitions, these questions become disputes.

    Terms MFN

    Broader and more aggressive. The beneficiary is entitled not just to the best pricing but to the best overall terms — SLAs, payment terms, support response times, warranty periods, anything.

    If Vendor offers to any customer more favorable terms and conditions than those
    set forth in this Agreement for comparable services, Vendor shall promptly
    notify Customer and extend such more favorable terms to Customer upon request.
    

    Why it’s dangerous for the grantor: Every negotiation with every other customer is constrained. A vendor who agrees to a 99.99% SLA with one customer may be obligated to provide that same SLA to the MFN beneficiary — even if the MFN beneficiary is paying a fraction of the price.

    Retroactive vs. Prospective

    Retroactive MFN: If the vendor offers Client B a lower price today, Client A receives credits retroactive to the start of the agreement (or the start of the current term). This is the most expensive version for the grantor. The SaaS vendor in the opening example faced $380,000 in credits because the MFN was retroactive.

    Prospective MFN: Better terms apply going forward only, from the date the trigger event occurs. The beneficiary doesn’t receive credits for historical overcharges. This is significantly less costly and more commercially reasonable.

    The difference between retroactive and prospective application can represent hundreds of thousands of dollars. This single word in the drafting — “retroactively” vs. “prospectively” — is worth more negotiation time than most lawyers give it.
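The arithmetic is worth making explicit. The numbers below are hypothetical (loosely shaped like the opening example, but not its actual figures):

```python
def mfn_credit(monthly_fee: float, discounted_fee: float,
               months_since_trigger: int, months_remaining: int,
               retroactive: bool) -> float:
    """Exposure created by an MFN adjustment: a retroactive clause pays
    credits back to the trigger date; a prospective one only lowers
    future invoices."""
    monthly_gap = monthly_fee - discounted_fee
    future = monthly_gap * months_remaining          # owed either way
    back_credits = monthly_gap * months_since_trigger if retroactive else 0.0
    return future + back_credits

# Hypothetical: $35,000/month fee, a 40% discount offered elsewhere,
# discovered 18 months after the trigger, 6 months left on the term.
gap_retro = mfn_credit(35_000, 21_000, 18, 6, retroactive=True)
gap_prosp = mfn_credit(35_000, 21_000, 18, 6, retroactive=False)
print(gap_retro - gap_prosp)  # 252000.0 -- the cost of one drafting choice
```

Eighteen months of back credits dwarf the go-forward adjustment, which is the whole negotiation in miniature.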

    Audit-Based vs. Automatic

    Audit-based MFN: The beneficiary has the right to audit the grantor’s records to determine whether better terms have been offered elsewhere. If the audit reveals a trigger, the beneficiary receives an adjustment.

    Automatic MFN: The grantor must proactively notify the beneficiary and adjust terms whenever a trigger event occurs, without waiting for an audit.

    Practical reality: Audit-based MFNs are more common because they shift the enforcement burden to the beneficiary. But they also create friction — audits are expensive, time-consuming, and can damage the commercial relationship. Automatic MFNs are more protective but require the grantor to build compliance monitoring into every pricing decision.

    MFN Clause Risks and Traps

    Risks for the Grantor (Vendor/Supplier)

    Every future deal is constrained. Once you grant an MFN, your sales team can’t offer a discount, promotional price, or custom deal structure to any customer without potentially triggering obligations to the MFN beneficiary.

    Volume discounts trigger MFN for small customers. If Client B gets a 30% discount for committing to 10x the volume of Client A, the MFN beneficiary may argue they’re entitled to the same 30% discount at their smaller volume — unless the clause explicitly excludes volume-based pricing.

    Promotional pricing creates cascading obligations. End-of-quarter discounts, launch promotions, and competitive displacement pricing all potentially trigger MFN adjustments.

    Stacking problem. If multiple customers have MFN clauses, any discount offered to one customer triggers adjustments for all MFN holders. The result is a “race to the bottom” where the vendor’s best discount becomes the price floor for every MFN customer.

    Custom deal structures become nearly impossible. Creative pricing — bundled services, tiered commitments, value-based pricing — gets flattened into commodity pricing because any variation could trigger an MFN claim.
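The stacking problem above is simple arithmetic, which is exactly why it gets expensive. A hypothetical sketch (the customer names, prices, and volumes are invented, and it assumes the worst case — every MFN holder counts as “comparable” because the clauses have no exclusions):

```python
def stacking_exposure(mfn_holders: list[tuple[str, float, int]],
                      discounted_unit_price: float) -> float:
    """Monthly revenue given up across all MFN holders when one
    customer's discount becomes the de facto price floor.

    mfn_holders: list of (name, current_unit_price, units).
    """
    return sum(
        max(price - discounted_unit_price, 0.0) * units
        for _name, price, units in mfn_holders
    )

holders = [("Client A", 100.0, 500),
           ("Client B", 95.0, 800),
           ("Client C", 110.0, 300)]

# One end-of-quarter deal at $70/unit cascades to every MFN customer:
print(stacking_exposure(holders, 70.0))  # 47000.0 per month
```

One discounted deal, three adjusted contracts, and the exposure recurs every billing cycle — the “race to the bottom” in a dozen lines.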

    Risks for the Beneficiary (Customer/Buyer)

    Enforcement is difficult. How does the beneficiary know the vendor offered someone else a better deal? Unless the MFN includes audit rights, the beneficiary is relying on the vendor’s voluntary compliance.

    “Comparable” loophole. Vendors learn to structure deals to avoid MFN triggers. Different service tiers, different packaging, different payment structures — all designed to make each deal “not comparable” to the MFN baseline.

    Bespoke SOW pricing. If pricing adjustments are buried in statements of work rather than the master agreement, the MFN may not capture them. This is particularly common in MSA/SOW structures — see our guide on reviewing contracts for red flags for related patterns.

    Audit costs. Exercising audit rights — hiring accountants, reviewing records, traveling to the vendor’s offices — can cost more than the expected recovery. According to the ABA Business Law Today analysis of MFN enforcement challenges, the cost of enforcement often exceeds the benefit for small or mid-size contracts.

    The Antitrust Dimension

    MFN clauses aren’t just a contract drafting issue — they carry significant antitrust risk.

    The DOJ and FTC Perspective

    The Department of Justice has studied MFN clauses extensively. In a public workshop on MFN clauses and antitrust enforcement, the DOJ’s Antitrust Division examined how MFNs can reduce price competition, facilitate coordinated pricing, and create barriers to entry.

    According to Winston & Strawn’s antitrust analysis, the primary antitrust concerns with MFN clauses are:

    • Reduced price competition. If a vendor can’t offer lower prices to new customers without triggering MFN adjustments across existing contracts, the incentive to compete on price diminishes.
    • Facilitated tacit collusion. MFN clauses make price floors more transparent and stable, reducing the incentive for vendors to cut prices.
    • Barriers to entry. New market entrants who would normally compete on price are disadvantaged because existing vendors’ MFN obligations prevent competitive responses.

    The Amazon Example

    The most high-profile MFN enforcement action involves Amazon. The FTC and 17 state attorneys general filed a complaint alleging that Amazon’s “fair pricing” policies function as de facto MFN clauses — requiring sellers to maintain the lowest prices on Amazon or face suppression in search results and removal from the Buy Box. According to Bona Law’s analysis, no U.S. court has yet found that an MFN provision, standing alone, violates antitrust law — but courts have approved consent decrees that enjoined MFN use.

    Practical Takeaway

    MFN clauses in contracts with significant market power deserve antitrust scrutiny. For typical B2B contracts between non-dominant parties, antitrust risk is low but not zero. If your client is the dominant player in their market, consult antitrust counsel before granting or requiring broad MFN provisions.

    Whether you’re granting or receiving an MFN, Clause Labs’s Professional tier ($149/month) lets you compare MFN language across contracts side by side using the contract comparison feature — so you can spot scope variations and inconsistencies before they create cascading obligations.

    Drafting and Negotiating MFN Clauses

    For the Grantor: Limiting MFN Exposure

    If you must agree to an MFN, negotiate these limitations:

    Narrow the scope to pricing only. Don’t agree to a terms MFN if a pricing MFN will satisfy the beneficiary. SLA commitments, support levels, and payment terms should remain individually negotiated.

    Define “comparable” precisely. Specify the comparators: same service tier, same volume range, same contract term, same payment structure. The more specific, the fewer trigger events.

    Exclude volume discounts and promotional pricing. Carve out pricing offered in connection with volume commitments exceeding X units, time-limited promotional offers, competitive displacement deals, and strategic partnership pricing.

    Make it prospective, not retroactive. If better pricing is offered elsewhere, the adjustment applies going forward from the date of discovery — not retroactively to the start of the agreement.

    Limit the audit right. Allow one audit per 12-month period, at the beneficiary’s expense (unless the audit reveals a material discrepancy, in which case the grantor pays). Restrict audit scope to pricing records, not all commercial terms.

    Cap the remedy. If the MFN triggers, the remedy is a price adjustment and, if retroactive, a credit for the difference. The MFN should not give the beneficiary a termination right or damages claim.

    For the Beneficiary: Strengthening MFN Protection

    If you’re seeking MFN protection for your client:

    Broaden the comparator. Push for “any customer” rather than “similarly situated customer.” The broader the comparator group, the more trigger events the MFN captures.

    Make it automatic with notification. The grantor must proactively notify the beneficiary and adjust pricing whenever better terms are offered elsewhere — don’t rely on audits to discover triggers.

    Include retroactive adjustment. If the grantor offered better pricing six months ago and didn’t notify, the beneficiary should receive credits back to the date of the trigger event.

    Secure audit rights. Annual audit right at grantor’s expense if material discrepancies are found. Include access to pricing records, customer lists (redacted as needed), and discount approvals.

    Add a termination right. If the grantor fails to honor the MFN after notification, the beneficiary can terminate for cause without a cure period — this creates real enforcement teeth.

    MFN by Contract Type

    SaaS Agreements

    What’s typical: Pricing MFN, prospective, audit-based. “Vendor will not charge Customer more than the lowest published list price for the same service tier and usage level.”

    What’s negotiable: Published list price vs. actual transaction price (major difference — discounts aren’t reflected in list prices). Audit rights. Whether custom enterprise pricing triggers the MFN.

    Red flag: MFN tied to “list price” only. Vendors can offer 50% off list to other customers and argue the MFN only benchmarks against the undiscounted price. For a deeper analysis of SaaS-specific provisions, see our guide on reviewing SaaS agreements.

    Licensing Agreements

    What’s typical: Royalty MFN. “Licensor shall not grant a license for the Licensed Technology to any third party at a royalty rate lower than Customer’s rate.”

    Key issue: Do different license scopes (exclusive vs. non-exclusive, territory-limited vs. worldwide) qualify as “comparable”? They usually shouldn’t, and the clause should say so explicitly.

    Supply and Distribution Agreements

    What’s typical: Pricing and territory MFN. “Supplier shall not sell Product to any other distributor in the Territory at a price below the price charged to Distributor.”

    Key issue: Whether the MFN extends to direct sales by the supplier (competing with its own distributor). A well-drafted MFN in a distribution agreement covers both third-party and direct-sale pricing.

    Insurance Contracts

    What’s typical: Coverage MFN. “Insurer shall provide Insured with coverage terms no less favorable than those offered to any other insured with a comparable risk profile.”

    Key issue: “Comparable risk profile” is inherently subjective and frequently disputed. The clause should specify the risk factors that define comparability.

    MFN Red Flags to Catch in Review

    When reviewing any contract with an MFN provision, flag these issues:

    • One-sided MFN without reciprocal obligations. Only one party is constrained. The other can negotiate freely.
    • No exclusions for volume discounts, promotional pricing, or strategic deals. Every pricing decision becomes an MFN trigger.
    • Retroactive adjustment without cap. Credits could extend back to the start of the agreement — potentially years of pricing differential.
    • “All terms” scope without limitation. The beneficiary can cherry-pick the best individual terms from different contracts, creating a Frankenstein agreement no real customer has.
    • No audit frequency limitation. The beneficiary could audit continuously, creating administrative burden and relationship friction.
    • No definition of “comparable” or “similarly situated.” Everything is comparable until the grantor proves otherwise — and the burden of proof is on the wrong party.
    • Interaction with limitation of liability. MFN credits and adjustments should be covered by (not excluded from) the contract’s liability cap.

    How Clause Labs Handles MFN Clauses

    MFN provisions are notoriously easy to miss during review — they’re often buried in pricing schedules, general terms sections, or appendices rather than called out in a dedicated section. Clause Labs’s AI identifies MFN provisions regardless of where they appear and flags:

    • One-sided MFN with no reciprocal obligations
    • Missing exclusions for volume, promotional, and strategic pricing
    • Retroactive trigger mechanisms
    • Absent audit-right limitations
    • Interaction between MFN obligations and other contract provisions (pricing, termination, limitation of liability)

    Frequently Asked Questions

    Are MFN clauses enforceable?

    Yes — MFN clauses are generally enforceable as standard contract provisions. Courts treat them like any other pricing or commercial term, applying standard contract interpretation principles. The primary enforceability challenge isn’t the clause itself but proving that a trigger event occurred (i.e., that the grantor offered better terms elsewhere). This is why audit rights are essential for MFN beneficiaries.

    Can I have an MFN clause in a SaaS agreement?

    Yes, and they’re increasingly common — particularly in enterprise SaaS contracts where the customer has significant negotiating leverage. The SaaS vendor will typically push for a narrow MFN limited to published list pricing for the same service tier, while the customer will push for a broader MFN covering actual transaction pricing. The final MFN scope depends on leverage and competitive dynamics.

    How do I enforce an MFN clause?

    Enforcement typically requires: (1) discovering that better terms were offered elsewhere (through audit rights, market intelligence, or the grantor’s notification obligation), (2) demonstrating that the comparator deal involves “comparable” or “similarly situated” customers (as defined in the clause), and (3) demanding the adjustment (price credit, go-forward reduction, or both). If the grantor refuses, the remedy depends on the contract — it may be a breach-of-contract claim, a termination right, or both.

    Should I accept an MFN clause from my vendor?

    As the beneficiary, an MFN clause protects your client against preferential pricing — so yes, seek it when you have leverage. But understand its limitations: enforcement is expensive, “comparable” is easy to argue around, and the vendor will likely structure future deals to avoid triggering the MFN. An MFN is better than nothing, but it’s not a guarantee of the best price — it’s a contractual tool that requires monitoring and enforcement.

    Do MFN clauses create antitrust issues?

    They can, particularly when imposed by a dominant market player. The DOJ and FTC have scrutinized MFN clauses in contexts where they reduce price competition, facilitate collusion, or create barriers to market entry. For contracts between non-dominant parties in competitive markets, antitrust risk is low. For contracts involving platforms, dominant suppliers, or industry-wide pricing standards, consult antitrust counsel. As noted in Secretariat’s competition analysis, platform MFN clauses (PMFNs) receive the highest level of regulatory scrutiny.


    This article is for informational purposes only and does not constitute legal advice. Consult a qualified attorney for advice specific to your situation.


    MFN clauses create obligations that compound across every future deal. If you’re reviewing a contract with pricing parity, best-pricing, or MFN language, upload it to Clause Labs to check the scope, exclusions, and trigger mechanisms before your client signs. Free tier: 3 reviews/month, no credit card required.