Automated Contracting in Europe: developments and future directions


The European Commission is currently grappling with the question: should there be European regulation for automated contracting? If yes, should it be based on the UNCITRAL Model Law on Automated Contracting, or should an alternative version with additional provisions be considered? As AI systems increasingly influence commercial and legal relationships, questions regarding the attribution of actions, contractual intent, and consumer protection are at the forefront of legal discussions. The recent Münster Colloquia on EU Law and the Digital Economy, held at the University of Münster on January 9-10, 2025, offered critical insights into this landscape, highlighting the complexities of regulating automated contracting within the EU and the ongoing efforts to balance legal certainty with technological innovation.

Beyond digital assistants, automated contracting is already deeply integrated into multiple industries, shaping commercial transactions in real time. In finance, automated loan approvals and smart derivatives streamline complex agreements. Supply chains rely on automated procurement and dynamic contract execution to enhance efficiency. In healthcare, AI-driven insurance claims processing accelerates settlements. Real estate platforms employ automated lease agreements, while AI negotiates mergers and acquisitions with minimal human intervention. For example, KPMG’s Cognitive Contract Management (CCM) platform automates the contract transition process during M&A transactions, expediting due diligence and negotiations. IBM Watson, integrated with ContractPodAi, enhances contract analysis and management, providing deep insights and accelerating contract assembly and review. JPMorgan’s COiN (Contract Intelligence) platform, by contrast, is primarily designed to analyze legal documents using machine learning; its direct association with mergers and acquisitions (M&A) is limited. In the legal field, automated contracting is widely used for contract drafting, negotiation, and compliance monitoring. AI-powered tools such as Juro, Ironclad, and Evisort automate contract lifecycle management, while Kira Systems and LawGeex analyze agreements for risks and inconsistencies. DocuSign CLM streamlines contract execution, and IBM Watson Contract Analyzer assists with due diligence and compliance. These tools are transforming industries such as finance, real estate, and procurement by reducing manual effort and improving accuracy. However, human oversight remains essential to ensure accuracy, mitigate risks, and handle complex legal nuances.

These widespread applications highlight the growing reliance on automated contracting, making it imperative to develop legal frameworks that address evolving risks, liability concerns, and enforcement mechanisms.

 

A closer look at the evolving framework for automated contracting

The UNCITRAL Model Law provides a foundational framework for addressing AI’s role in contract formation. It differentiates between deterministic and non-deterministic systems, offering guidelines for automated contracting, smart contracts, and dynamic information processing. This adaptability makes it relevant to emerging technologies. However, as a “model law,” it serves only as a recommended framework for national adoption and may not resolve all challenges. Contract law, deeply intertwined with societal and economic concerns, must address complex issues such as AI’s “unexpected actions” or errors that fall outside pre-agreed parameters. Questions such as how to handle errors or unexpected actions by AI, that is, actions that deviate significantly from its programming or exceed the parties’ expectations, are more specific than the general scope of the Model Law. These challenges often require additional principles or tailored legislation. At least, that is the expectation we, as contract lawyers, would like to maintain.

The European AI Act, insights from the ELI Guiding Principles and Model Rules on Algorithmic Contracts (ELI Guiding Principles), and Prof. Christiane Wendehorst’s guiding principles are also shaping this evolving landscape. The AI Act is already a cornerstone of AI regulation in Europe, and scholarly discussions at the colloquia emphasized the absence of specific provisions on automated contracting in the EU and the need to set forth clear guidelines, especially in cases where current national laws fail to provide certainty. Issues such as the validity of contracts, attribution for determining liability, disclosure, unexpected outcomes or outputs, and consumer protection are key areas where legal harmonization at the EU level is necessary.

Prof. Wendehorst’s guiding principles (informally referred to as Wendehorst’s Principles at the colloquia) emphasize the importance of aligning AI’s actions with the intent of the contracting parties. These principles and the ELI Guiding Principles highlight AI’s growing role as an agent in contractual processes, underscoring the need for clarity in attributing accountability for AI’s actions to its deployers, whether natural or legal persons. They propose mechanisms such as pre-contractual disclosures to ensure AI systems operate within agreed legal and business objectives.

While these frameworks provide significant guidance, they leave certain gaps, particularly concerning rapidly evolving AI technologies. For example, they do not fully address the implications of non-deterministic AI or black-box systems. Additionally, ensuring consistency across jurisdictions remains a challenge, highlighting the need for further harmonization at EU and international levels. Proposals such as fall-back liability provisions and procedural safeguards aim to address accountability for unexpected AI actions, but adapting these principles to future technological advancements will require ongoing refinement and coordination. It is also worth noting that neither of these efforts, namely the ELI Guiding Principles and Wendehorst’s Principles, is final at the moment.

The dearth of legal cases on automated contracting is another aspect worth examining, since discussions still rely largely on hypothetical examples. The scarcity of case law can be attributed to a combination of factors. On one hand, the existing foundational elements of contract formation (or, in the terminology of the Principles of European Contract Law (PECL), rules: specific, structured norms that directly determine legal outcomes), such as offer, acceptance, and intention, are broad and flexible enough to address disputes involving automation, potentially reducing the need for litigation. On the other hand, a lack of awareness about the unique challenges posed by automated contracting, such as attribution of actions and liability for errors, might deter parties from bringing these issues to court. Furthermore, businesses often resolve disputes informally or through private arbitration to avoid setting legal precedents that could invite regulatory scrutiny. Prof. Wendehorst highlights the “surprisingly little legislation on automated contracting” but not the scarcity of legal precedents. She stresses the need to develop principles that can inspire the future development of the law and guide courts and practitioners in automated contract-related disputes.

 Another consideration in this regard is the flexibility of common law systems. Unlike codified legal systems, common law jurisdictions rely on a precedent-based approach, which shapes the evolution of contract law through judicial decisions. While common law is often perceived as more adaptable due to the absence of a comprehensive civil code, judicial constraints, such as adherence to precedent, also play a significant role in shaping its application. How this approach interacts with the complexities of automated contracting remains an open question, as it may allow for nuanced interpretations based on emerging cases while also presenting challenges in ensuring consistency and predictability.

Assuming that the common law’s adaptability can be advantageous, its evolution through judicial decisions allows courts to interpret and adapt the elements of contract law in the context of AI systems. This adaptability provides a valuable framework for addressing novel issues, such as determining liability for AI’s autonomous actions or assessing whether AI systems align with the contractual intent of the parties. As AI technology progresses, common law jurisdictions may be positioned to develop innovative solutions, influencing global discussions on regulating algorithmic contracts.

However, this flexibility does not come without challenges. While the U.S. may be seen as a favourable environment for AI-driven corporate ventures due to its legal flexibility, there are concerns about the protection of contractual parties, particularly consumers and smaller businesses, who may not have the same negotiating power as large corporations and their decision makers (typically board members). The absence of comprehensive regulation of automated contracting in the U.S. means that contract law under the Uniform Commercial Code (UCC) and state-level common law serves as the governing framework, supplemented by sector-specific regulations where applicable. Yet this reliance on existing legal structures raises questions about the sufficiency of protections for less powerful parties in the face of technological advancements and the increasing dominance of major corporate players.

Additionally, the U.S. legal framework provides protections for businesses through mechanisms such as freedom of contract and intellectual property rights. However, it is important to note that intellectual property law, particularly in the context of AI technologies, has faced significant scrutiny due to the unprecedented number of claims regarding violations of IP rights. These cases have highlighted some of the limitations of IP law, particularly in addressing the unique challenges posed by AI’s autonomous capabilities.

In terms of legislation, as many as 28 U.S. states have enacted or at least supplemented their legislation addressing blockchain technology, smart contracts, and verifiable credentials. These legislative efforts aim to provide legal recognition and frameworks for the use of smart contracts and related technologies within their jurisdictions. However, this reference is limited in scope, as the focus here is on automated contracting, which extends beyond smart contracts to include AI-driven contract formation, execution, and enforcement. Prof. Wendehorst highlights the limited legislative attention to automated contracting, with existing laws such as the U.S. Uniform Electronic Transactions Act (UETA) 1999, the Model Computer Information Transactions Act (MCITA) (1999/2002), and the Draft UCC Article 2 (2003) only offering partial guidance rather than a comprehensive legal framework.

A U.S. professor at the colloquia argued that, despite the unpredictable nature of automated contracting, the existing safeguards in contract law can still protect consumer intent. This assertion reinforces the argument that common law’s inherent flexibility, coupled with judicial interpretation, aims to provide a safety net for contractual parties, ensuring that AI-driven contracts remain aligned with fundamental legal principles. Indeed, courts can still use the existing guardrails of contract law to protect consumers. The fact that they can use them does not, however, make those contractual rules the most adequate ones.

 

Future Directions

As AI continues to reshape the landscape of contract formation, some European scholars argue that the need for clear, consistent, and forward-thinking regulation is undeniable. On one hand, there is a scholarly perspective advocating minimal regulatory intervention to avoid placing additional burdens on businesses. On the other hand, some scholars contend that existing legal frameworks, such as the German Civil Code and the PECL, are sufficient to address the legal challenges of automated contracting. This may be true to an extent, but it does not mean that private law, in particular European private law, should not rethink some of its principles and rules in light of unprecedented digitalization. Meanwhile, industry voices, particularly from the EU’s small and medium-sized enterprises (SMEs), express deep concerns over regulatory fatigue. Many businesses are still grappling with compliance under recently enacted laws, and the mere prospect of additional regulation is met with apprehension, a sentiment strongly echoed by industry representatives. Given these diverging perspectives, the question remains: do we need new regulation? It is yet to be seen how the European Commission will navigate this debate. Many in the scholarly community, however, consider regulatory clarity essential for promoting legal certainty and protecting stakeholders by addressing potential gaps.

The European Commission’s current approach of seeking opinions on the potential need for regulating automated contracting reflects a broader recognition of the gaps in existing legal frameworks. By drawing on international guidance such as the UNCITRAL Model Law, the ELI Guiding Principles, Prof. Wendehorst’s guiding principles, other scholars’ surveys of the matter, and the AI Act, Europe has an opportunity to create a legal environment that balances innovation with legal certainty, paving the way for AI to play a transformative role in the world of contracting.
