TL;DR: The French Court of Cassation is exploring the use of artificial intelligence (AI) to enhance judicial efficiency through a methodological, ethical, and pragmatic approach. The report advocates specific AI uses to strengthen judges' analysis and internal organization, while maintaining a clear red line: the final decision always belongs to the judge, not the algorithm.
On April 28th, the dedicated AI working group of the Court of Cassation, the guardian of legal interpretation in France, officially submitted its report "Court of Cassation and Artificial Intelligence: Preparing the Court of Tomorrow". It explores the current and potential uses of AI to enhance the efficiency of judicial work, advocating for a methodological, ethical, and pragmatic approach.
The Court of Cassation benefits from a strategic advantage in this regard: its internal data science team, which is rare in the European judicial landscape, allows it to independently develop the necessary tools, reduce costs, and ensure transparency. This technical autonomy proves even more valuable in a context marked by the growing tension between technological performance and institutional independence.
Tasked by the legislature with ensuring the open-data release of judicial decisions, the Court notably developed a judicial-decision pseudonymization tool in 2019 and, the following year, an AI-based appeal orientation system.
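To make the idea of pseudonymization concrete, here is a minimal, purely illustrative sketch. It is not the Court's actual tool: a production system would use named-entity recognition to locate party names, whereas this sketch takes the name-to-placeholder mapping as a given input.

```python
import re

def pseudonymize(text: str, parties: dict[str, str]) -> str:
    """Replace each named party with a neutral placeholder.

    `parties` maps real names to placeholders, e.g. {"Jean Dupont": "[PARTY A]"}.
    In a real pipeline, named-entity recognition would supply these names;
    here the mapping is supplied explicitly to keep the sketch self-contained.
    """
    for name, placeholder in parties.items():
        # Word boundaries (\b) avoid replacing substrings of longer names.
        text = re.sub(rf"\b{re.escape(name)}\b", placeholder, text)
    return text

decision = "The court finds that Jean Dupont owes Marie Leroy 500 euros."
print(pseudonymize(decision, {"Jean Dupont": "[PARTY A]", "Marie Leroy": "[PARTY B]"}))
# → The court finds that [PARTY A] owes [PARTY B] 500 euros.
```

The point of the placeholder scheme is that the published decision remains fully readable and legally intelligible while the individuals are no longer directly identifiable.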
Building on this expertise, its First President Christophe Soulard and Attorney General Rémy Heitz formed a dedicated AI working group, chaired by Sandrine Zientara, chamber president and director of the Documentation, Studies, and Reporting Service (SDER), a key entity of the Court of Cassation.
A Rigorous Approach Focused on the Institution's Real Needs
The group, mandated in May 2024, adopted a cross-disciplinary methodology involving magistrates, clerks, researchers, and AI experts. A survey of the Court's staff, enriched by external hearings (ENM, CEPEJ, legaltechs, national and European high courts), identified a wide range of use cases.
Avoiding the hype around generative AI, the report also explores the contributions of more proven technologies: expert systems and supervised or hybrid learning.
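An expert system of the kind the report alludes to can be as simple as a rule table. The sketch below is hypothetical (the chamber names and keywords are invented for illustration, and the Court's real orientation system is ML-based); it shows how a rule-based approach might route an appeal summary to a chamber.

```python
# Hypothetical rule table mapping keywords to chambers; this is an
# illustrative expert-system sketch, not the Court's actual system.
RULES = [
    ({"dismissal", "employment", "wages"}, "Social chamber"),
    ({"contract", "sale", "warranty"}, "Commercial chamber"),
    ({"fraud", "theft", "sentence"}, "Criminal chamber"),
]

def orient(summary: str) -> str:
    """Route a case summary to the chamber whose keywords overlap it most."""
    words = set(summary.lower().split())
    # Pick the rule with the largest keyword overlap; ties keep the first rule.
    best = max(RULES, key=lambda rule: len(rule[0] & words))
    # If no rule matches at all, fall back to human triage.
    return best[1] if best[0] & words else "Manual triage"

print(orient("Claim for unpaid wages after dismissal"))  # → Social chamber
```

Unlike a learned model, every routing decision here is directly explainable by pointing at the matched keywords, which is exactly the transparency property that makes such "proven" techniques attractive in a judicial setting.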
Pragmatic Use Cases, Prioritized According to Their Value and Risks
The identified cases are classified into five major categories, ranging from document structuring and enrichment to writing assistance, including party submissions analysis and case law research. These uses are designed to strengthen magistrates' analytical capacity, improve processing quality, and optimize internal organization, without interfering in judicial decision-making. Some, like automatic brief enrichment or precedent detection, offer a good efficiency/cost ratio without raising major ethical issues. Others, notably those related to assisted writing, appear more sensitive. The deliberate choice not to introduce decision-support tools reflects a clear institutional red line: the decision belongs to the judge, not the algorithm.
Tools for mapping disputes, detecting connections between cases, or analyzing large case law volumes could enhance legal coherence, better identify emerging disputes, and facilitate dialogue between judges.
Strict Criteria to Frame AI Use
The multi-criteria analysis conducted for each use case (ethical, legal, functional, technical, and economic) moves beyond opportunistic experimentation to anchor AI uses in a reasoned and replicable framework. This evaluation model could inspire other jurisdictions, in France or abroad, keen to combine innovation and legal security. The report emphasizes transparency and explainability of AI systems, their frugality, compliance with the GDPR and the AI Act, data hosting control, and technological sovereignty. These requirements are a reminder that AI integration cannot be divorced from rigorous governance grounded in fundamental legal values.
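A multi-criteria grid like the one described can be pictured as a simple scoring function. The ratings, weights, and veto rule below are invented for illustration only; the report does not publish a numeric scoring formula.

```python
# Illustrative scoring grid over the report's five criteria; the ratings
# and the veto rule are hypothetical, not taken from the report.
CRITERIA = ("ethical", "legal", "functional", "technical", "economic")

def score(use_case: dict[str, int]) -> float:
    """Average the five criterion ratings (0-5 each).

    Hypothetical veto rule: a very low ethical or legal rating
    disqualifies the use case regardless of its other merits.
    """
    if use_case["ethical"] < 2 or use_case["legal"] < 2:
        return 0.0
    return sum(use_case[c] for c in CRITERIA) / len(CRITERIA)

# Example: a low-risk use case such as automatic brief enrichment
# (all numbers are made up for the sketch).
brief_enrichment = {"ethical": 5, "legal": 5, "functional": 4,
                    "technical": 4, "economic": 3}
print(score(brief_enrichment))  # → 4.2
```

The veto rule captures the report's logic that ethical and legal acceptability are gating conditions, not just factors to be traded off against efficiency.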
Responsible Governance and Continuous Monitoring
The report recommends setting up an internal supervisory committee responsible for operational and ethical monitoring of AI uses, a guide of best practices, and the adoption of an ethical charter specific to the Court. It also stresses the importance of independent governance and a gradual acculturation of magistrates and judicial staff to these emerging technologies.
Translated from "IA et justice : la Cour de cassation prépare l'avenir"
To better understand
What is the AI Act and how does it influence the use of AI by the Court of Cassation?
The AI Act is an EU regulation establishing harmonized rules for AI across the Union. It affects the Court of Cassation by imposing strict transparency and security requirements on AI systems used in the judicial field.
How did pseudonymization in the judicial field evolve before its adoption by the Court of Cassation in 2019?
Pseudonymization of judicial decisions arose in response to growing personal-data-protection requirements, shaped by laws such as the EU's GDPR. The Court of Cassation adopted the technique in 2019 to protect privacy while keeping judicial decisions publicly accessible.