Like ICML 2020 itself, the Law and Machine Learning (LML) workshop will move to a virtual format. Most logistical details are still TBD, but we can already announce that the ICML online platform will be used to stream the talks of the LML workshop.
This first workshop on Law and Machine Learning aims to contribute to research on the social and legal risks of deploying AI systems that make machine-learning-based decisions.
Today, algorithms infiltrate and govern ever more aspects of our lives, as individuals and as a society. In particular, Algorithmic Decision Systems (ADS) are involved in many social decisions. For instance, such systems are increasingly used to support decision-making in fields such as child welfare, criminal justice, school assignment, teacher evaluation, fire risk assessment, homelessness prioritization, healthcare, Medicaid benefits, immigration, and predictive policing, among others. Law enforcement agencies increasingly rely on facial recognition and on algorithmic predictive policing systems to forecast criminal activity and allocate police resources. However, these predictive systems challenge fundamental rights and the guarantees of criminal procedure. For several years, numerous studies have documented the social risks of ML, especially the risks of opacity, bias, and manipulation of information.
Although the deployment of such systems is only beginning, more interdisciplinary research is already needed. The purpose of this workshop is to contribute to this new field, which brings together legal researchers, mathematicians, and computer scientists, by bridging the gap between the performance of algorithmic systems and legal standards. For instance, notions like “privacy” or “fairness” are formulated in law as well as in applied mathematics and computer science, but their meanings and implications are not necessarily identical. Moreover, legal norms to regulate AI systems appear in certain national laws but must remain relevant to, and compatible with, technical requirements. Furthermore, compliance with these standards must be verifiable by legal experts and regulators, which presupposes that AI systems are sufficiently intelligible and transparent. These issues arise across topics such as privacy in data analysis and fairness in algorithmic decision-making. The workshop will cover research that exposes these risks and, above all, multidisciplinary research that proposes solutions, especially legal and technical ones.
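As a minimal illustration of this divergence, consider two standard formalizations of “fairness” from the machine learning literature (these definitions are drawn from that literature, not from any legal text or from the workshop itself), stated for a classifier with prediction $\hat{Y}$, protected attribute $A$, and true outcome $Y$:

\[
\text{Demographic parity:} \quad \Pr(\hat{Y} = 1 \mid A = a) \;=\; \Pr(\hat{Y} = 1 \mid A = a') \quad \text{for all } a, a';
\]
\[
\text{Equalized odds:} \quad \Pr(\hat{Y} = 1 \mid A = a, Y = y) \;=\; \Pr(\hat{Y} = 1 \mid A = a', Y = y) \quad \text{for all } a, a', y.
\]

When the base rates $\Pr(Y = 1 \mid A = a)$ differ across groups, these two criteria can be satisfied simultaneously only by a classifier whose prediction is independent of the true outcome; and neither criterion coincides with legal tests such as disparate treatment or disparate impact, so choosing among them is a normative question as much as a technical one.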
This workshop also aims to consider an AI regulatory framework. Specific legal rules on algorithmic decision-making have been enacted in the US, Europe, and Canada, following three different approaches: certain norms grant rights, such as the right to be informed and the right not to be subject to a decision based solely on automated processing; other rules impose an algorithmic impact assessment before ML systems are deployed; and, finally, other lawmakers have established task forces to observe the impact of machine learning before adopting legal rules. Other sectoral regulations concern personal data protection, autonomous vehicles, biometric systems, and facial recognition. This workshop can produce recommendations for lawmakers, and can also lead machine learning researchers to integrate legal requirements into their algorithms and into the development process of those algorithms, depending on the sectors in which they are applied.
This workshop will bring together legal researchers, mathematicians, and computer scientists from the law and machine learning communities, and will highlight recent work that contributes to addressing these challenges.
Our agenda will feature contributed papers and posters alongside invited keynote speakers.
Submissions follow two tracks: full conference papers and two-page abstracts (posters).
Full papers present methodological developments, well-validated applications, or legal analyses covering the topics of the workshop. To encourage contributions from different fields, there is no strict limit on paper length and no particular format to follow. For methodological papers, however, we recommend limiting the paper to 9 pages, with an additional page for references and as many pages as needed in an appendix (all in a single PDF). Between 6 and 9 full papers will be selected for oral presentations; the other accepted full papers will be presented as posters.
We also accept two-page abstracts discussing recently published or submitted journal contributions, to give authors the opportunity to present their work and obtain feedback from workshop attendees. All accepted abstracts will be presented as posters.
Through the workshop, we hope to help identify fundamentally important research directions in law and ML, and to foster future collaborations. We invite the submission of papers on topics including, but not limited to:
- Governmental automated decision-making;
- Algorithmic predictive systems;
- “Privacy”, “fairness”, “transparency”, and “accountability” in law, mathematics, and computer science;
- Public policies, uncertainties, and predictive models;
- Predictive models and health and social crises;
- Algorithmic predictive policing systems;
- Facial recognition;
- Legal and technical issues of biometrics;
- Legal and technical approaches to cybersecurity issues (AI-based solutions).
Important dates:

- Call for papers: April 19th, 2020
- Deadline for submission of two-page abstracts and full papers: June 19th, 2020
- Notification of acceptance: June 26th, 2020
- Final submission: July 10th, 2020
- Conference: July 17th or 18th, 2020 (online)
Keynote speakers:

- Professor Olivier Sylvain (Fordham Law School): “Recovering Tech’s Humanity”, Columbia Law Review, Vol. 119, 2019. Available at SSRN: https://ssrn.com/abstract=3499821
- Professor Frederik Zuiderveen Borgesius (Amsterdam University & Radboud University): “Legal Protection in Europe against Discrimination by Machine Learning Systems”; see also “Discrimination, Artificial Intelligence and Algorithmic Decision-Making”, report, Council of Europe (2019)