EU Artificial Intelligence Act: The European Approach to AI

Stanford-Vienna Transatlantic Technology Law Forum, Transatlantic Antitrust and IPR Developments, Stanford University, Issue No. 2/2021

New Stanford tech policy research: “EU Artificial Intelligence Act: The European Approach to AI”.

EU regulatory framework for AI

On 21 April 2021, the European Commission presented its proposal for the Artificial Intelligence Act. This Stanford Law School contribution outlines the main points of the proposed regulatory framework for AI.

The Act seeks to codify the high standards of the EU trustworthy AI paradigm, which requires AI to be legally, ethically and technically robust while respecting democratic values, human rights and the rule of law. The draft regulation sets out core horizontal rules, applicable to all industries, for the development, commodification and use of AI-driven products, services and systems within the territory of the EU.

Legal sandboxes fostering innovation

The EC aims to prevent the rules from stifling innovation and hindering the creation of a flourishing AI ecosystem in Europe. To that end, the proposal introduces various flexibilities, including legal sandboxes that afford AI developers breathing room.

Sophisticated ‘product safety regime’

The EU AI Act introduces a sophisticated ‘product safety framework’ constructed around four risk categories. It imposes requirements for market entrance and certification of High-Risk AI Systems through a mandatory CE-marking procedure. To ensure equitable outcomes, this pre-market conformity regime also applies to the datasets used to train, test and validate machine learning systems.

Pyramid of criticality

The draft AI Act combines a risk-based approach, structured as a pyramid of criticality, with a modern, layered enforcement mechanism. Among other things, this means that a lighter legal regime applies to AI applications posing negligible risk, that applications posing an unacceptable risk are banned, and that regulation grows stricter as risk increases.
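
To make the tiering concrete, here is a minimal Python sketch of the pyramid of criticality. The four category names follow the proposal; the example systems and the obligations attached to each tier are simplified assumptions for illustration only.

```python
from enum import Enum

class RiskCategory(Enum):
    """The four tiers of the AI Act's pyramid of criticality."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring by public authorities
    HIGH = "high"                  # e.g. CV-screening software for recruitment
    LIMITED = "limited"            # e.g. chatbots interacting with users
    MINIMAL = "minimal"            # e.g. spam filters, AI in video games

# Simplified mapping of tier to legal consequence (illustrative, not exhaustive).
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: "banned from the EU market",
    RiskCategory.HIGH: "pre-market conformity assessment and CE marking",
    RiskCategory.LIMITED: "transparency duties (users must know they face an AI)",
    RiskCategory.MINIMAL: "no new obligations; voluntary codes of conduct",
}

def regime_for(category: RiskCategory) -> str:
    """Return the (simplified) legal regime attached to a risk tier."""
    return OBLIGATIONS[category]

print(regime_for(RiskCategory.HIGH))
# -> pre-market conformity assessment and CE marking
```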

Enforcement at both Union and Member State level

The draft regulation provides for the establishment of a new enforcement body at Union level: the European Artificial Intelligence Board (EAIB). At Member State level, the EAIB will be flanked by national supervisory authorities, similar to the GDPR’s oversight mechanism. Fines for violations of the rules can reach 30 million euros or, for companies, 6% of total worldwide annual turnover, whichever is higher.
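
As a quick arithmetic illustration of that ceiling, the minimal sketch below assumes the "whichever is higher" reading of the draft's heaviest penalty bracket:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the heaviest fine bracket in the draft:
    EUR 30 million or 6% of total worldwide annual turnover,
    whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 2 billion in turnover: 6% is EUR 120 million,
# well above the EUR 30 million floor.
print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 120,000,000
```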

CE-marking for High-Risk AI Systems

In line with my recommendations, Article 49 of the Act requires high-risk AI and data-driven systems, products and services to comply with EU benchmarks, including safety and compliance assessments. This is crucial because it requires AI-infused products and services to meet the high technical, legal and ethical standards that reflect the core values of trustworthy AI. Only then will they receive the CE marking that allows them to enter the European markets. This pre-market conformity mechanism works in the same manner as the existing CE marking: as safety certification for products traded in the European Economic Area (EEA).

Trustworthy AI by Design: ex ante and life-cycle auditing

Responsible, trustworthy AI by design requires awareness from all parties involved, from the first line of code onward. Indispensable tools for facilitating this awareness are AI impact and conformity assessments, best practices, technology roadmaps and codes of conduct. These tools are applied by inclusive, multidisciplinary teams that use them to monitor, validate and benchmark AI systems. It all comes down to ex ante and life-cycle auditing.
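
To give a sense of what life-cycle auditing might look like in tooling, here is a minimal Python sketch of an audit trail; the class names, stages and fields are hypothetical illustrations, not taken from the Act or from any existing tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Assessment:
    """One audit event in an AI system's life cycle (hypothetical schema)."""
    when: date
    stage: str    # e.g. "design", "training", "pre-market", "post-market"
    kind: str     # e.g. "AI impact assessment", "conformity assessment"
    passed: bool

@dataclass
class AuditTrail:
    """Running record that supports both ex ante and life-cycle auditing."""
    system: str
    events: list[Assessment] = field(default_factory=list)

    def record(self, assessment: Assessment) -> None:
        self.events.append(assessment)

    def ex_ante_cleared(self) -> bool:
        """All pre-market assessments recorded so far have passed."""
        pre = [a for a in self.events if a.stage != "post-market"]
        return bool(pre) and all(a.passed for a in pre)

trail = AuditTrail(system="credit-scoring-model")
trail.record(Assessment(date(2021, 5, 1), "design", "AI impact assessment", True))
print(trail.ex_ante_cleared())  # True
```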

The new European rules will forever change the way AI is developed. Pursuing trustworthy AI by design seems like a sensible strategy, wherever you are in the world.


Establishing a Legal-Ethical Framework for Quantum Technology

Yale Law School, Yale Journal of Law & Technology (YJoLT) The Record 2021

New peer-reviewed, cross-disciplinary Stanford University Quantum & Law research article: “Establishing a Legal-Ethical Framework for Quantum Technology”.

By Mauritz Kop

Citation: Kop, Mauritz, Establishing a Legal-Ethical Framework for Quantum Technology (March 2, 2021). Yale J.L. & Tech. The Record 2021, https://yjolt.org/blog/establishing-legal-ethical-framework-quantum-technology

Please find a short abstract below:

What is quantum technology?

Quantum technology is founded on general principles of quantum mechanics and combines the counterintuitive physics of the very small with engineering. Particles and energy at the smallest scale do not follow the same rules as the objects we can detect around us in our everyday lives. The general principles, or properties, of quantum mechanics are superposition, entanglement, and tunnelling. Quantum mechanics aims to clarify the relationship between matter and energy, and it describes the building blocks of atoms at the subatomic level.
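
For readers who want these principles in formal notation, a minimal textbook-style sketch follows; the notation is standard quantum mechanics and is not taken from the article itself. The first equation expresses superposition for a single qubit; the second shows an entangled two-qubit Bell state, in which measuring either qubit fixes the outcome for the other.

```latex
% Superposition: a qubit holds a weighted combination of both basis states.
\[
  \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
\]

% Entanglement: the Bell state cannot be factored into two independent qubits.
\[
  \lvert\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\,\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)
\]
```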

Raising Quantum Awareness

Quantum technologies are rapidly evolving from hypothetical ideas into commercial realities. As the world prepares for these tangible applications, the quantum community has issued an urgent call for action to design solutions that balance the transformational impact of quantum technologies against their risks. An important first step in encouraging this debate is raising quantum awareness. We have to put controls in place that address identified risks and incentivise sustainable innovation.

Connecting AI and Nanotechnology to Quantum

Establishing a culturally sensitive legal-ethical framework for applied quantum technologies can help accomplish these goals. This framework can be built on existing rules and requirements for AI, and enriched further by integrating the ethical, legal and social issues (ELSI) associated with nanotechnology. In addition, the unique physical characteristics of quantum mechanics demand universal guiding principles for responsible, human-centered quantum technology. To this end, the article proposes ten guiding principles for the development and application of quantum technology.

Risk-based Quantum Technology Impact Assessment Tools

Lastly, how can we monitor and validate that real-world quantum-driven implementations remain legally, ethically, socially and technically robust throughout their life cycles? Developing concrete tools that address these challenges may be the answer. Raising quantum awareness can be accomplished by discussing a legal-ethical framework and by utilizing risk-based technology impact assessment tools in the form of best practices and moral guides.
