EU Focuses New Governance on High-Risk AI Systems

By Nabil Abdalla & Alan Thiemann

On April 21, 2021, the European Commission released its long-anticipated proposed regulation on the use of artificial intelligence (AI). AI continues to have a fast-evolving and measurable impact on our social and professional lives, and the proposed regulation advances the discussion around the practical application of this technology.

The European Commission aims to ensure that the health, safety, and fundamental rights of European Union (EU) persons are fully protected by governing AI, including the next evolution of AI. Comments on the proposed regulations must be sent by June 22, 2021.

The EU regulations on AI adopt a risk-based approach to governing AI. Four categories of risk have been proposed, each carrying its own level of AI governance. These categories are:

  1. Unacceptable risk—resulting in an outright ban; for example, AI systems intended to manipulate human behavior.
  2. Minimal risk—free to operate without intervention; examples include spam filters, video game applications, etc.
  3. Low risk—need only inform users that they are interacting with an AI system rather than a human; for example, chatbots.
  4. High risk—defined broadly to cover a wide range of AI systems, from dispatching emergency first-response services and systems used in a law enforcement context to systems that evaluate creditworthiness and education or vocational training (e.g., evaluating persons on tests that serve as part of, or a precondition for, their employment or education opportunities).

As the list above shows, the bulk of the proposed AI governance is directed at AI systems considered to be “high-risk.”

In contrast to the other categories, the EU proposed a comprehensive set of compliance requirements for high-risk AI systems. These include:

  • Validating training data quality
  • Maintaining adequate record-keeping
  • Providing adequate transparency to users
  • Providing adequate human oversight
  • Ensuring the accuracy and robustness of the AI system itself (e.g., guarding against bias)

These compliance requirements present a sea change for technology companies operating high-risk AI systems. In fact, the European Commission published a further study on April 21, 2021, estimating that compliance for high-risk AI systems could cost up to 17% of total AI investment.

At this stage, we at Han Santos anticipate push-back from the technology community over the broad definition of high-risk systems and the expansive list of compliance requirements for all AI systems that fall under that definition.

Rather than a “one-size-fits-all” regulatory approach, we see value in a more targeted and granular approach to compliance for high-risk AI systems; for example, sub-classifications of high-risk systems, each with its own tailored set of compliance requirements. For this reason, we are actively working with a number of clients on the comments they plan to submit before the June deadline.

Our team will stay up to date on the European Commission’s proposed changes. Subscribe to our website and blog to stay informed as well.

INDIVIDUAL ARTICLE DISCLAIMER:

Use of, access to, and information exchanged on this web page or any of the e-mail links contained within it cannot and does not create an attorney-client relationship between Han Santos, PLLC and the user or browser. Please do not post any personal or confidential information. You should contact your attorney to obtain advice with respect to any particular issue or problem. Contact us for additional information. One of our lawyers will be happy to discuss the possibility of representation with you. The opinions expressed at or through this site are the opinions of the individual author and may not reflect the opinions of the firm or any individual attorney.