EU Artificial Intelligence Act


Risk categories: Where do you fit in?

Whitepaper

With the publication of the proposed EU Artificial Intelligence Act in mid-2021, AI systems are categorised into one of four risk categories:

  • prohibited;
  • high risk;
  • limited risk;
  • minimal risk.
Most attention has been given to the first two categories.
Yet what about the latter two? What are the implications for limited or minimal risk AI systems? To what extent does the regulation apply? Or are you free to do whatever you want?

The first question to ask is what type of AI systems would qualify as limited risk or minimal risk. Key to answering this question is to consider how an AI system might present a risk to an individual's rights or safety.

If there is no such risk, the AI system will very likely qualify as minimal risk. Think, for example, of applications such as AI-enabled video games or spam filters. Poor AI performance would probably do no harm (although it may be annoying, of course).

Another question is how important transparency around the AI system or application is. AI systems such as chatbots, for example, qualify as limited risk: users need to be aware that they are interacting with an AI system and not a person, so that they can decide whether to continue or step back.
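
To make this line of reasoning concrete, the following is a deliberately simplified triage sketch in Python. The boolean questions and their ordering are our own illustrative assumptions drawn from the questions above; they are not the legal tests defined in the Act, so treat this as a mental model rather than a compliance tool.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high risk"
    LIMITED_RISK = "limited risk"
    MINIMAL_RISK = "minimal risk"

def triage(banned_practice: bool,
           risks_rights_or_safety: bool,
           interacts_with_people: bool) -> RiskCategory:
    """Hypothetical first-pass triage; NOT the Act's legal tests."""
    if banned_practice:
        # Practices the Act prohibits outright.
        return RiskCategory.PROHIBITED
    if risks_rights_or_safety:
        # A risk to an individual's rights or safety points to high risk.
        return RiskCategory.HIGH_RISK
    if interacts_with_people:
        # Transparency matters: users must know they are talking to an AI,
        # so chatbot-like systems land in limited risk.
        return RiskCategory.LIMITED_RISK
    # No meaningful risk, e.g. spam filters or AI-enabled video games.
    return RiskCategory.MINIMAL_RISK

# Examples from the text:
print(triage(False, False, True))   # chatbot     -> RiskCategory.LIMITED_RISK
print(triage(False, False, False))  # spam filter -> RiskCategory.MINIMAL_RISK
```

In practice the boundaries between the categories are determined by the Act's annexes and legal definitions rather than simple yes/no flags, but the order of the questions reflects how the categories nest: rule out prohibition first, then high risk, then transparency obligations.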

In conclusion, when developing or deploying AI systems it is essential to know whether the AI is classified as prohibited, high risk, limited risk or minimal risk. The higher the risk, the stricter the requirements on the system and the greater the responsibility to demonstrate compliance with the EU Artificial Intelligence Act.
