On March 13, 2024, the European Parliament adopted the Artificial Intelligence Act (the AI Act), the world's first comprehensive horizontal legal framework for AI. The Act establishes EU-wide rules on transparency, data quality, human oversight, and accountability, and it will have a profound impact on a significant number of MedTech companies.

Where does the AI Act apply?

The AI Act applies to providers of AI systems, that is, companies that develop AI systems in order to place them on the market or put them into service. It also applies to importers and distributors of AI systems in the European Union.

What are the risk classes of AI?

The AI Act follows a risk-based approach: the requirements that apply to an AI system depend on the level of risk it poses, with four tiers summarized below (an illustrative sketch of this triage logic follows the list).

  • Unacceptable risk. These systems pose a clear threat to people's rights and are prohibited. For example, AI that manipulates human behavior or exploits vulnerabilities such as age or disability is banned, as are emotion-recognition systems in the workplace and real-time biometric categorization of individuals.
  • High risk. These systems are permitted but must meet strict requirements to reduce risk, including high-quality data, record-keeping of system activity, clear documentation of how the system works, human oversight, and a high level of robustness, accuracy, and cybersecurity. Examples include AI used in critical infrastructure such as energy and transport, in medical devices, and in systems that determine access to education or employment.
  • Limited risk. These systems carry transparency obligations: AI systems that interact directly with people, such as chatbots, must inform users that they are talking to a machine, and those who use AI to create deepfake videos must disclose that the content is artificially generated.
  • Minimal risk. AI such as that used in video games or email spam filters faces no mandatory obligations, although providers may voluntarily adopt codes of conduct.
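Purely as an illustration, and not as legal guidance, the tiered logic above can be sketched as a small triage function. The tier names follow the Act, but the attribute labels and matching rules below are simplified assumptions for demonstration only; a real assessment must follow the Act's definitions and annexes.

    from enum import Enum


    class RiskTier(Enum):
        """The four risk tiers of the AI Act."""
        UNACCEPTABLE = "prohibited"
        HIGH = "strict obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no mandatory obligations"


    # Hypothetical attribute sets for illustration only; they do not
    # reproduce the Act's actual legal criteria.
    PROHIBITED_PRACTICES = {"behavioral_manipulation",
                            "exploits_vulnerabilities",
                            "workplace_emotion_recognition"}
    HIGH_RISK_DOMAINS = {"critical_infrastructure", "medical_device",
                         "education_access", "employment_decisions"}
    TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake_generation"}


    def classify(attributes: set[str]) -> RiskTier:
        """Map a system's attributes to the most severe applicable tier."""
        if attributes & PROHIBITED_PRACTICES:
            return RiskTier.UNACCEPTABLE
        if attributes & HIGH_RISK_DOMAINS:
            return RiskTier.HIGH
        if attributes & TRANSPARENCY_TRIGGERS:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL


    # A system can trigger several tiers; the strictest one governs.
    print(classify({"medical_device", "chatbot"}))  # RiskTier.HIGH

The key point the sketch captures is that the tiers are evaluated from most to least severe: a system that both interacts with users and qualifies as a medical device is treated as high risk, not merely limited risk.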

AI Act: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf