Knowledge base
August 25, 2024
What is the AI Act?
The AI Act is a landmark European regulation governing autonomous computer systems and algorithms that make decisions, generate content or assist humans.
The goal of this legislation is to ensure that artificial intelligence (AI) within the EU is safe, transparent, traceable, non-discriminatory and environmentally friendly, all under human supervision.
Key points of the AI Act:
- Security and transparency: AI systems must be safe to use and work transparently so that it is clear how decisions are made.
- Risk-based approach: The law takes a risk-based approach, meaning that obligations depend on the risk level of the AI system.
- Application in various sectors: The rules apply to all sectors, from healthcare to entertainment, with specific adaptations for, for example, investigative and security agencies.
Who does the AI Act apply to?
The AI Act is relevant to anyone who develops, markets or deploys AI.
This includes manufacturers, integrators, importers and users of AI.
It is crucial that all parties involved can demonstrate that their AI systems comply with the legislation.
This aligns with existing European regulations, such as the Medical Device Regulation (MDR) in healthcare.
Fines
Violations of the AI Act can result in substantial fines:
- Major violations: Up to 35 million euros or 7% of global turnover.
- Smaller administrative offenses: Up to 7.5 million euros or 1.5% of turnover.
Risk levels in the AI Act
The AI Act classifies AI systems according to their level of risk:
- Unacceptable risks: AI systems that violate European norms and values, such as predictive policing or biometric surveillance in public places, are banned in the EU.
- High risks: These AI systems may be used only if they meet strict conditions, such as clarity about data provenance and human oversight.
  Examples include AI systems that control medical devices or assess job applicants.
- Low risk: Low-risk AI systems may enter the market with few restrictions, but they must be transparent and must not make decisions without human intervention.
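The tiered structure above can be sketched as a small lookup. This is purely illustrative: the system names and the `may_enter_eu_market` helper are hypothetical examples based on the categories in this article, not legal classifications.

```python
from enum import Enum

class RiskLevel(Enum):
    """Risk tiers described in the article (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # banned in the EU
    HIGH = "high"                  # allowed only under strict conditions
    LOW = "low"                    # allowed, with transparency duties

# Hypothetical examples drawn from the article's categories.
EXAMPLES = {
    "predictive policing": RiskLevel.UNACCEPTABLE,
    "biometric surveillance in public places": RiskLevel.UNACCEPTABLE,
    "medical device control": RiskLevel.HIGH,
    "job applicant assessment": RiskLevel.HIGH,
    "spam filter": RiskLevel.LOW,
}

def may_enter_eu_market(system: str) -> bool:
    """A system may enter the EU market unless it falls in the banned tier."""
    return EXAMPLES.get(system, RiskLevel.LOW) is not RiskLevel.UNACCEPTABLE

print(may_enter_eu_market("predictive policing"))       # False
print(may_enter_eu_market("job applicant assessment"))  # True
```

In practice the classification is determined by the regulation itself, not by a lookup table; the sketch only shows how the tiers relate to market access.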
Obligations for AI systems
The following obligations apply to high-risk AI systems:
- Fundamental Rights Impact Assessment: An assessment of the impact of AI on issues such as security, privacy and discrimination.
- Risk management system: Identifying, evaluating and managing potential risks.
- Data management system: Using unbiased, high-quality datasets.
- Technical documentation: Users need to know exactly how the AI system works and how to use it safely.
- Transparency and logging: The AI system must be transparent in its operation, and everything must be documented to track what happened.
- Human supervision: There should always be human supervision when using AI.
- Conformity Assessment: A specific assessment procedure tailored to the unique risks of AI systems.
What does the AI Act mean to you?
If you work with AI or algorithms, or are considering doing so, there are a few important steps you can take right now:
- Inventory your AI systems: Make sure you know what AI systems you use, where they come from and what information you need from vendors.
- Assess your role and responsibilities: Check that your contracts and agreements are in line with the new legislation.
- Evaluate AI output: Provide protocols to periodically check the accuracy of AI outcomes.
- Determine the risk level: Check whether your AI systems fall into the high risk category and ensure the appropriate conformity assessment.
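The inventory and assessment steps above can be sketched as a simple record per AI system. All field names and the `needs_conformity_assessment` helper are hypothetical, shown only to suggest what such an inventory might track.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an AI-system inventory (field names are illustrative)."""
    name: str
    vendor: str
    purpose: str
    risk_level: str              # "unacceptable" | "high" | "low"
    vendor_docs_received: bool   # technical documentation from the supplier
    last_output_review: str      # date of most recent accuracy check, e.g. "2024-08-01"

def needs_conformity_assessment(record: AISystemRecord) -> bool:
    """Per the article, high-risk systems require a conformity assessment."""
    return record.risk_level == "high"

cv_screener = AISystemRecord(
    name="CV screener",
    vendor="ExampleVendor",
    purpose="assess job applicants",
    risk_level="high",
    vendor_docs_received=True,
    last_output_review="2024-08-01",
)
print(needs_conformity_assessment(cv_screener))  # True
```

A real inventory would live in whatever asset-management tooling the organization already uses; the point is simply to capture origin, role, risk level and review cadence in one place.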
When will the AI Act take effect?
The AI Act was officially published on July 12, 2024, and entered into force on August 1, 2024.
Most obligations for companies and institutions take effect two years later, but systems posing unacceptable risk must be taken off the market within six months.
The AI Act and the GDPR: What's the connection?
The AI Act complements the General Data Protection Regulation (GDPR), which protects the privacy of individuals.
Together, these regulations ensure that AI systems are not only secure and reliable, but also respect usersβ privacy.
Conclusion
The AI Act is an important step toward safe and responsible AI in Europe.
By controlling risks and ensuring transparency, the EU aims to promote innovation without harming the rights of individuals.
For organizations, this means starting now to adapt their AI strategies to comply with these new regulations.
For more information, see the Rules when using AI & algorithms.