The EU AI Act, approved by the European Parliament in March 2024, is a landmark regulation that businesses cannot afford to overlook. As artificial intelligence spreads across industries, the Act aims to safeguard EU citizens against risks such as privacy violations and misinformation, and to set new standards for AI applications. This article explores what companies need to know about the new rules.
Simply put, think of it as a version of the General Data Protection Regulation (GDPR), but specifically for AI.
This law sets out rules for companies that create or use AI within the European Union, and it comes with severe consequences for those who don’t follow them.
Fines for non-compliance are tiered: they run from 7.5 million euros or 1% of global annual turnover (for supplying incorrect information to regulators) up to 35 million euros or 7% of global annual turnover (for prohibited uses of AI), with the higher of the fixed amount and the percentage applying in each tier; for small and medium-sized enterprises, the lower of the two applies. The penalties cover violations such as using AI in manipulative ways or leveraging biometric data to uncover private information.
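To make the cap mechanics concrete, here is a minimal Python sketch of a single penalty tier; the function name and example figures are illustrative, not taken from the Act’s text:

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_pct: float,
                 global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of a fine for one EU AI Act penalty tier.

    Each tier caps fines at a fixed amount or a percentage of global
    annual turnover, whichever is higher; for SMEs, whichever is lower.
    """
    pct_cap = turnover_pct * global_turnover_eur
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice tier for a large firm with EUR 1bn turnover:
# 7% of 1bn (70m) exceeds the 35m fixed cap, so 70m applies.
print(fine_cap_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```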
Any company operating in the EU needs to ensure they follow the new AI Act. It’s not something you want to ignore!
The European Union AI Act is the world’s first comprehensive law on artificial intelligence, designed to tackle the risks the technology poses. It provides clear guidelines for developers and deployers of AI while protecting fundamental rights such as privacy and non-discrimination.
Under the Act, AI systems must meet several requirements:
1. Transparency: AI systems, like chatbots, must inform users when they’re interacting with AI (a minimal sketch of such a disclosure follows this list).
2. Labelling: AI-generated content, such as deepfakes, must be clearly marked.
3. Impact assessment: companies deploying AI, especially in critical areas like banking and insurance, must assess how their systems affect people’s fundamental rights.
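The Act prescribes this outcome rather than any particular mechanism. As a rough Python sketch, assuming a hypothetical chat pipeline (none of these names come from the Act), the disclosure could simply be the first message of every session:

```python
# All names here are hypothetical; the Act requires that users know
# they are talking to AI, not any specific implementation.
AI_DISCLOSURE = "Heads up: you are chatting with an AI assistant, not a human."

def start_chat() -> list[str]:
    """Open a session with the disclosure shown before any AI reply."""
    return [AI_DISCLOSURE]

transcript = start_chat()
transcript.append("assistant: Hi! How can I help you today?")
print("\n".join(transcript))
```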
The European AI Act also introduces a risk-based approach, meaning that the level of regulation depends on the potential risks associated with the AI system and its applications.
The EU AI Act introduces stringent rules to prevent artificial intelligence from being misused in ways that could endanger public safety and individual rights. It outright bans AI applications that deceive or manipulate people, the classic example being voice-activated children’s toys that could encourage dangerous behaviour.
The law also forbids AI that exploits people’s vulnerabilities, such as age, disability, or socio-economic situation, and it bars the categorisation of individuals through sensitive biometric data.
Moreover, the law prohibits the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases, as well as the use of emotion recognition in workplaces and schools, except for specific medical or safety reasons. The Act does, however, permit certain narrowly defined uses of AI in law enforcement and in medical or safety applications, acknowledging their essential roles.
The Act categorises high-risk AI systems into the following areas:
1. Biometrics: remote biometric identification, categorising people by physical traits, and emotion recognition, all within the limits the Act allows.
2. Critical infrastructure: public services such as transportation and the supply of water, gas, and electricity.
3. Education: AI that impacts a person’s future, such as grading exams.
4. Employment: tools for hiring, sorting resumes, evaluating worker performance, etc.
5. Public and private services: assessing eligibility for assistance, evaluating credit scores, organising emergency responses, and calculating insurance prices.
6. Law enforcement: AI that predicts who might become a victim or perpetrator of a crime, assists in examining evidence, or profiles individuals.
7. Migration and border control: managing asylum and visa requests, identifying individuals, and more.
8. Justice and democratic processes: AI used in judicial decision-making or to influence elections and referenda (tools that merely organise political campaigns are excluded).
AI systems classified as having limited risk must be transparent, letting users know when they’re dealing with AI. This helps users make informed choices, especially regarding AI-generated or altered content, like deepfakes.
Key transparency rules in the Artificial Intelligence Act include:
1. Clearly indicate when users interact with AI, e.g. chatbots.
2. Label AI-generated content.
3. Inform users when emotion recognition or biometric categorisation is applied to them.
4. Mark AI-altered content, including visuals, audio, or text, especially on public interest topics.
Exceptions exist for legally authorised uses, such as crime detection, and for certain creative content. The European AI Office will facilitate codes of practice to support compliance.
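Again, the Act mandates the result rather than a format. As a minimal sketch of how a provider might attach both a machine-readable flag and a visible notice to generated output, assuming a hypothetical metadata schema (nothing below is prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class GeneratedContent:
    body: str
    # machine-readable provenance flag (hypothetical schema)
    metadata: dict = field(default_factory=lambda: {"ai_generated": True})

def with_visible_label(item: GeneratedContent) -> str:
    """Prepend a human-readable disclosure to AI-generated content."""
    prefix = "[AI-generated] " if item.metadata.get("ai_generated") else ""
    return prefix + item.body

post = GeneratedContent("A market summary drafted by our in-house model.")
print(with_visible_label(post))  # [AI-generated] A market summary ...
```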
Systems such as spam filters and AI in video games pose minimal risk and face no specific obligations. Most AI currently used in the EU falls into this category.
Under the EU AI Act, general-purpose AI systems, including generative models like ChatGPT, must meet these transparency rules:
1. Clearly inform users when they interact with AI-generated content.
2. Design AI models so they cannot be used to generate illegal content.
3. Publish summaries of the copyrighted data used for training.
4. Report serious incidents involving high-impact models (e.g., GPT-4) to the European Commission without delay.
In summary, AI that produces synthetic content must mark it as such. Systems that recognise emotions or physical traits must inform the people they analyse and respect their privacy. Creating or altering content, such as deepfakes, has to be openly disclosed, except where legal exemptions or creative purposes apply.
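For the training-data summary in particular, no schema is prescribed. Here is a minimal sketch of the kind of internal record a provider might keep so that a public summary of copyrighted sources can be produced; the field names and entries are purely illustrative:

```python
# Hypothetical internal record of training data sources, kept so a
# public summary of copyrighted material can be generated on demand;
# the Act does not prescribe this structure.
training_sources = [
    {"name": "Example News Archive", "licence": "commercial licence",
     "copyrighted": True},
    {"name": "Open Commons Corpus", "licence": "CC BY 4.0",
     "copyrighted": True},
    {"name": "Public Domain Texts", "licence": "public domain",
     "copyrighted": False},
]

copyrighted = [s["name"] for s in training_sources if s["copyrighted"]]
print("Copyrighted sources used in training:", ", ".join(copyrighted))
```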
To comply with the EU AI Act, businesses must follow several essential steps.
First, it’s important to note that the Act applies to anyone who develops or deploys AI systems placed on the EU market or used within the EU, including businesses based outside the EU whose AI is used within the union.
Start by using the Compliance Checker to determine if the new rules cover your AI system. This tool helps you understand what you need to do to comply.
Next, check if your business uses AI in any way that is considered high-risk by the EU (Annex III of the Act includes an overview of the use cases). If so, the European Commission has provided guidelines to help you meet the necessary standards.
The European Commission guide for providers of high-risk AI systems
The European Parliament approved the AI Act on March 13, 2024. It enters into force 20 days after publication in the EU’s Official Journal, and its rules will phase in gradually, becoming fully applicable between late 2024 and mid-2027.
Here’s a quick look at the adoption timeline for the European AI Act.
EU AI Act timeline
The Act will affect a wide array of AI-related roles, including providers, distributors, and deployers of AI systems, so it’s crucial to get up to speed quickly with what it expects. Businesses should maintain a current inventory of their AI systems and be clear on the requirements for each risk level defined by the Act.
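One lightweight way to keep such an inventory, sketched in Python with a record structure mirroring the Act’s four risk tiers (the system names and obligation strings are illustrative, not prescribed):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):  # mirrors the Act's four risk levels
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    obligations: list[str]  # e.g. transparency notice, impact assessment

inventory = [
    AISystemRecord("resume-screener", "shortlist job applicants",
                   RiskTier.HIGH, ["conformity assessment", "human oversight"]),
    AISystemRecord("support-chatbot", "answer customer questions",
                   RiskTier.LIMITED, ["disclose AI interaction"]),
]

high_risk = [s.name for s in inventory if s.tier is RiskTier.HIGH]
print("High-risk systems to prioritise:", high_risk)
```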
The EU AI Act is set to be the most thorough set of AI regulations to date. Similar rules will likely be introduced in the UK, the US, and other parts of the world soon.
Need help implementing AI in your company? Let our team guide you and help you make the most of this technology.