The European Parliament adopted the AI Act last summer. This European law imposes strict rules on developers and users of AI to protect the rights of European citizens. As a Frankwatching reader, there is a real chance that the AI Act will also apply to your organization and your work from 2026. This article explains what the AI Act is and what you need to take into account.
The recent attention for AI has probably not escaped your notice. Perhaps you are already experimenting with ChatGPT, or even working with it. The possibilities of tools like ChatGPT are impressive, ranging from writing a persuasive sales email to building complete websites.
And the possibilities of AI go much further. AI detects tumors in scans that experienced doctors miss, for example, and it spots fraud in financial transactions.
The Rationale for the AI Act: The Downside of AI
But as is often the case with great inventions, AI can also be used for bad purposes. Dynamite, for example, was invented to make life easier for miners, but others use it to make bombs. In the same way, AI can be misused for facial recognition, to track or single out people with certain characteristics.
The AI Act protects against the dangers of AI
To protect us against malicious use of AI, the European Parliament has drawn up the so-called AI Act. The AI Act is a European law that protects Europeans from the risks of AI. For example, the AI Act guarantees our privacy and prohibits AI from profiling us based on gender or race.
The European Parliament has now adopted the final version of the AI Act, and it is expected to enter into force in 2026.
Also read: AI in marketing: match made in heaven or too good to be true?
This is how the AI Act works
The AI Act is based on the risks that misuse and undesirable effects of AI pose to people. To this end, it works with the following risk levels, which determine what builders and users of AI may and may not do:
Unacceptable risk
This level covers AI that poses a threat to people, and the AI Act prohibits these forms of AI. Think, for example, of AI that predicts whether someone will commit a crime. Social scoring based on purchasing behavior or financial situation (which can be interesting for marketing) also falls under this level and will soon no longer be allowed.
High risk
This level includes AI that could harm people’s rights or safety. For example, AI that screens job applicants or controls the climate in offices. The AI Act allows such AI under strict conditions. For example, it must be absolutely clear what data the AI uses, and humans must ensure that the AI does not harm people’s rights or safety.
Low risk
All other AI falls under this level. For this AI, it must always be clear to users that they are dealing with AI. Furthermore, this AI may not make decisions, and any copyrights must be respected. ChatGPT, for example, falls under low risk, as does AI that generates video for deepfakes.
The AI Act applies to everyone who develops, supplies or applies AI. In all sectors. And to make risk assessments transparent, a European database will be created that registers AI applications such as ChatGPT. Only AI that is included in the database may be sold and used in Europe.
What should you take into account?
As a Frankwatching reader, you are most likely to be or become an AI user. And even then, the AI Act applies to you. Suppose you are a marketer: the AI Act then prohibits you from using AI for social scoring. And when you create content with ChatGPT as a communications specialist, you will soon have to state that your text was created with AI.
Fines for offenders will vary depending on the type of violation and the size of the company, but can be up to €40 million.
It is therefore important to prepare yourself. The following step-by-step plan can help you comply with the AI Act:
Map the AI you use
Review all the software you use and see if AI is a part of it. If necessary, ask the developer or supplier of your software.
Check in the registration database whether you are allowed to use the AI.
You will soon be liable if you use unregistered, and therefore prohibited, AI, so be certain about the AI you use.
Determine the risk level per AI
Now determine or estimate the risk level (unacceptable/high/low) per AI you use.
For each AI, record how you will apply or use it.
Then work out according to which rules and with which safeguards you will use the AI. Suppose you use ChatGPT to generate product pages for your website: agree that one or more people will check each generated page before publication, and record this in a work procedure.
Implement the established working method for each AI