As of February 2, the first provisions of the European Union's new Artificial Intelligence (AI) Act have begun to apply. The law aims to manage the risks that AI poses to people.
The EU categorizes AI into four risk levels (a simplified code sketch follows the list):
Minimal Risk – For example, spam filters. AI systems in this category face no special regulation.
Limited Risk – For example, customer service chatbots. These systems face light transparency obligations, such as telling users they are interacting with AI.
High Risk – For example, AI systems that provide medical advice. These systems are subject to strict oversight.
Unacceptable Risk – Some AI applications are completely banned. Examples include:
- Systems that score individuals based on their social behavior or personal characteristics (social scoring).
- Systems that covertly manipulate people's decisions through subliminal or deceptive techniques.
- Systems that exploit people's vulnerabilities related to age, disability, or socio-economic situation.
- Systems that try to predict whether someone will commit a crime based solely on profiling or personal traits.
- Systems that scrape facial images from the internet or CCTV footage to build or expand facial recognition databases.
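To make the tiers concrete, here is a minimal Python sketch of the taxonomy. The tier names follow the Act, but the example systems and one-line obligation summaries are illustrative shorthand, not legal classifications.

```python
from enum import Enum

# Simplified sketch of the AI Act's four-tier risk taxonomy.
# Obligation summaries are rough shorthand, not legal language.
class RiskTier(Enum):
    MINIMAL = "no special obligations"
    LIMITED = "light transparency obligations"
    HIGH = "strict oversight and conformity requirements"
    UNACCEPTABLE = "prohibited outright"

# Hypothetical example systems mapped to tiers (illustrative only).
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "medical advice system": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```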
Deploying such banned AI applications in Europe carries penalties: fines can reach 35 million euros or 7% of a company's total worldwide annual turnover, whichever is higher. Many companies, including Amazon and Google, have signed the voluntary AI Pact and are beginning to comply with the law. However, some large companies, such as Meta and Apple, have not signed the Pact.
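The "whichever is higher" cap lends itself to a quick arithmetic sketch. The function below is hypothetical and for illustration only; actual fines are set case by case by regulators.

```python
# Penalty cap for prohibited practices under the AI Act:
# up to EUR 35 million or 7% of total worldwide annual turnover,
# whichever is higher. Figures below are illustrative.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a prohibited-practice violation."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million floor, so the higher figure applies.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies, the 35-million-euro figure is the binding cap; the percentage only dominates once annual turnover exceeds 500 million euros.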
The AI Act also includes narrow exceptions. For example, law enforcement agencies may use biometric identification in life-threatening situations, but only with special authorization and under strict oversight.
As implementation of the AI Act proceeds, more precise guidelines and rules are expected in early 2025.