
The European Union has taken a significant step in regulating artificial intelligence (AI) by releasing new guidance on which practices are off-limits under its AI Act. The landmark law, which entered into force last year, targets AI applications deemed to pose unacceptable risk, such as social scoring systems or manipulative techniques that could harm individuals.
Earlier this week, the EU Commission published detailed advice for developers on how to avoid these prohibited uses. Breaking the rules could lead to hefty fines of up to 7% of a company's global annual turnover or €35 million, whichever is higher.
The guidelines are intended to ensure the AI Act is applied consistently across the EU, in the interest of fairness and safety. However, the Commission has clarified that the recommendations are not legally binding; ultimate interpretation and enforcement rest with regulators and the courts.
For now, the guidance is available only in draft form, as it still needs to be translated into all of the EU's official languages. Developers can consult it to get a clearer sense of what is and is not allowed.
The AI Act’s rollout is happening in stages, with more compliance deadlines coming soon. Member states have until August 2 to designate the authorities responsible for enforcing the rules.
This latest move underscores the EU's commitment to creating a safer and more ethical AI environment. As tech companies navigate the new rules, one thing is clear: the stakes are high, and compliance is essential.