On February 2, 2025, certain AI systems became illegal to use in the European Union.
Not "discouraged." Not "under review." Illegal. As in: if you're using social scoring systems or real-time biometric surveillance in public spaces, you must stop. Now.
This is the EU Artificial Intelligence Act, the world's first comprehensive legal framework for regulating AI. It was signed into law in June 2024, entered into force that August, and its first obligations took effect this year. More requirements roll out through 2027.
If you use AI in your product, or if your business has EU customers, this matters. Here's what you need to know.
What the AI Act Actually Regulates
Unlike GDPR, which regulates how you handle personal data, the AI Act regulates AI systems themselves—specifically, the risk they pose to people's safety and fundamental rights.
The law doesn't care whether you built the AI or bought it from OpenAI. If an AI system affects people in the EU, someone needs to be responsible for compliance. And if you're the one deploying it, that someone is often you.
The regulation uses a risk-based approach. Different AI uses carry different risks, so they have different requirements. AI that could get someone denied a job faces stricter rules than AI that recommends products.
Does This Apply to You?
The AI Act applies if:
- You have users or customers in the EU (even if your company is based elsewhere)
- You use AI systems that affect people in the EU
- You provide AI systems to others who use them in the EU
In practice, this means:
Your SaaS uses the ChatGPT API. If EU users interact with those AI features, you need transparency disclosures.
You have a customer support chatbot. You must clearly tell EU users they're talking to AI.
You use AI for hiring or performance reviews. This is classified as "high-risk" and requires registration, testing, and ongoing documentation.
Your app generates images or text with AI. You must label that content as AI-generated.
You use product recommendation algorithms. These usually land in the minimal-risk category, but record them in your AI inventory so you can show how you classified them.
Like GDPR, the AI Act applies globally if it affects EU residents. A New York startup with 50 EU customers must comply. A Singapore company selling to European businesses must comply. There's no minimum threshold—one EU user is enough.
The Four Risk Levels (And What They Mean for You)
The AI Act categorizes AI systems into four risk levels. Your obligations depend on which category your AI falls into.
Unacceptable Risk: Banned Entirely
These AI systems are prohibited. Full stop. As of February 2, 2025, you cannot use them in the EU.
Examples:
- Social scoring systems that rate people based on behavior or personal characteristics
- AI that exploits vulnerabilities due to age, disability, or social or economic situation
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions exist)
- Biometric categorization systems that infer sensitive attributes like race, political opinions, or sexual orientation
Most businesses don't use these systems. But if you do—even tangentially—you need to shut it down or face fines up to €35 million or 7% of global annual turnover, whichever is higher.
High Risk: Heavy Regulation
These AI systems significantly impact safety or fundamental rights. If your AI falls into this category, compliance is expensive and time-consuming.
Examples:
- AI used in hiring, firing, or managing workers
- Credit scoring or loan approval algorithms
- AI in critical infrastructure (energy grids, water systems, transportation)
- Educational AI (grading exams, admissions decisions)
- Law enforcement risk assessment tools
- Immigration or asylum decision systems
What you must do:
- Register your AI system in the EU database
- Conduct formal risk assessments
- Maintain detailed technical documentation
- Implement human oversight mechanisms
- Ensure accuracy and robustness through testing
- Log all AI decisions and keep records
Deadline: August 2, 2026 for the high-risk uses listed above. High-risk AI embedded in products already covered by EU product-safety legislation (machinery, medical devices, and similar) gets until August 2, 2027.
This is not a weekend project. If you use high-risk AI, start now. Documentation alone can take months to prepare properly.
Limited Risk: Transparency Required
This is where most businesses fall. If you use AI in ways where users should know they're interacting with a machine, you need to tell them.
Examples:
- Chatbots and virtual assistants
- AI-generated text, images, audio, or video
- Deepfakes (even for legitimate purposes)
- Emotion recognition systems (these face stricter rules too: banned in workplaces and schools, high-risk in most other settings, but disclosure is always required)
What you must do:
- Clearly disclose that AI is being used
- Label AI-generated content
- Design systems to prevent generation of illegal content
- For providers of generative AI models: publish a summary of the content used for training
Deadline: August 2, 2026 (when the bulk of the regulation becomes applicable)
The disclosure can be simple: "You are chatting with an AI assistant." Or: "This content was generated with AI." The point is transparency, not perfection.
Minimal Risk: No Special Rules
Most AI systems fall here. If the AI poses little to no risk to rights or safety, you can use it freely.
Examples:
- Spam filters
- AI-powered video games
- Inventory optimization algorithms
- Basic recommendation systems (non-personalized)
What you must do: Nothing. Use it as you wish.
The Compliance Timeline
The AI Act doesn't go into full effect all at once. Requirements phase in over three years:
February 2, 2025 (in effect now): Ban on unacceptable-risk AI systems.
August 2, 2025: Obligations for providers of general-purpose AI models begin, supported by a code of practice. This affects model providers like OpenAI and Anthropic more than end users.
August 2, 2026: The bulk of the Act becomes applicable. Transparency requirements kick in for chatbots, AI-generated content, and other limited-risk systems, and so do the obligations for most high-risk systems, including hiring and credit decisions. Documentation, testing, and registration must be in place by this date.
August 2, 2027: Extended deadline for high-risk AI embedded in products that are already regulated under EU product-safety legislation, such as machinery or medical devices.
Even though high-risk compliance isn't required until August 2026 (or 2027 for product-embedded AI), start now. Risk assessments, technical documentation, and testing take time. If you wait until the final months, you'll be scrambling.
What You Actually Need to Do
Step 1: Make a List of Every AI System You Use
This sounds obvious, but most companies don't have a complete inventory. Include:
- AI features you built (recommendation engines, search algorithms, chatbots)
- Third-party AI you use directly (ChatGPT API, Anthropic Claude, image generation)
- AI buried inside SaaS tools (Intercom's AI chatbot, Salesforce's Einstein, HubSpot's content assistant)
- Internal AI tools (hiring platforms, performance review systems, surveillance software)
Don't skip the SaaS tools. If Intercom's AI answers customer questions on your website, you are the deployer: Intercom has its own provider obligations, but the disclosures and the decision to use the tool are your responsibility.
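If it helps to make the inventory concrete, here is a minimal TypeScript sketch of an in-repo register. The record fields and the two example entries are illustrative, not a format the Act prescribes.

```typescript
// A minimal AI-system inventory kept as code. Field names and the two
// example entries are illustrative; the Act does not prescribe a format.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface AiSystemRecord {
  name: string;             // e.g. "Support chatbot"
  vendor: string;           // "in-house" or the third-party provider
  purpose: string;          // what the system is actually used for
  affectsEuUsers: boolean;  // does it touch people in the EU?
  riskLevel?: RiskLevel;    // filled in during Step 2
  owner: string;            // who is accountable for it internally
}

const aiInventory: AiSystemRecord[] = [
  {
    name: "Support chatbot",
    vendor: "Intercom",
    purpose: "Answers customer questions on the website",
    affectsEuUsers: true,
    owner: "Head of Support",
  },
  {
    name: "CV screening tool",
    vendor: "in-house",
    purpose: "Ranks job applicants before human review",
    affectsEuUsers: true,
    owner: "Head of People",
  },
];

console.log(`${aiInventory.length} AI systems on record`);
```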
Step 2: Classify Each AI System by Risk
Go through your list and categorize each system:
Is it unacceptable risk? (Social scoring, manipulative AI, unauthorized biometric surveillance)
If yes: Stop using it immediately.
Is it high risk? (Hiring, credit scoring, critical infrastructure)
If yes: Start preparing for full compliance. Most of these obligations apply from August 2, 2026, and you need that time.
Is it limited risk? (Chatbots, AI-generated content, deepfakes)
If yes: Prepare transparency disclosures. The deadline is August 2, 2026.
Is it minimal risk? (Spam filters, inventory management, non-personalized recommendations)
If yes: You're fine. Document it for your records and move on.
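If you keep the inventory in code, the same questions can be expressed as a rough triage helper. The sketch below simply mirrors the checklist above; it is not a legal determination, and borderline cases still need human, and often legal, judgment.

```typescript
// Rough risk triage mirroring the checklist above. Illustrative only:
// classification under the Act ultimately needs case-by-case review.
type RiskLevel = "unacceptable" | "high" | "limited" | "minimal";

interface TriageAnswers {
  bannedUse: boolean;          // social scoring, manipulative AI, unauthorized biometric surveillance
  highStakesUse: boolean;      // hiring, credit scoring, critical infrastructure
  userFacingAiOutput: boolean; // chatbots, AI-generated content, deepfakes
}

function triageRisk(a: TriageAnswers): RiskLevel {
  if (a.bannedUse) return "unacceptable";     // stop using it immediately
  if (a.highStakesUse) return "high";         // start compliance work now
  if (a.userFacingAiOutput) return "limited"; // transparency disclosures needed
  return "minimal";                           // document it and move on
}

// Example: a customer support chatbot lands in the limited-risk bucket.
console.log(
  triageRisk({ bannedUse: false, highStakesUse: false, userFacingAiOutput: true })
); // "limited"
```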
Step 3: Add Transparency Disclosures (Limited-Risk AI)
If you use chatbots or generate AI content, update your interfaces to disclose it.
For chatbots:
Add a notice like: "You are chatting with an AI assistant. Responses are generated automatically and may be reviewed by our team."
For AI-generated content:
Add a label like: "This content was created with AI assistance" or include a watermark for images.
In your Privacy Policy:
Add a section explaining what AI systems you use and how they process user data.
In your Terms of Service:
Mention AI features and set expectations for accuracy and limitations.
These disclosures don't need to be long. They just need to be clear.
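To show how little code this takes, here is a small TypeScript sketch that centralizes the disclosure copy so every surface uses the same wording. The constant names and exact phrasing are illustrative, not mandated; the requirement is only that the disclosure is clear.

```typescript
// Shared transparency copy for a web product. Wording is illustrative.
export const CHATBOT_DISCLOSURE =
  "You are chatting with an AI assistant. Responses are generated " +
  "automatically and may be reviewed by our team.";

export const AI_CONTENT_LABEL = "This content was created with AI assistance.";

// Prepend the label to any AI-generated text before it is shown or published.
export function labelAiGeneratedText(text: string): string {
  return `${AI_CONTENT_LABEL}\n\n${text}`;
}
```

Rendering CHATBOT_DISCLOSURE as the first message in your chat widget and running generated copy through labelAiGeneratedText covers the two most common limited-risk cases.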
Step 4: If You Use High-Risk AI, Start Documentation
High-risk compliance is complex. You'll need:
1. Risk Assessment Documentation
- How does the AI make decisions?
- What could go wrong?
- Who could be harmed?
- How are you mitigating those risks?
2. Data Governance
- Where does your training data come from?
- Is it representative and unbiased?
- How do you ensure data quality?
3. Technical Documentation
- System architecture and specifications
- Training methodology
- Performance metrics and accuracy rates
- Known limitations and failure modes
4. Human Oversight
- Who monitors the AI?
- Can humans override AI decisions?
- Are oversight staff trained and qualified?
5. Logging and Traceability (a short code sketch follows this list)
- Automatic logs of all AI decisions
- Ability to trace why a specific decision was made
- Retention of logs for the required period
6. Registration
- Submit your system to the EU AI database
- Update the registration when the system changes
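Of these items, logging is the most straightforward to prototype. Below is a minimal sketch of append-only, per-decision logging in TypeScript on Node, assuming a JSON-lines file; the field names, example values, and retention approach are illustrative and should follow from your own risk assessment and legal advice.

```typescript
// Append-only log of individual AI decisions (one JSON object per line).
// Field names are illustrative; keep personal data in the log to a minimum.
import { appendFileSync } from "node:fs";

interface AiDecisionLogEntry {
  timestamp: string;       // when the decision was made
  systemName: string;      // which AI system produced it
  modelVersion: string;    // traceability to the exact model in use
  input: unknown;          // what went in
  output: unknown;         // what came out
  humanReviewer?: string;  // who reviewed or overrode it, if anyone
  overridden: boolean;     // whether a human changed the outcome
}

function logDecision(entry: AiDecisionLogEntry, path = "ai-decisions.log"): void {
  appendFileSync(path, JSON.stringify(entry) + "\n");
}

// Example: a CV screening decision that a recruiter reviewed.
logDecision({
  timestamp: new Date().toISOString(),
  systemName: "CV screening tool",
  modelVersion: "ranker-v12",
  input: { applicationId: "A-1042" },
  output: { score: 0.73, shortlisted: true },
  humanReviewer: "recruiter@example.com",
  overridden: false,
});
```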
If this sounds overwhelming, that's because it is. High-risk AI compliance is serious. Consider hiring legal experts who specialize in AI regulation.
Step 5: Update Your Legal Documents
Your Privacy Policy and Terms of Service need to address AI.
Privacy Policy additions:
- What AI systems you use
- How they process personal data
- User rights regarding AI decisions (right to explanation, right to contest automated decisions)
- Human oversight mechanisms
Terms of Service additions:
- AI features and their limitations
- Acceptable use policies for AI tools
- Disclaimers about AI accuracy
What Happens If You Don't Comply
The fines are steep:
Using banned AI systems: €35 million or 7% of global annual turnover (whichever is higher)
Non-compliance with high-risk requirements: €15 million or 3% of global turnover
Providing incorrect information to authorities: €7.5 million or 1% of global turnover
For small companies and startups, fines are capped at the lower of the fixed amount and the percentage rather than the higher. But even the reduced penalties can be business-ending.
Beyond fines, non-compliance can kill enterprise deals. Large companies won't buy from vendors who can't demonstrate AI compliance. If you're selling B2B, this matters.
Common Questions
I'm a US company with a few EU customers. Do I really need to comply?
Yes. The AI Act has extraterritorial reach, like GDPR. If your AI affects EU residents, you must comply—regardless of where you're based.
What if I just use ChatGPT API? Do I need to do anything?
It depends on how you use it. If EU users interact with ChatGPT through your product, you need transparency disclosures ("You are chatting with AI"). OpenAI handles the model-level compliance, but you're responsible for how you deploy it.
If you use ChatGPT for high-risk purposes—like screening job applicants—you bear more responsibility. The fact that you didn't build the AI doesn't exempt you.
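In practice, a thin server-side wrapper keeps the deployment side tidy. The sketch below uses the official openai Node SDK; the function name, model name, and response shape are illustrative. It tags every answer as AI-generated so your frontend always knows to show the notice.

```typescript
// Sketch of a server-side wrapper around the OpenAI chat API.
// Assumes the official "openai" Node SDK and OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

export async function answerWithDisclosure(question: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [{ role: "user", content: question }],
  });

  return {
    text: completion.choices[0].message.content ?? "",
    aiGenerated: true, // the frontend uses this flag to render the notice
    disclosure: "You are chatting with an AI assistant.",
  };
}
```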
How is this different from GDPR?
GDPR regulates personal data. The AI Act regulates AI systems. They overlap but cover different ground:
GDPR asks: Are you collecting, storing, and processing personal data lawfully?
AI Act asks: Is your AI system safe, transparent, and properly supervised?
You need to comply with both. An AI system can be GDPR-compliant but violate the AI Act, and vice versa.
What if I only use AI internally—no customer-facing features?
Internal AI still counts. If you use AI to make hiring decisions, monitor employee performance, or manage your workforce, that's high-risk AI. You need to comply.
I'm still building my product. When do I need to comply?
The deadlines apply when you deploy the AI in the EU. If you're still in development, use this time to build compliance in from the start. It's much easier than retrofitting later.
What to Do Right Now
If you're reading this in 2025, here's your priority list:
1. Audit your AI use. Make a complete list of every AI system and feature in your business—including third-party tools.
2. Classify by risk level. Go through the list and categorize each system as unacceptable, high, limited, or minimal risk.
3. Stop using banned AI. If you're using social scoring or unauthorized surveillance, shut it down.
4. Add transparency labels. If you have chatbots or generate AI content, add simple disclosures before the August 2026 deadline.
5. Update your Privacy Policy. Add a section on AI use. Explain what systems you use and how they work.
6. For high-risk AI, start documentation now. Don't wait for the deadline. Begin risk assessments, technical documentation, and planning for human oversight.
7. Mark the deadlines. August 2, 2026 (transparency and most high-risk compliance) and August 2, 2027 (high-risk AI embedded in regulated products). Work backwards from there.
The Bigger Picture
The EU AI Act isn't designed to stop AI innovation. It's designed to make sure AI is developed responsibly—especially when it affects people's lives, livelihoods, and rights.
For most businesses, compliance isn't that onerous. Add a few disclosures. Update your privacy policy. Document your AI usage.
For companies using high-risk AI, it's more involved. But the alternative—waiting until regulators come knocking—is worse.
Early compliance can be a competitive advantage. Enterprise customers are starting to ask about AI governance. Being able to say "Yes, we're compliant with the EU AI Act" will matter.
The law is here. The deadlines are set. The question isn't whether you'll comply—it's whether you'll do it early, calmly, and correctly, or late, frantically, and expensively.
Choose wisely.