The EU AI Act is the world’s first comprehensive AI regulation, applying to any AI system that operates in the EU or affects EU citizens. It establishes requirements for data minimization, transparency, and documentation — with stricter rules for high-risk AI systems. Blindfold helps you comply by ensuring personal data is removed from AI inputs, providing an audit trail of all data processing, and supporting EU data residency.

Timeline

| Date | Milestone |
| --- | --- |
| Aug 2024 | EU AI Act enters into force |
| Feb 2025 | Prohibited AI practices apply |
| Aug 2025 | General-purpose AI rules + governance |
| Aug 2026 | High-risk AI system obligations (Annex III) |
| Aug 2027 | Full enforcement for all AI systems |

Risk Categories

The EU AI Act classifies AI systems by risk level:

Unacceptable Risk

Banned outright. Social scoring, real-time biometric surveillance, and manipulative AI systems may not be deployed in the EU.

High Risk

Strict requirements. AI in healthcare, finance, HR, education, law enforcement. Must meet transparency, documentation, and data governance standards.

Limited Risk

Transparency obligations. Chatbots, AI-generated content. Must disclose AI involvement to users.

Minimal Risk

No requirements. Spam filters, AI in games. Most AI applications fall here.

Key Requirements for AI Systems

Requirement: Training and input data must be relevant, representative, and limited to what is necessary.
Risk: Sending full user conversations to LLMs includes far more personal data than necessary for the AI task.
With Blindfold: Tokenize PII before AI calls. The LLM receives only the information needed for its task; personal identifiers are replaced with anonymous tokens.
# Data minimization: only anonymized context reaches the AI
tokenized = blindfold.tokenize(user_message, policy="gdpr_eu")
response = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": tokenized.text}],
)
Requirement: AI systems must be transparent about how they process data, and users must be informed when interacting with AI.
With Blindfold: Audit logs document exactly what personal data was detected, what was anonymized, and what was sent to the AI provider, creating a clear record for transparency requirements.
Requirement: High-risk AI systems must maintain technical documentation and log all operations.
With Blindfold: Every API call is logged with entity types detected, policy used, timestamp, and region. Export these logs for regulatory documentation.
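To make the logging requirement concrete, here is a minimal sketch of the kind of per-call record described above (entity types, policy, timestamp, region). The function name and field names are illustrative assumptions, not the Blindfold SDK's actual log schema:

```python
from datetime import datetime, timezone

def make_audit_record(entity_types, policy, region):
    # Illustrative audit-log record for one anonymization call.
    # Field names are assumptions, not the SDK's real schema.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "policy": policy,
        "region": region,
        # Deduplicate and sort so records are stable and comparable
        "entity_types": sorted(set(entity_types)),
    }

record = make_audit_record(["Person", "Person", "Email Address"], "gdpr_eu", "eu")
# record["entity_types"] is ['Email Address', 'Person']
```

A record like this, written for every call, is what you would export as evidence for Articles 11-12 documentation duties.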
Requirement: Data used in AI systems must meet quality, relevance, and privacy standards.
With Blindfold: The detect() method lets you audit text for personal data without modifying it; useful for data governance reviews and quality checks.
# Audit data for PII without modifying it
detection = blindfold.detect(training_data)
for entity in detection.detected_entities:
    print(f"Found {entity.type}: {entity.text}")
Requirement: High-risk AI systems must allow human oversight and intervention.
With Blindfold: Tokenization is reversible; humans can always see the real data via detokenize(), while the AI works only with anonymized versions. This maintains human oversight of the actual information.
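The reversibility that makes human oversight possible can be illustrated with a self-contained toy: tokenize replaces an identifier with a numbered token and records a mapping, and detokenize restores the original for a human reviewer. This is a simplified stand-in (emails only, via regex), not the SDK's detection engine:

```python
import re

def toy_tokenize(text):
    # Replace email addresses with numbered tokens; return text + mapping.
    # Toy stand-in: the real SDK detects many entity types, not just emails.
    mapping = {}
    def repl(match):
        token = f"<Email Address_{len(mapping) + 1}>"
        mapping[token] = match.group(0)
        return token
    anonymized = re.sub(r"[\w.+-]+@[\w-]+\.\w+", repl, text)
    return anonymized, mapping

def toy_detokenize(text, mapping):
    # Restore original values: the human-oversight step.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

anon, mapping = toy_tokenize("Contact marie.dupont@example.fr about the refund.")
# anon == "Contact <Email Address_1> about the refund."
assert toy_detokenize(anon, mapping) == "Contact marie.dupont@example.fr about the refund."
```

The AI only ever sees the anonymized string; the mapping stays on your side, so a human can always reverse the substitution.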

How Blindfold Maps to the EU AI Act

| AI Act Requirement | Article | Blindfold Feature |
| --- | --- | --- |
| Data minimization | Art. 10 | tokenize() removes PII before AI input |
| Transparency | Art. 13 | Audit logs document all PII processing |
| Documentation | Art. 11-12 | Export audit logs for regulatory records |
| Data governance | Art. 10 | detect() audits data for PII |
| Human oversight | Art. 14 | detokenize() restores data for human review |
| Data protection by design | Art. 10 | SDK-level PII protection in your pipeline |

High-Risk AI Systems

The EU AI Act imposes stricter requirements on AI systems in sensitive domains such as healthcare:

Healthcare

AI Act Classification: High-risk (Annex III, Section 5)
Requirements: Robust data governance, thorough testing, documentation of training data, continuous monitoring.
Blindfold Approach:
  • Use region="us" or region="eu" depending on patient location
  • Apply hipaa_us (US patients) or gdpr_eu (EU patients) policy
  • Tokenize all PHI before clinical AI tools
  • Maintain audit trail for regulatory inspections
See HIPAA Compliance for healthcare-specific guidance.
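The region/policy selection described in the bullets above can be expressed as a small helper. This mapping logic is an application-level sketch, not a Blindfold SDK feature; the policy names (hipaa_us, gdpr_eu) and regions (us, eu) come from the guidance above:

```python
def select_policy(patient_location):
    # Pick Blindfold region + policy by patient location.
    # Illustrative helper, not part of the SDK.
    if patient_location == "us":
        return {"region": "us", "policy": "hipaa_us"}
    if patient_location == "eu":
        return {"region": "eu", "policy": "gdpr_eu"}
    raise ValueError(f"no policy configured for {patient_location!r}")

config = select_policy("eu")
# config == {"region": "eu", "policy": "gdpr_eu"}
```

Failing loudly on unknown locations (rather than falling back to a default) is deliberate: for high-risk systems you want an explicit decision per jurisdiction, not a silent guess.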

Code Examples

Data Minimization for AI Calls

from blindfold import Blindfold
from openai import OpenAI

blindfold = Blindfold(api_key="your-key", region="eu")
openai_client = OpenAI(api_key="your-openai-key")

# Customer support message with personal data
message = (
    "Hi, I'm Marie Dupont (marie.dupont@example.fr). "
    "I was charged twice on 02/10/2026 for order #12345."
)

# Remove personal data before AI processing (data minimization)
tokenized = blindfold.tokenize(message, policy="gdpr_eu")
# → "Hi, I'm <Person_1> (<Email Address_1>).
#    I was charged twice on 02/10/2026 for order #12345."

# AI only processes what it needs — no real personal data
completion = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": tokenized.text}],
)

# Restore for the human agent
restored = blindfold.detokenize(
    completion.choices[0].message.content,
    tokenized.mapping,
)

Audit Data for PII (Data Governance)

Use detect() to check datasets for personal data without modifying them:
# Audit training data for PII before using in AI systems
training_samples = [
    "Customer feedback: Great service from the team!",
    "John Smith at john@example.com reported a bug on 02/15.",
    "Order #98765 shipped to Berlin on schedule.",
]

for sample in training_samples:
    detection = blindfold.detect(sample, policy="gdpr_eu")
    if detection.entities_count > 0:
        print(f"PII found in: {sample[:50]}...")
        for entity in detection.detected_entities:
            print(f"  - {entity.type}: {entity.text}")

Relationship with GDPR

The EU AI Act and GDPR are complementary:
| Aspect | GDPR | EU AI Act |
| --- | --- | --- |
| Focus | Personal data protection | AI system safety and transparency |
| Scope | Any personal data processing | AI systems operating in the EU |
| Data requirements | Minimize collection | Minimize AI inputs + ensure quality |
| Documentation | Processing records (Art. 30) | Technical documentation (Art. 11) |
| Oversight | Data Protection Officers | AI governance structures |
Using Blindfold for both: Apply the gdpr_eu policy with the EU region to satisfy both regulations simultaneously. GDPR protects the personal data, while the audit trail satisfies AI Act transparency requirements.

EU AI Act Compliance Checklist

1. Classify your AI system: Determine whether your AI system is high-risk, limited-risk, or minimal-risk under the EU AI Act.
2. Minimize data in AI inputs: Use blindfold.tokenize() to remove personal data before AI processing.
3. Use the EU region: Set region="eu" for data processed in Europe; required for GDPR alignment.
4. Implement audit logging: Use Blindfold's audit trail to document what PII was detected and anonymized.
5. Audit training data: Use blindfold.detect() to scan training datasets for personal data.
6. Document your data pipeline: Record how data flows through your system, where PII is detected, and how it's protected.
7. Review regularly: As the EU AI Act phases in (through 2027), review your compliance posture with each milestone.