The EU AI Act affects every business deploying AI phone assistants — as a provider or as a deployer. Yet the topic remains surprisingly under-discussed in the voice AI industry.
This article is a deep dive into the actual legal text, with original quotes from Regulation (EU) 2024/1689, so you can assess which obligations apply to you and what you concretely need to do.
What Is the EU AI Act?
Regulation (EU) 2024/1689 — better known as the EU AI Act — is the world's first comprehensive AI law. It entered into force on August 1, 2024, but applies in stages:
- February 2, 2025: Prohibited AI practices and AI literacy obligation
- August 2, 2025: Rules for general-purpose AI models (GPAI)
- August 2, 2026: Full applicability — including transparency obligations and high-risk rules
- August 2, 2027: High-risk systems in regulated products
The law classifies AI systems into four risk categories: prohibited, high-risk, transparency-required, and minimal. The category determines which obligations apply.
Where Do Voice Agents Fall?
This is where it gets concrete for our industry. An AI phone assistant is:
- Not prohibited — it doesn't perform social scoring, manipulation, or biometric identification
- Not high-risk — it doesn't make decisions about creditworthiness, employment, or educational access
- Transparency-required — it interacts directly with natural persons
Transparency-required sounds straightforward at first. But the concrete requirements are substantial.
The regulation does allow an exception — when the AI nature is "obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use." For a chatbot on a website labelled "AI Assistant," you could argue that. For a phone call with a human-sounding voice? Hardly. That's the point: the better the voice sounds, the more important the disclosure becomes.
Recital (132) of the regulation underscores this — and goes further:
"In particular, natural persons should be notified that they are interacting with an AI system. [...] The characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken into account to the extent the AI system is intended to interact with those groups as well."
— Recital (132), Regulation (EU) 2024/1689
This means: the information must not only be given — it must be understandable and accessible. For a phone call, that means: a clearly spoken disclosure, not buried in a subordinate clause.
Obligation 1: AI Disclosure — Article 50(1)
The core transparency obligation is stated in Article 50(1):
"Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed that they are interacting with an AI system."
— Art. 50(1), Regulation (EU) 2024/1689
Two words are critical here: "designed and developed." The law obliges the provider — the company building the platform — to technically build AI disclosure into the system. At the same time, deployers (the businesses using the voice agent) bear the responsibility to use the system correctly and not circumvent the disclosure.
In practice, this means: the provider must ensure that an AI disclosure is a default part of the system. The deployer must ensure that this disclosure actually happens during use.
And when must the disclosure happen? Article 50(5) is unambiguous:
"The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and distinguishable manner at the latest at the time of the first interaction or exposure."
— Art. 50(5), Regulation (EU) 2024/1689
At the latest at the time of the first interaction. For a phone call, that's the greeting. Not the fine print on a website. Not the terms of service. The first spoken message.
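As a sketch of what "designed and developed" can mean in practice, a provider might make the disclosure a non-removable part of the greeting template and validate every configuration against it. The names, wording, and structure below are hypothetical, not taken from any specific platform:

```python
AI_DISCLOSURE = "Please note: you are speaking with an AI assistant."

def build_greeting(business_name: str, custom_text: str = "") -> str:
    """Compose the first spoken message with the disclosure baked in."""
    return f"Hello, this is {business_name}. {AI_DISCLOSURE} {custom_text}".strip()

def validate_greeting(greeting: str) -> str:
    """Reject any configuration whose greeting lacks the disclosure.

    This enforces Art. 50(1) (disclosure built into the system) and
    Art. 50(5) (at the latest at first interaction) at config time.
    """
    if AI_DISCLOSURE not in greeting:
        raise ValueError("Greeting is missing the mandatory AI disclosure")
    return greeting
```

The point of the validation step is that a deployer editing the greeting cannot silently strip the disclosure: the system refuses the configuration instead of placing the compliance burden on the phone call itself.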
Obligation 2: Mark Synthetic Audio — Article 50(2)
The second obligation is barely discussed across the industry:
"Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated."
— Art. 50(2), Regulation (EU) 2024/1689
Every voice agent uses text-to-speech (TTS), so the generated audio is synthetic. From August 2, 2026, this audio must carry a machine-readable marking identifying it as AI-generated.
The regulation qualifies: "as far as this is technically feasible" and considering the "generally acknowledged state of the art." Recital (133) names concrete methods:
"Such techniques and methods should be sufficiently reliable, interoperable, effective and robust [...] such as watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, logging methods, fingerprints or other techniques."
— Recital (133), Regulation (EU) 2024/1689
In practice, the entire industry is still developing solutions here. Technical options being worked on:
- Audio watermarking: Inaudible signals embedded into TTS audio. ElevenLabs already offers this.
- C2PA Content Credentials: An open standard by the Coalition for Content Provenance and Authenticity, documenting provenance and generation method in metadata.
- Metadata tagging: Marking in the audio container format (e.g., WAV/MP3 headers) with AI generation information.
The EU Commission is working on concrete guidelines for the practical implementation of Art. 50 transparency obligations in 2026. Until then: the state of the art determines what is "technically feasible" — and that state is evolving rapidly.
Obligation 3: AI Literacy — Article 4
An obligation that has been in effect since February 2, 2025, and is overlooked by many businesses:
"Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf."
— Art. 4, Regulation (EU) 2024/1689
Here, both are responsible: providers and deployers. If your business deploys a voice agent, your employees must understand what they're doing. What this concretely means is deliberately left open — but the minimum should be: your staff knows that an AI system is handling calls, how it works, and where its limitations are.
For providers, this also means: training materials, documentation, and transparent communication about how the system works should be part of the product — not an afterthought.
How Does This Relate to the GDPR?
The AI Act doesn't replace the GDPR — it complements it. Both regulations run in parallel, and for voice agents, they overlap significantly.
GDPR Art. 6 — Lawful basis for processing: Every data processing operation by a voice agent requires a lawful basis. The two most relevant for appointment scheduling systems:
- Art. 6(1)(b) — Performance of a contract: the caller wants to book an appointment, processing is necessary for that purpose
- Art. 6(1)(f) — Legitimate interest: the business has a legitimate interest in efficient appointment management
Both bases require that only data actually necessary for the respective purpose is collected. Which data that is depends on the specific use case — name, phone number, and reason for the appointment may well be necessary for a medical practice or a trades business. The key is that collection is proportionate to the purpose.
GDPR Art. 13 — Information obligation when collecting data: When a voice agent collects personal data (name, phone number, appointment request), the controller must inform: who processes the data, for what purpose, on what legal basis, and how long it's stored.
Reading out a complete privacy notice on the phone is obviously not practical. A pragmatic approach: the AI assistant should be able to provide information on request — who processes the data, for what purpose, and where to find the full privacy notice. This requires that this information is stored as part of the system configuration.
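One way to sketch that pragmatic approach: store the Art. 13 information as part of the agent's configuration, so the assistant can recite it whenever a caller asks. Every value below is a hypothetical placeholder, not a real controller or URL:

```python
# Hypothetical Art. 13 information held in the agent's configuration.
PRIVACY_INFO = {
    "controller": "Example Practice GmbH",
    "purpose": "appointment scheduling",
    "legal_basis": "Art. 6(1)(b) GDPR (performance of a contract)",
    "retention": "90 days",
    "full_notice_url": "https://example.com/privacy",
}

def privacy_answer(info: dict) -> str:
    """Spoken answer the agent can give when a caller asks about their data."""
    return (
        f"Your data is processed by {info['controller']} "
        f"for the purpose of {info['purpose']}, based on {info['legal_basis']}, "
        f"and stored for {info['retention']}. "
        f"You can find the full privacy notice at {info['full_notice_url']}."
    )
```

The design choice here is that the answer is generated from structured configuration rather than free-form model output, so what the agent says about data processing stays accurate when the configuration changes.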
GDPR Art. 13(2)(f) — Automated decision-making: The GDPR requires disclosing the existence of automated decision-making within the meaning of Art. 22, "including profiling," together with "meaningful information about the logic involved." Whether autonomous appointment booking rises to an Art. 22 decision is debatable, but a voice agent does make decisions automatically. Here too, transparency about how the system works is the safest path.
The combination is what matters:
| Obligation | GDPR | AI Act |
|---|---|---|
| AI disclosure during conversation | Not explicitly required | Art. 50(1): Required |
| Timing of information | "At the time of collection" (Art. 13) | "At the latest at first interaction" (Art. 50(5)) |
| Mark audio as synthetic | Not required | Art. 50(2): Required |
| Privacy notice | Art. 13: Required | Supplements, doesn't replace |
| AI literacy for staff | Not required | Art. 4: Required since Feb. 2025 |
The AI Act closes the gaps the GDPR left open for AI systems. Some obligations — like AI literacy under Art. 4 — have been in effect since February 2025. The transparency obligations under Art. 50 follow in August 2026.
The Penalties: A Serious Matter
The AI Act has three penalty tiers:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Art. 5) | EUR 35M or 7% of global annual turnover |
| Transparency obligations (Art. 50) | EUR 15M or 3% of global annual turnover |
| False information to authorities | EUR 7.5M or 1% of global annual turnover |
For SMEs, there's a relief clause — Article 99(6):
"In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount referred to in paragraphs 3, 4 and 5, whichever thereof is lower."
— Art. 99(6), Regulation (EU) 2024/1689
This means: for an SME with EUR 500,000 annual revenue, the maximum penalty for a transparency violation would be EUR 15,000 (3% of turnover) instead of EUR 15M. Depending on margins, that can still be a significant amount for a small business. And the GDPR has shown how enforcement tends to evolve: fines are rare at first, then become routine.
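The SME cap can be expressed as a one-line calculation. The sketch assumes the standard rule for Art. 50 violations ("EUR 15M or 3% of turnover, whichever is higher") and the Art. 99(6) inversion to "whichever is lower" for SMEs; the function name is illustrative:

```python
def max_fine_art50(annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine for an Art. 50 transparency violation under Art. 99.

    Standard rule: EUR 15M or 3% of global annual turnover, whichever is
    higher. Art. 99(6) flips this to "whichever is lower" for SMEs,
    including start-ups.
    """
    fixed_cap = 15_000_000.0
    turnover_cap = 0.03 * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)
```

For the EUR 500,000 example above, the SME cap works out to 3% of turnover, i.e. EUR 15,000, while a large undertaking with the same turnover would face the fixed EUR 15M ceiling.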
What to Look For
The legal texts yield concrete checkpoints — for your own business and when choosing a voice AI provider:
1. Is an AI disclosure built into the greeting? Art. 50(1) obliges the provider to design the system so that disclosure happens. Art. 50(5) requires it at the latest at first interaction. A greeting without an AI disclosure is a compliance risk from August 2026.
2. How is synthetic audio handled? From August 2026, TTS audio must carry a machine-readable marking as AI-generated (Art. 50(2)). Technical standards are still evolving — but it's worth having this on your radar today.
3. Where is voice data processed? The AI Act doesn't replace the GDPR. If caller data flows through servers outside the EU, you need Standard Contractual Clauses (SCCs) or EU data residency.
4. Is there AI literacy training material? Art. 4 has been in effect since February 2025. Your provider should supply materials explaining how the system works and where its limitations are.
5. Are retention periods transparent? The GDPR demands storage limitation (Art. 5(1)(e)). Check how long call data is stored — at your provider and in your own business. Some data must be retained for tax or contractual reasons, but the periods should be documented and justified.
What We Do at FlowCaptain
We're implementing the AI Act requirements proactively:
- AI disclosure in the greeting — integrated as a mandatory system component in every greeting
- 90-day data retention — call data is automatically deleted after 90 days, in line with the GDPR's storage limitation principle (Art. 5(1)(e))
- EU hosting — database (Supabase, Frankfurt), API (Railway, Amsterdam), dashboard (Vercel, EU Edge)
- Open sub-processor list — you know exactly who processes your data
- AI literacy — we're building documentation and training materials to help our customers fulfil their Art. 4 obligation
On audio watermarking (Art. 50(2)), the entire industry is still developing solutions. We're monitoring technical standards and the upcoming EU guidelines to implement a solution in time.
Conclusion
The EU AI Act is no reason to panic — but a reason to prepare now. The regulation sets sensible requirements: those who deploy AI should do it transparently. Those who generate AI content should label it. Those who operate AI systems should understand what they're doing.
For SMEs in the DACH region deploying voice agents for appointment scheduling, the obligations are manageable — but real. And the best time to prepare isn't August 2026. It's now.
Further Resources
- Full text of Regulation (EU) 2024/1689 (AI Act)
- Full text of Regulation (EU) 2016/679 (GDPR)
- EU AI Act Compliance Checker — find out in 10 minutes which requirements apply
- AI Act Single Information Platform — official service desk with FAQ and guidelines
- German Federal Government on the AI Act
- Digital Austria — AI Act — information for Austrian businesses
Sources: Regulation (EU) 2024/1689 (AI Act), Regulation (EU) 2016/679 (GDPR). Quoted passages are from the official versions in the Official Journal of the European Union. This article does not constitute legal advice.