
EU AI Act: What German Startups Need to Know Before August 2026

The AI Act is fully enforceable from August 2026. Risk categories, obligations, penalties, and the startup-specific exemptions that matter.

Immo Ait Stapelfeld, Rechtsanwalt · Verified April 12, 2026 · 5 min read

The EU AI Act is fully enforceable from August 2, 2026. If your startup uses AI in any form, you need to understand which rules apply to you. Most startups fall into the low-risk category and face minimal obligations. But getting it wrong on the few rules that do apply can cost up to EUR 35 million.

The Four Risk Categories

The AI Act classifies every AI system into one of four categories. Your obligations depend entirely on which category your product falls into.

Unacceptable risk (banned). These AI practices are prohibited outright since February 2, 2025. They include social scoring systems, real-time biometric identification in public spaces, emotion recognition in the workplace, and AI that manipulates behavior through subliminal techniques. If your product does any of these, stop.

High risk. AI systems used in areas listed in Annex III of the regulation: recruitment and HR decisions, credit scoring, insurance risk assessment, access to essential services, law enforcement, and migration management. These systems face the heaviest requirements.

Limited risk. Chatbots, deepfake generators, and other AI that interacts with people. The main obligation is transparency: users must know they are interacting with AI.

Minimal risk. Spam filters, translation tools, autocorrect, simple image processing. No regulatory obligations under the AI Act.

Most startup products fall into minimal or limited risk. If you build a SaaS tool that uses AI for internal analytics, text generation, or customer support chatbots, you are likely in the limited-risk category. The key question is whether your AI makes or supports decisions about people in the high-risk sectors listed above.
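As a rough first-pass triage, the classification logic above can be sketched in a few lines. This is illustrative only, not legal advice: the category labels follow this article, but the keyword sets are our own simplification of Art. 5 and Annex III.

```python
# Illustrative triage only, NOT legal advice. The keyword sets below are a
# simplified stand-in for the actual lists in Art. 5 and Annex III.

PROHIBITED = {            # Art. 5: banned outright
    "social scoring", "real-time public biometric identification",
    "workplace emotion recognition", "subliminal manipulation",
}
ANNEX_III_AREAS = {       # Art. 6 / Annex III: high-risk decision areas
    "recruitment", "credit scoring", "insurance risk assessment",
    "essential services", "law enforcement", "migration",
}

def classify(practice=None, decision_area=None, interacts_with_humans=False):
    """Return the AI Act risk bucket for a simplified product description."""
    if practice in PROHIBITED:
        return "unacceptable (banned)"
    if decision_area in ANNEX_III_AREAS:
        return "high risk"
    if interacts_with_humans:            # chatbots, synthetic content
        return "limited risk (transparency duties)"
    return "minimal risk"

# A customer-support chatbot: no banned practice, no Annex III decisions,
# but it talks to people, so transparency duties apply.
print(classify(interacts_with_humans=True))  # limited risk (transparency duties)
```

The order of the checks matters: a banned practice stays banned even if the product also chats with users, and an Annex III use case is high-risk regardless of how it is presented to the user.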

What Startups Must Do

Everyone: AI Literacy (Article 4)

This applies to every company that deploys AI, regardless of risk category. Since February 2, 2025, organizations must ensure that staff operating AI systems or using their outputs have sufficient AI competence. In practice, this means documented training for employees who work with AI tools.

This is the most commonly overlooked requirement. It applies even if you only use third-party AI tools like ChatGPT or GitHub Copilot internally.

Limited Risk: Transparency

If your product includes a chatbot or generates synthetic content, you must disclose that to users. A simple "This response was generated by AI" notice is typically sufficient. Deepfakes and AI-generated images must be labeled as such.

High Risk: The Full Compliance Stack

If your AI system falls into the high-risk category, the requirements are substantial:

  • Risk management system: Documented process for identifying and mitigating risks
  • Data governance: Training data must be relevant, representative, and free of errors to the best extent possible
  • Technical documentation: Full documentation of the system's design, purpose, and limitations
  • Record-keeping: Automatic logging of system operations
  • Transparency: Users must receive clear instructions for use
  • Human oversight: A human must be able to interpret and override the system
  • Accuracy and robustness: The system must perform reliably and resist manipulation
  • Conformity assessment: Before market placement, either a self-assessment or a third-party audit

Implementation typically takes 8 to 14 months. If your system might be high-risk, start now.

Penalties

Maximum fines under Art. 99 (for non-SMEs, whichever amount is higher):

  • Prohibited AI practices (Art. 5): EUR 35 million or 7% of global annual turnover
  • High-risk AI obligations: EUR 15 million or 3% of global annual turnover
  • Incorrect information to authorities: EUR 7.5 million or 1% of global annual turnover

For SMEs and startups, fines are capped at the lower of the fixed amount or the percentage. A startup with EUR 2 million in revenue faces a maximum of EUR 60,000 for high-risk violations (3% of EUR 2 million), not EUR 15 million.
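The SME cap is simple arithmetic: take the lower of the fixed amount and the turnover percentage. A minimal sketch, using the figures from the table above:

```python
# SME fine ceiling under the AI Act: the LOWER of the fixed cap and the
# turnover-based cap (for large companies it is the higher of the two).

def sme_fine_ceiling(fixed_cap_eur, turnover_pct, annual_turnover_eur):
    return min(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Startup with EUR 2 million turnover, high-risk violation (EUR 15M or 3%):
print(sme_fine_ceiling(15_000_000, 0.03, 2_000_000))  # 60000.0
```

For a company large enough that the percentage exceeds the fixed amount, the SME rule still picks the fixed amount, which is why the cap only ever helps smaller firms.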

Startup-Specific Relief

The AI Act includes provisions specifically for smaller companies:

Regulatory sandboxes. Each EU member state must establish at least one AI regulatory sandbox by August 2, 2026. Startups and SMEs get prioritized, cost-free access to these controlled testing environments where you can develop and test AI systems under regulatory supervision without full compliance exposure.

Reduced fees. Conformity assessments and regulatory fees are reduced for SMEs.

Simplified documentation. The European Commission is developing simplified technical documentation templates for smaller companies.

Proportionality. Fines are proportional to company size, as described above.

Timeline

  • February 2, 2025: Prohibited practices banned; AI literacy obligation starts
  • August 2, 2025: Rules for general-purpose AI models (like GPT) apply
  • August 2, 2026: Full AI Act enforcement, including high-risk obligations
  • August 2, 2027: Extended transition for high-risk AI in already regulated products (medical devices, machinery)

The German Situation

Germany has not yet passed a national implementing law. The Bundesnetzagentur (Federal Network Agency) is the likely national enforcement authority, but procedural details remain open. This creates uncertainty about how enforcement will work in practice. The EU-level rules apply directly regardless of national implementation.

What To Do Now

[Interactive checklist: AI Act Compliance Checklist for Startups]

Bottom Line

Most startups will face only two AI Act obligations: AI literacy training (already required) and transparency labeling (if you have a chatbot or generate content). The high-risk category affects a smaller subset, mostly in HR tech, fintech, and insurtech. If you are in that group, the compliance burden is real and you should start now. For everyone else, classify your systems, train your team, label your AI, and move on.

Legal Sources

  • Art. 5 EU AI Act: Prohibited AI practices
  • Art. 6 EU AI Act: High-risk AI systems classification
  • Art. 4 EU AI Act: AI literacy obligation for all deployers
  • Art. 99 EU AI Act: Penalties and fines

