AI Systems Are Now Banned in the EU. Australia Isn’t Even Close.
TL;DR: The EU has formally banned a set of AI systems it considers “unacceptable risk”. The penalties are severe. Australia has nothing comparable. If you operate in both markets, or if you influence policy inside your organisation, you now face two different worlds: one with strict enforcement and one with voluntary guidance. This article explains what is banned, why it matters, and what organisations should be doing now.
Why this AI regulation moment matters
On 2 February 2025, the EU’s AI Act reached its first compliance deadline. Entire categories of AI are now prohibited outright, and regulators can enforce those prohibitions. These are not edge cases. They include systems that manipulate decisions, exploit vulnerabilities, scrape facial images, or infer sensitive characteristics.
Most organisations using AI do not realise how close they are to prohibited behaviour. They also underestimate how weak their internal governance is compared with what Europe now expects.
Australia, by comparison, still relies on voluntary guardrails. No bans. No enforcement mechanism. No substantial penalties.
This gap creates risk for any business that operates internationally or relies on vendors whose models may interact with European citizens.
What the EU has banned
The Act defines four risk levels: unacceptable, high, limited and minimal. Only one matters here: unacceptable risk.
These systems are now prohibited across the EU:
- AI that performs social scoring
- AI that manipulates decisions through deception or subliminal influence
- AI that exploits vulnerabilities such as age, disability or socioeconomic status
- AI that predicts criminal behaviour based solely on profiling or personality traits
- AI that uses biometric data to infer sensitive characteristics such as race, political opinions or sexual orientation
- AI that performs real-time remote biometric identification in publicly accessible spaces for law enforcement, outside narrow exceptions
- AI that infers emotions in workplaces or schools
- AI that builds facial recognition databases from scraped images or camera feeds
Any organisation deploying these systems faces fines of up to €35 million or 7 percent of global annual turnover, whichever is higher. Enforcement bodies are being appointed now.
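Because the ceiling is whichever figure is larger, it scales with company size. A quick illustrative calculation in Python (the turnover figure is hypothetical):

```python
# Illustrative only: fines for prohibited practices under the AI Act are
# capped at EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
def fine_ceiling(worldwide_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * worldwide_turnover_eur)

# Hypothetical company with EUR 2 billion in turnover: 7% is EUR 140 million,
# which exceeds EUR 35 million, so EUR 140 million is the applicable ceiling.
print(f"EUR {fine_ceiling(2_000_000_000):,.0f}")  # EUR 140,000,000
```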
This is serious stuff.
The list is broad enough that many consumer, HR, CX, analytics, and security use cases sit uncomfortably close to the boundary.
Why these AI systems were banned
The prohibited systems share one characteristic: they affect people’s rights without adequate safeguards.
Emotion inference in the workplace is inaccurate and extremely hard to validate.
Real-time biometric tracking in public spaces creates surveillance that citizens cannot avoid. Social scoring creates systemic discrimination. Predicting crimes from profiling is unreliable and unlawful (there goes Minority Report).
These systems combine weak evidence with high-impact decisions. Regulators are targeting the outcome, not the technology.
What this means for any organisation operating in or adjacent to the EU
Several practical implications follow.
- You cannot rely on vendor claims. If a model uses biometrics, scrapes images or infers sensitive traits, you carry the risk.
- You cannot assume your internal tools are exempt. Workplace-focused systems fall within the ban if they infer emotions or exploit vulnerabilities.
- You cannot separate your Australian operations from your European ones. If data, models or processes overlap, enforcement follows the system, not the jurisdiction of the parent company.
- You cannot wait for full guidance. The EU will publish additional details, but the prohibitions are already live.
This is the first real enforcement moment in global AI regulation.
The message is clear: regulate where harm is structural, not incidental.
Where Australia sits by comparison
Australia currently operates with voluntary guidance through the federal government’s Voluntary AI Safety Standard. No bans. No meaningful audit structure. No penalties. No clear definition of unacceptable practices.
The approach is advisory rather than supervisory. Most organisations treat it as “guidance to consider” rather than an enforceable standard.
This means two things:
- Australian organisations often deploy AI without clear governance checks and discover risks only when an incident occurs.
- Organisations with any European exposure now face a split environment: strict enforcement in one market and optional compliance in the other.
The gap will widen quickly unless Australia accelerates its regulatory work.
Why this matters even if you are not in Europe
Regulation arrives locally through expectation before it arrives through law. When one major region creates formal bans, internal legal teams, risk committees, auditors and procurement functions begin applying those standards everywhere to avoid multi-jurisdictional conflict.
- If your vendor is subject to the AI Act, your usage becomes indirectly governed by it.
- If your model touches EU data, you fall inside enforcement scope.
- If your internal processes resemble any prohibited practice, you will need policy and documentation to prove compliance.
The Act will shape internal audit, procurement, model evaluation and incident reporting frameworks even outside Europe.
How organisations should respond now
A clear set of actions reduces risk before enforcement matures.
- Review any use case involving biometrics. Check whether facial recognition, voice patterns or behavioural signals appear anywhere in your stack.
- Audit models that classify people. If the system infers attributes, flags risk profiles or uses subjective categories, it needs formal review.
- Check vendor contracts. Ensure suppliers are not using prohibited methods upstream. Liability travels.
- Document your governance decisions. Audit trails, review notes and decision logs matter more than policy documents (a minimal register sketch follows below).
- Map your data flows. If European data passes through your systems, you are in scope.
- Update your internal AI policy. Policies need explicit statements on prohibited practices, review steps and escalation.
These steps are practical and achievable with existing resources. They create a defensible position before oversight increases.
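If you need a concrete starting point for the register behind those audit trails, here is a minimal sketch in Python. The field names, flag wording and escalation logic are illustrative assumptions, not terminology from the Act, and any real classification decision belongs with legal counsel.

```python
# Minimal sketch of an internal AI system register. All fields and flag
# rules are illustrative assumptions, not an official AI Act assessment.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    uses_biometrics: bool = False          # face, voice, gait, keystrokes
    infers_emotions: bool = False          # workplace/school emotion inference
    infers_sensitive_traits: bool = False  # e.g. beliefs, health, orientation
    scrapes_facial_images: bool = False    # untargeted image scraping
    processes_eu_data: bool = False        # any EU personal data in the flow
    review_notes: list[str] = field(default_factory=list)

    def prohibited_risk_flags(self) -> list[str]:
        """Flag overlaps with banned categories that warrant formal review."""
        flags = []
        if self.infers_emotions:
            flags.append("possible workplace/education emotion-inference ban")
        if self.uses_biometrics and self.infers_sensitive_traits:
            flags.append("possible biometric-categorisation ban")
        if self.scrapes_facial_images:
            flags.append("possible facial-database scraping ban")
        if flags and self.processes_eu_data:
            flags.append("EU data involved: escalate to legal/compliance")
        return flags

# Usage: log every system, review anything that raises a flag.
hr_tool = AISystemRecord(
    name="candidate-screening-v2",
    vendor="ExampleVendor",
    uses_biometrics=True,
    infers_emotions=True,
    processes_eu_data=True,
)
for flag in hr_tool.prohibited_risk_flags():
    print(flag)
```

Even a spreadsheet with these columns beats having nothing to show when a regulator, auditor or enterprise customer asks how a system was assessed.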
FAQs
Q: Are Australian companies exposed if they have no EU customers?
A: Exposure occurs through data flows, vendors and model training sources. Direct customers are not the only path.
Q: Do these bans affect internal HR or productivity tools?
A: Yes. Any system inferring emotions, behaviour or sensitive traits in workplace settings is affected.
Q: What is the biggest compliance gap today?
A: Documentation. Most organisations cannot show how decisions were made or how risks were evaluated.
Q: Are exemptions predictable?
A: Only in narrow cases involving safety or law-enforcement authorisation. None apply to commercial use.
Q: Will Australia introduce similar bans?
A: Not in the near term. Current policy work is advisory and slow.
Q: What should be updated first?
A: Your AI policy, risk logs and procurement standards. These anchor the rest of the governance work.
If you need a clear, defensible AI policy framework and an audit process that holds up against emerging regulation, the AI Policy and Audits program gives you the structure to put this in place quickly and properly.
