Anti-Oppressive Practice & AI

AI will not flag oppression unless you tell it to

Large language models are trained on the world as it is, not the world as it should be. That means structural inequality, institutional racism, and systemic bias are baked into every output. If practitioners do not prompt AI to surface these issues, the technology will reproduce them and present the result as neutral.


78% of AI training data reflects Western, English-language sources.
0 major LLMs proactively flag structural oppression in their outputs.
100% of the responsibility sits with the practitioner using the tool.

The Problem

AI defaults to the dominant narrative

When you ask an AI tool to draft a care plan, summarise a case, or suggest next steps, it draws on patterns in its training data. Those patterns overwhelmingly reflect the perspectives, language, and assumptions of dominant groups.

💬

It mirrors the system

AI has learned from decades of case notes, assessments, and policy documents written within systems that have historically marginalised Black, disabled, working-class, and migrant communities. It does not question those patterns. It replicates them.

🚫

It does not volunteer what is missing

Ask an AI to write a risk assessment and it will produce a competent, structured document. It will not tell you that the framing centres deficit over strength, or that the language carries racialised assumptions. You have to ask.

⚖️

It presents bias as objectivity

The most dangerous feature of AI-generated text is how confident and neutral it sounds. Outputs carry no disclaimers about whose perspective is centred or whose experience is absent. The practitioner must bring that critical lens.


The Prompt Gap

Same task, different prompt, different outcome

Anti-oppressive practice in AI is not a theoretical add-on. It is a prompting discipline. The difference between reproducing oppression and challenging it often comes down to a single sentence in your prompt.

Without AOP lens

Prompt: "Summarise this family assessment and identify the key risks."

Result: The AI produces a deficit-focused summary. Risk factors are listed without context. Cultural practices are framed as concerns. Strengths are absent. The output reads as objective but centres institutional assumptions.

With AOP lens

Prompt: "Summarise this family assessment. Identify risks AND strengths. Flag any language that carries racialised, gendered, or class-based assumptions. Note whose perspective is centred and whose is missing."

Result: The AI still identifies risks, but also surfaces family strengths and protective factors. It flags deficit-based language. It notes where the child's voice or the family's own account is absent. The practitioner gets a fuller, more honest picture.
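In code terms, that one added sentence can live in a reusable helper so the lens is never left out. A minimal sketch in Python; the helper name and instruction wording are illustrative rather than a fixed standard:

# Hypothetical helper: appends an anti-oppressive lens to any drafting task.
AOP_LENS = (
    "Identify strengths as well as risks. "
    "Flag any language that carries racialised, gendered, or class-based "
    "assumptions. Note whose perspective is centred and whose is missing."
)

def with_aop_lens(task: str) -> str:
    """Return the task prompt with the AOP instructions appended."""
    return f"{task.rstrip('. ')}. {AOP_LENS}"

# Same task, two prompts. The difference is one added instruction.
bare = "Summarise this family assessment and identify the key risks."
print(with_aop_lens(bare))

The design point is that the lens becomes the default, not a memory test.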

This is the core argument

AI does not do anti-oppressive practice. Practitioners do. But if we adopt AI tools without building AOP into every prompt, every workflow, and every governance framework, we are automating the very biases our profession claims to challenge.


In Practice

What AOP-informed AI use looks like

AOP-informed AI use is not about rejecting the technology. It is about using it with the same critical awareness we bring to every other tool in social care.

🔍

Prompt with intention

Every prompt should include an explicit instruction to consider power, identity, and structural context. If you do not ask the AI to look for oppression, it will not look. That is not a flaw in the technology. It is a feature of how language models work.
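One way to honour that rule is to fix the instruction at the system level, so individual practitioners do not have to remember to retype it. A minimal sketch, assuming the OpenAI Python client (openai >= 1.0); the model name and prompt wording are placeholders to adapt, not a recommendation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

AOP_SYSTEM_PROMPT = (
    "Before answering, consider power, identity, and structural context. "
    "Surface strengths as well as deficits, flag deficit-based or biased "
    "language, and state whose perspective is centred and whose is absent."
)

def draft(task: str) -> str:
    """Run a drafting task with the AOP instruction fixed in the system role."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: use whatever model your organisation approves
        messages=[
            {"role": "system", "content": AOP_SYSTEM_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

Fixing the lens in the system role means it applies to every task routed through the workflow, whoever writes the user prompt.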

🧠

Review with a critical lens

Before using any AI output, ask: whose perspective is centred here? Whose experience is absent? Does this language carry assumptions about race, class, gender, disability, or culture? Would I write this myself, or am I letting the machine write my values?
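Those questions can become a hard gate rather than a mental note. A minimal sketch of a human review checkpoint; the checklist wording is drawn from the questions above, and the approval flow is invented for illustration:

# Hypothetical review gate: nothing is approved until a human has
# answered every AOP question about the AI output.
AOP_REVIEW_CHECKLIST = [
    "I have identified whose perspective is centred in this output.",
    "I have noted whose experience is absent.",
    "I have checked the language for assumptions about race, class, "
    "gender, disability, or culture.",
    "I would be willing to write this myself.",
]

def review(output: str) -> bool:
    """Walk the reviewer through the checklist; approve only if every item passes."""
    print(output)
    for item in AOP_REVIEW_CHECKLIST:
        if input(f"{item} (y/n): ").strip().lower() != "y":
            return False  # back for rework, not into a person's record
    return True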

📋

Govern with AOP at the centre

Organisation-level AI policies must name anti-oppressive practice as a governance requirement, not an aspiration. Prompt templates, review checklists, and audit processes should all embed AOP as a non-negotiable standard.
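What that standard can look like in an audit trail, sketched minimally; the field names and template id are invented for illustration:

# Hypothetical audit record: every AI-assisted document carries evidence
# that the AOP standard was applied, not just an assertion that it was.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AopAuditEntry:
    document_id: str
    prompt_template: str      # which approved, AOP-embedded template was used
    aop_lens_included: bool   # did the prompt carry the AOP instruction?
    reviewer: str             # who completed the human review
    checklist_passed: bool
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

entry = AopAuditEntry(
    document_id="assessment-0421",
    prompt_template="family-assessment-v2",
    aop_lens_included=True,
    reviewer="j.smith",
    checklist_passed=True,
)

An auditor can then check whether the standard was actually applied, not just stated in policy.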

🤝

Train the workforce, not just the model

AI literacy without anti-oppressive literacy is dangerous. Practitioners need to understand both how the technology works and how power operates within it. That is what TESSA Training is built to do.


The Argument

Why this matters at the AI summit

The conversation about AI in social care has so far been dominated by two camps: enthusiasts who see efficiency gains and sceptics who see risk. Both camps are missing something fundamental.

AI is not neutral. It is trained on data that reflects existing power structures. When social care teams use AI tools without an anti-oppressive framework, they are not simply saving time. They are scaling bias. They are producing assessments, care plans, and recommendations that sound professional and read as objective but carry the same structural assumptions that anti-oppressive practice exists to challenge.

The fix is not to avoid AI. The fix is to treat anti-oppressive practice as a core AI competency. Every prompt a practitioner writes should explicitly ask the AI to consider power, identity, and structural context. Every output should be reviewed through an AOP lens before it goes anywhere near a person's life. Every organisational AI policy should name anti-oppressive practice as a governance requirement, not a value statement.

This is not about adding a tick-box to an existing process. It is about recognising that AI adoption without AOP is not just incomplete. It is harmful. And the social care sector, of all sectors, should know better.


Train your team to use AI with an anti-oppressive lens

TESSA Training builds AOP into every module. Not as an afterthought, but as the foundation.

Start training