Large language models are trained on the world as it is, not the world as it should be. That means structural inequality, institutional racism, and systemic bias are baked into every output. If practitioners do not prompt AI to surface these issues, the technology will reproduce them and present the result as neutral.
When you ask an AI tool to draft a care plan, summarise a case, or suggest next steps, it draws on patterns in its training data. Those patterns overwhelmingly reflect the perspectives, language, and assumptions of dominant groups.
AI has learned from decades of case notes, assessments, and policy documents written within systems that have historically marginalised Black, disabled, working-class, and migrant communities. It does not question those patterns. It replicates them.
Ask an AI to write a risk assessment and it will produce a competent, structured document. It will not tell you that the framing centres deficit over strength, or that the language carries racialised assumptions. You have to ask.
The most dangerous feature of AI-generated text is how confident and neutral it sounds. Outputs carry no disclaimers about whose perspective is centred or whose experience is absent. The practitioner must bring that critical lens.
Anti-oppressive practice in AI is not a theoretical add-on. It is a prompting discipline. The difference between reproducing oppression and challenging it often comes down to a single sentence in your prompt.
Before: "Summarise this family assessment and identify the key risks."

After: "Summarise this family assessment. Identify risks AND strengths. Flag any language that carries racialised, gendered, or class-based assumptions. Note whose perspective is centred and whose is missing."
This is the core argument:
AI does not do anti-oppressive practice. Practitioners do. But if we adopt AI tools without building AOP into every prompt, every workflow, and every governance framework, we are automating the very biases our profession claims to challenge.
Anti-oppressive practice with AI is not about rejecting the technology. It is about using it with the same critical awareness we bring to every other tool in social care.
Every prompt should include an explicit instruction to consider power, identity, and structural context. If you do not ask the AI to look for oppression, it will not look. That is not a flaw in the technology. It is a feature of how language models work.
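For teams that build their own tooling, this discipline can be enforced in code rather than left to memory. The sketch below is illustrative only: the helper name, the wording of the instruction, and the idea of a shared wrapper are assumptions for this example, not an established template.

```python
# Hypothetical helper: appends a standing anti-oppressive practice (AOP)
# instruction to every prompt before it is sent to an AI tool, so the
# critical lens is built in rather than remembered each time.
AOP_INSTRUCTION = (
    "Consider power, identity, and structural context. "
    "Flag any language that carries racialised, gendered, or "
    "class-based assumptions. Note whose perspective is centred "
    "and whose is missing."
)

def with_aop(prompt: str) -> str:
    """Return the prompt with the AOP instruction appended."""
    # Normalise the trailing full stop, then add the standing instruction.
    return f"{prompt.rstrip('.')}.\n\n{AOP_INSTRUCTION}"

# Example: a bare summarisation task becomes an AOP-aware prompt.
print(with_aop("Summarise this family assessment and identify risks and strengths"))
```

Routing every prompt through one wrapper like this also gives governance a single place to audit and update the standing instruction.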
Before using any AI output, ask: whose perspective is centred here? Whose experience is absent? Does this language carry assumptions about race, class, gender, disability, or culture? Would I write this myself, or am I letting the machine write my values?
Organisation-level AI policies must name anti-oppressive practice as a governance requirement, not an aspiration. Prompt templates, review checklists, and audit processes should all embed AOP as a non-negotiable standard.
AI literacy without anti-oppressive literacy is dangerous. Practitioners need to understand both how the technology works and how power operates within it. That is what TESSA Training is built to do.
The conversation about AI in social care has so far been dominated by two camps: enthusiasts who see efficiency gains and sceptics who see risk. Both camps are missing something fundamental.
AI is not neutral. It is trained on data that reflects existing power structures. When social care teams use AI tools without an anti-oppressive framework, they are not saving time. They are scaling bias. They are producing assessments, care plans, and recommendations that sound professional and read as objective but carry the same structural assumptions that anti-oppressive practice exists to challenge.
The fix is not to avoid AI. The fix is to treat anti-oppressive practice as a core AI competency. Every prompt a practitioner writes should explicitly ask the AI to consider power, identity, and structural context. Every output should be reviewed through an AOP lens before it goes anywhere near a person's life. Every organisational AI policy should name anti-oppressive practice as a governance requirement, not a value statement.
This is not about adding a tick-box to an existing process. It is about recognising that AI adoption without AOP is not just incomplete. It is harmful. And the social care sector, of all sectors, should know better.