When AI Thinks for Humans, Children Die: Why ChatGPT & Domestic Violence–Blind AI Cannot Be Safely Used in Child Welfare or Social Work
By Ruth Reymundo Mandel, Chief Business Development Officer and Credible Expert, Safe & Together Institute
A March 2025 study by the Columbia Journalism Review revealed a staggering failure rate among popular AI tools: over 60% of responses were factually incorrect when chatbots were asked to identify basic information about articles, such as the headline, publisher, date, and URL. This wasn’t just a technical glitch; it was a wake-up call.
Now imagine handing those same tools to social workers, caseworkers, and family court evaluators—professionals making life-altering decisions for children and families, many of whom are navigating the complex realities of domestic abuse. This is terrifying.
In high-risk sectors like child welfare, domestic abuse response, and public health, AI hallucinations aren’t just inconvenient; they are life-threatening. They risk:
Reinforcing systemic bias
Misidentifying danger and risk
Amplifying harmful narratives that sideline protective parents and mislabel trauma responses
These risks are not hypothetical. They’re systemic. As highlighted in CW360°’s Spring 2025 issue on “The Evolving Role of Technology in Child Welfare,” the stakes are exponentially higher in frontline social care systems. Child welfare professionals operate in high-pressure environments where time is short, documentation is dense, and decisions change lives. If we fail to embed ethical, domestic abuse– and survivor-informed guardrails into these technologies, we risk automating harm, entrenching blind spots, and scaling injustice.
And let’s be clear: the blind spots are glaring. Research consistently shows that domestic violence is a central factor in child welfare cases.
Yet, the majority of AI tools developed for child welfare and social services—including predictive analytics systems piloted in jurisdictions like Allegheny County and Illinois—do not meaningfully integrate domestic violence as a risk factor. Even worse, they typically:
Fail to distinguish between perpetrator and survivor behaviors
Miss the pattern of coercive control
Leave perpetrating parents invisible in their analysis
Reduce protective parents’ actions to risk factors, especially when those parents are mothers, survivors, or members of historically marginalized groups
This isn’t just a technical oversight. It’s a structural danger. AI that doesn’t understand patterns of coercive control is AI that risks punishing the protective parent and rewarding the perpetrator. It’s AI that can mistake post-separation abuse for “co-parenting conflict.” It’s AI that can’t tell the difference between a survivor documenting abuse and a so-called “high-conflict” case. And it’s AI that may recommend removing children from their safe parent, based on flawed logic and outdated assumptions baked into the data.
This is no longer speculative. It’s already happening.
In Australia, an investigation by the Office of the Victorian Information Commissioner found that a child protection worker used ChatGPT to draft a court report: a Protection Application outlining risk to a child. The chatbot described a doll belonging to a sexually exploited child, one the father had used for sexual purposes, as a “notable strength” of the parents. This was not a simple error. It was a complete failure of risk identification and narrative accuracy, and it ended up in a legal document. The response? A government-wide ban on ChatGPT in child protection settings, citing grave privacy and safety concerns.
This incident is a case study in shadow AI: under-resourced, overwhelmed workers turning to unapproved tools to cope with unrealistic expectations. And who can blame them? When systems are underfunded and workers are drowning in paperwork, the appeal of fast automation is understandable.
But here’s the lesson: the answer is not banning technology; it’s building the right kind of technology. The presence of shadow AI means the sector urgently needs safe, rapid, domestic abuse–informed, practitioner-informed alternatives. If we don’t meet that need, dangerous tools will continue to fill the void, and children will die.
AI must be:
Trained by Subject Matter Experts (SMEs): Not just software engineers or data scientists, but people with real, field-tested expertise in domestic abuse–informed practice, child welfare, trauma, and coercive control.
Quality-Assured by End Users: Social workers, case managers, supervisors, legal professionals, and frontline practitioners should not just test AI tools—they should co-create them. If they wouldn’t trust it with a real family, we shouldn’t trust it at all.
Quality-Assured by Survivors and Cultural Experts: Survivors and community-based experts—especially those from marginalized communities—must be embedded in AI design teams from day one. Not as an afterthought, but as critical architects of harm detection and ethical review.
Rooted in Proven Practice, Not Predictive Guesswork: AI that tells professionals what to do is dangerous. AI that supports critical thinking, maps gaps in perpetrator behavior documentation, and documents protective actions by survivors is ethical and effective.
We need tools that mirror the Safe & Together Model—tools that start with perpetrator pattern–based assessments, that prioritize survivor partnership, and that place children’s lived experiences at the center of risk assessment. Anything less puts families in harm’s way.
AI can be transformative, but only when it is developed and governed with transparency, survivor engagement, and unwavering fidelity to ethical standards. As the CW360° report makes clear, child welfare deserves nothing less than trustworthy, inclusive technology that serves—not replaces—professional judgment.
If AI cannot identify the perpetrator, recognize coercive control, or respect survivor-protective parenting, it has no business in child welfare or social work decision-making.
Let’s build the future of ethical AI together—one survivor-informed, practitioner-guided tool at a time.
Additional Resources
Safe & Together Institute’s domestic abuse–informed trainings
Safe & Together Institute’s upcoming events
David Mandel’s book Stop Blaming Mothers and Ignoring Fathers: How to Transform the Way We Keep Children Safe from Domestic Violence