Transparent Tools, Positive Outcomes: Building Accountability Into Decision-Making
By David Mandel, CEO and Founder, and Ruth Reymundo Mandel, Chief Business Development Officer and Credible Expert, Safe & Together Institute
Artificial intelligence and predictive analytics are often promoted in child welfare as tools to reduce bias and increase efficiency. But what happens when these systems are trained on historical data and standard operating procedures riddled with different forms of bias? When the abusive parent is rendered invisible? When the logic behind decisions is hidden behind proprietary algorithms? The result is a high-tech replication of old harms—one that disproportionately targets and damages the very families and communities these systems are meant to serve.
This final post in our three-part series explores how behavior-based frameworks offer a transparent, evidence-focused alternative to opaque algorithmic tools—especially in cases involving domestic abuse. Instead of black box technologies that erode trust, we advocate for structured, behaviorally focused technology that strengthens practitioner decision-making and improves family outcomes.
The Black Box Problem
One of the most persistent critiques of algorithmic decision-making is its lack of transparency. Families rarely understand how risk scores are generated. Practitioners may not know what data points were used or how they were weighted. Supervisors cannot easily explain why a certain outcome occurred. In some jurisdictions, even lawyers and advocates are denied access to the logic behind the tools.
This is especially dangerous in domestic abuse cases. Because most child protection risk prediction models do not track patterns of coercive control or identify perpetrator behavior as a distinct variable, they often attribute risk to survivors and miss the driving cause of harm. Perpetrators remain invisible. Survivors are often penalized, and data is misused to justify surveillance or family separation.
Perpetuating Error Through Invisibility
The skewed view of domestic abuse in predictive analytics is not just a technical oversight—it is a form of systemic error. When survivors are coded as unstable, noncompliant, or neglectful without the context of their perpetrator's behavioral patterns, the data used to train future models can become more lopsided. The outputs, or risk scores, are then built on inaccurate inputs, creating a feedback loop that disproportionately marginalizes survivors, including those living in poverty, those with disabilities, and those who already distrust the system.
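To make this feedback loop concrete, here is a deliberately simplified sketch in Python. It is hypothetical: the variables, weights, and thresholds are invented for illustration and are not drawn from any deployed child welfare system or from the Safe & Together Model. The model can only see a survivor-linked proxy (recorded service contacts), never the perpetrator's behavior, and each round of flagging generates more records that the next round reads as more risk.

```python
# Toy simulation of a bias feedback loop in a risk-scoring model.
# Hypothetical illustration only: the variables, weights, and thresholds
# are invented for this sketch, not taken from any real child welfare
# system or from the Safe & Together Model.

import random

random.seed(0)

# Each "family" has a hidden cause of harm (perpetrator behavior, which
# the model never sees) and a visible proxy (the survivor's recorded
# contacts with services, which the model does see).
families = [
    {
        "perpetrator_harm": random.random() < 0.3,  # true driver of risk
        "service_contacts": random.randint(0, 2),   # survivor help-seeking
    }
    for _ in range(1000)
]

def risk_score(family, weight):
    """A 'black box' that scores only what it can see: the proxy."""
    return family["service_contacts"] * weight

weight = 1.0
for round_num in range(1, 4):
    flagged = [f for f in families if risk_score(f, weight) >= 2.0]
    # Flagged families are surveilled more closely, generating more
    # recorded contacts -- which the next round reads as more "risk".
    for f in flagged:
        f["service_contacts"] += 1
    # Retrained on its own outputs, the model leans harder on the proxy.
    weight *= 1.5
    no_perp_harm = sum(1 for f in flagged if not f["perpetrator_harm"])
    print(f"Round {round_num}: {len(flagged)} families flagged, "
          f"{no_perp_harm} of them with no perpetrator-caused harm")
```

Nothing in this loop ever measures who caused harm; the score simply grows wherever surveillance grows, so each round flags more families whose only visible "risk factor" is a survivor's help-seeking.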
Unexamined gender bias plays a huge role in this. Take social media surveillance as an example. A 2017 study suggests that many social workers surveil and assess clients through their social media posts. Research also suggests that women and men post in very different patterns (e.g., women posting more about family, relationships, and children), potentially focusing scrutiny on women's parenting rather than men's. When gender bias goes unexamined, it can influence decision-making of any type.
These biases compound existing uneven responses in child welfare. Though the evidence is inconsistent and incomplete, it suggests that mothers who disclose abuse or seek help are often viewed with suspicion. Their protective efforts are rarely documented, while their involvement with the system—or their "poor choices"—is.
Transparent Frameworks Support Accountability
In stark contrast, behavior-based models such as the Safe & Together Model are explicitly designed to bring clarity, fairness, and accountability to child welfare decision-making. Tools like the Perpetrator Pattern Mapping Tool help practitioners document the full range of a perpetrator's behaviors—not just physical abuse but also emotional abuse, coercive control, and interference with parenting.
The tool also supports practitioners in capturing survivors' protective efforts, children's responses and functioning, and the ways that system involvement has helped or harmed families. This information is not hidden in code. It is visible in case notes, supervision sessions, court reports, and safety planning.
Shifting Culture: From Compliance to Critical Thinking
Transparency is more than just a technical feature—it is a cultural shift. Predictive analytics can encourage what scholars call “automation bias”—the tendency to defer to a computer-generated output even when it conflicts with observed evidence. Structured frameworks resist this by reinforcing practitioner critical thinking. They empower workers to ask:
What do I know about this family?
What am I assuming?
How do I center the perpetrator’s behavioral patterns and responsibility?
How do I engage with the survivor as a partner?
These reflective questions are not extras—they are essential tools for accurate, ethical decision-making.
Protecting and Strengthening the Workforce
Predictive tools are often touted as a solution to workforce inconsistency or bias. But in practice, they can do the opposite. Many frontline workers report having to do “repair work”—undoing or explaining flawed recommendations produced by algorithms.
This erodes morale, professional confidence, and clarity. It adds cognitive strain and administrative burden, without necessarily improving outcomes. In contrast, behavior-based approaches support professional development. Behavioral frameworks help practitioners build analytical rigor, clarify their documentation, and explain their decisions in supervision or court.
This scaffolding strengthens—not replaces—critical thinking. It cultivates consistency not through compliance but through coaching, shared language, and values alignment. The relationship—not the tool—is the vehicle for helping families. And workers supported with training, supervision, and practice tools grounded in real dynamics—not abstract risk scores—are more confident, more effective, and more likely to stay in the field.
Effectiveness and Efficiency Require Visibility
When decision-making tools are transparent, they can be examined, questioned, and improved. This is especially vital for communities disproportionately impacted by child welfare interventions. A clear framework allows for external review, supports appeals, and helps families understand what’s happening in their cases.
Moreover, transparency ensures that we are collecting the right data: not just who called the hotline or who accessed services but who caused harm, who acted to protect, and how children were affected. This data tells a more honest story—and it can inform policy, training, and system design that truly centers safety and promotes efficiency and effectiveness.
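As a thought experiment, here is a minimal sketch of what "collecting the right data" could look like in structured form. The field names are hypothetical, invented for illustration; they are not the schema of the Perpetrator Pattern Mapping Tool or any deployed system. The point is the contrast with a bare risk score: every input is human-readable.

```python
# Hypothetical sketch of a transparent, behavior-focused case record.
# Field names are invented for illustration; they are not the schema of
# the Perpetrator Pattern Mapping Tool or any deployed system.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CaseRecord:
    # Who caused harm: the perpetrator's pattern, documented as behaviors.
    perpetrator_behaviors: List[str] = field(default_factory=list)
    # Who acted to protect: the survivor's efforts, made visible.
    survivor_protective_efforts: List[str] = field(default_factory=list)
    # How children were affected: functioning and responses, not labels.
    child_impacts: List[str] = field(default_factory=list)
    # How the system helped or harmed: reviewable by families and courts.
    system_actions: List[str] = field(default_factory=list)

    def summary(self) -> str:
        """Plain-language summary a family, supervisor, or court can read;
        unlike a bare risk score, every input is visible and contestable."""
        sections = [
            ("Perpetrator behaviors", self.perpetrator_behaviors),
            ("Survivor protective efforts", self.survivor_protective_efforts),
            ("Impacts on children", self.child_impacts),
            ("System actions", self.system_actions),
        ]
        return "\n".join(
            f"{heading}:\n  - " + "\n  - ".join(items or ["(none recorded)"])
            for heading, items in sections
        )

record = CaseRecord(
    perpetrator_behaviors=["monitored survivor's phone",
                           "disrupted child care arrangements"],
    survivor_protective_efforts=["arranged safe school pickup",
                                 "sought medical care for child"],
    child_impacts=["sleep disruption reported by school"],
    system_actions=["referral made to housing support"],
)
print(record.summary())
```

Because every entry is a documented behavior rather than a hidden, weighted feature, a family, supervisor, or court can review, question, and correct the record.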
Technology Alone Can’t Fix Practice
Often the push for predictive analytics is framed around a desire for more consistent practice. But consistency should not come at the expense of accurate evidence, behavioral context, complexity, or relational trust. We achieve better outcomes when we recognize that investing in workers—through training, coaching, and the right practice tools—is the most powerful way to reduce bias and build a healthier, more stable workforce.
Why This Matters for the Future
As predictive analytics grow more common, the ethical and practical stakes of their use become more urgent. Do we want systems that reduce complex families to risk scores? Or do we want approaches that build trust, honor expertise, and make systemic harm visible and actionable?
Behavior-based frameworks offer that path. They support relational practice. They align with cultural humility. They value transparency and professional judgment. And they elevate—not replace—the role of the practitioner.
If we want truly accurate, effective child welfare systems, we need to design for humanity. Transparent tools aren’t just fairer. They’re smarter.
Our Blog Series in Summary
This blog series argues for a domestic abuse–informed, human-centered alternative to predictive analytics in child welfare. One that makes perpetrator behavior visible. One that sees and supports survivor strengths. One that restores and elevates practitioner judgment. And one that puts relationship—not automation—at the center of safety.
The consistency we seek will not come from algorithms. It will come from a well-supported workforce using clear, evidence-informed tools rooted in reality and relational trust. Because families deserve to be seen, and practitioners deserve to be trusted.