AI Can’t Fix Inequity—But It Will Expose It
When people in our sector talk about AI, the conversation often starts with efficiency. How can it help us write faster, reach more people, or stretch our limited capacity? Those are fair questions. But they can also distract from a deeper one: What does AI reveal about us?
AI isn’t just a tool—it’s a mirror. It reflects the values, systems, and inequities we’ve already built. For nonprofits, that means our adoption of AI will expose whether we’ve centered equity in our work—or merely referenced it in passing.
The Problem Beneath the Promise
AI’s appeal is easy to understand. We’re all operating with tight budgets, lean teams, and growing expectations for output. But too often, adoption skips ahead of reflection.
A 2025 sector scan by Meenakshi (Meena) Das and Michelle Flores Vryn, CFRE, found that while most nonprofits are curious about AI, only a small fraction have governance policies in place. Awareness is rising, but equity practice is declining.¹
This isn’t just a resource issue—it’s a mindset one. Many nonprofits assume ethical governance is out of reach because they don’t have an IT department or legal counsel. But waiting for perfect capacity means surrendering decisions about equity to others.
Even small choices—how we store data, which prompts we use, whether we disclose AI assistance—shape power. They determine whose voices are heard and whose are erased.
Where Inequity Hides
Bias doesn’t start with algorithms. It starts with us.
Data used in AI tools often underrepresents the communities nonprofits serve: people of color, rural residents, older adults, and those outside major English-speaking economies. When we feed that data into AI systems, we replicate the same exclusions that made our work necessary in the first place.
Researchers at Stanford and MIT have shown that AI systems trained on unbalanced datasets regularly misidentify people with darker skin tones, reinforce gender stereotypes, and underperform in non-Western contexts.²
In philanthropy, a report by the Center for Effective Philanthropy found that while funders are exploring AI, few are addressing governance or equity directly.³ Without those conversations, the risk is that efficiency becomes the metric—and equity becomes optional.
Building Ethical Infrastructure
We can’t solve bias with goodwill alone. What nonprofits need isn’t just tools—it’s ethical infrastructure.
Ethical infrastructure is the scaffolding that helps us use technology in line with our mission. It doesn’t have to be complicated. Stanford’s Institute for Human-Centered AI and the Stanford Social Innovation Review both highlight three practices any organization can adopt:⁴
- Transparency – Be explicit about how and when you use AI.
- Accountability – Keep humans responsible for outcomes, even when AI assists.
- Inclusion – Involve the communities affected by your data or decisions.
Common Ground’s AI Compact is our version of this infrastructure. It outlines where and how we use AI, how we audit for bias, and how we ensure that final decisions stay human. For smaller organizations, publishing a simple statement of principles—or even including a few sentences in a staff handbook—can achieve the same goal: showing your values before the technology defines them for you.
How to Start—Even Without a Tech Department
You don’t need a “responsible AI lab” to start building ethical muscle. You can begin with five practical habits:
- Map your data. Identify what information you collect, where it’s stored, and who has access.
- Run a bias check. Before using any AI tool, ask: whose voices or experiences might be missing from this data?
- Disclose AI use. Whether in reports, proposals, or communications, name when AI assisted you. Transparency builds trust.
- Pilot with feedback. Test AI tools in one part of your workflow. Ask staff and stakeholders what felt fair—or not.
- Document learning. Capture what worked, what failed, and how you adjusted. That record is more valuable than the tool itself.
Each of these steps strengthens readiness without adding bureaucracy. They shift the question from “Can we afford AI ethics?” to “Can we afford not to?”
A Collective Challenge
The nonprofit sector doesn’t have the luxury of approaching AI from a position of abundance. But it does have something more powerful: moral clarity.
AI is forcing all of us to decide whether equity is just an aspiration—or the ground truth of our work.
If we treat AI as a mirror, not a magic fix, we’ll see clearly where systems are still unjust and where our missions must evolve. That’s the work ahead.
Because the real risk isn’t that AI will replace us. It’s that it will replicate us—exactly as we are.
Endnotes
1. Das, M., & Flores Vryn, M. (2025). AI Equity Project 2025 Report.
2. Buolamwini, J., & Gebru, T. (2018). “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research.
3. Center for Effective Philanthropy (2025). AI With Purpose: How Foundations and Nonprofits Are Thinking About and Using Artificial Intelligence.
4. Stanford Social Innovation Review (2024). Advancing Equitable AI in the U.S. Social Sector.