
    Foundations Love AI. Their Grantees Are On Their Own.

    By Craig Bowman · 8 min read

    AI is already inside philanthropy.

    It just hasn’t reached the places where power and money decisions get made.

    Foundations have gone from curious to committed in under two years, often without clear rules or meaningful support for the nonprofits they fund. [1]

    AI Is Now Normal in Foundation Back Offices

    The Technology Association of Grantmakers' (TAG) 2024 State of Philanthropy Tech survey found that 81 percent of foundations report some degree of AI use; only 19 percent say no one on staff is using it at all. [1]

    Most of that use is deliberately boring: taking meeting notes, transcribing conversations, drafting emails, summarizing board dockets, and cleaning up reports.

    TAG’s breakdown shows that AI is used heavily for transcribing and drafting, not for deciding who gets funded. [1]

    So AI is no longer an experiment on the side. It is quietly woven into the daily office work of philanthropy.

    But despite all the hype, very few foundations are letting AI near the “who gets money” question.

    Candid’s 2025 Foundation Giving Forecast survey asked whether foundations use generative AI to screen applicants or help decide whom to fund. Ninety-seven percent said “no.” [3]

    When asked about the near future, 65 percent still said they do not plan to use AI in grant decisions, while roughly one-third either expect to, are considering it, or are unsure. [3]

    The real question is not “Will AI screen applications?”

    It is “Who will design the systems, and whose values will they encode?”

    The Governance Vacuum

    This is the part of the equation that should make any foundation board nervous.

    TAG’s survey found that while 81 percent of foundations use AI, only about 30 percent have an AI policy, and 63 percent have neither an AI policy nor an AI advisory committee. [1]

    Data security and privacy top the worry list, followed closely by misinformation and inaccurate outputs. Roughly half are concerned about bias. More than half say their staff lack the expertise to make sense of AI. [2]

    Almost two-thirds of organizations say that none or only a few staff have a solid understanding of AI and its applications, and most foundation boards rarely discuss AI at all. [2]

    Harvard University President Alan Garber has warned that “an excessive aversion to risk is a risk in and of itself.” [8] Right now, many foundations’ approach to AI fits that description.

    A few funders, like the MacArthur Foundation, have published explicit AI policies, but they are still outliers. [6]

    Nonprofits Are Using AI, Too. They’re Doing It With Far Less Support.

    Nonprofits are not waiting for funders to bless AI.

    The Center for Effective Philanthropy (CEP) finds that almost two-thirds of nonprofits are already using AI. Among those users:

    • 84 percent use it for communications

    • 63 percent for internal productivity

    • 61 percent for development and fundraising [2]

    When you ask what nonprofits actually need from foundations, the answers are concrete: 68 percent want general AI education for staff, 63 percent want funding for AI tools, 55 percent want technical training like prompt engineering, and half want resources on how AI affects the communities they serve. [2]

    Now the punchline:

    Nearly 90 percent of foundations do not offer any financial or non-financial support specifically for grantees’ AI implementation.

    Only about one in ten provides any AI implementation support at all, and most of that is incidental rather than strategic. [2]

    Funders are convinced AI will reshape the sector. They're investing in their own AI capacity.

    But they’re barely investing in their grantees’ ability to keep up.

    That gap is not a technology problem.

    It is a power problem.

    As Diane Yentel of the National Council of Nonprofits has said, “Our silence won’t protect us. If there’s protection to be had for our sector, we’ll find it through visibility and solidarity.” [8] Leaving grantees to figure out AI on their own is one more form of that silence.

    Equity and Power Are the Real Stakes

    CEP defines “equitable AI” as AI that promotes fairness, inclusivity, and justice, especially for historically marginalized communities. Very few foundations are there yet. Only about 10 percent provide any AI implementation support to grantees, and roughly half of those say they are not intentionally funding equitable AI. [2]

    Nonprofit leaders, especially those serving low-income, immigrant, BIPOC, and LGBTQ+ communities, are clear about the stakes. They worry about:

    • Biased algorithms that replicate existing discrimination

    • AI tools trained on data that exclude their communities

    • Efficiency gains that come at the cost of voice and dignity [2][5]

    Philanthropy has a unique role in mitigating bias, ensuring nonprofit representation in AI governance, and funding public-interest AI that is not controlled by a handful of tech firms. [5]

    So the core question becomes: Will AI in philanthropy be used mainly to optimize existing grantmaking, or will it be used to redistribute data, insight, and power toward the communities that philanthropy claims to serve?

    What Forward-Looking Funders Are Doing Differently

    The good news is that there is a growing playbook for funders who want to move past pilots and into responsible, equity-centered practice.

    A few examples:

    • Adopting explicit AI policies and guardrails. MacArthur’s policy is one model, and TAG and Project Evident’s work on responsible AI in philanthropy offers a human-centered evaluation rubric grounded in transparency, accountability, and equity. [1][6][7]

    • Using AI-specific assessment rubrics for grants. Project Evident urges funders to evaluate AI proposals on technical robustness, fairness, safety, and community impact, rather than novelty or efficiency alone. [7]

    • Budgeting for grantee AI capacity. Some funders are starting to fold AI-related tools, training, and technical assistance into general operating or capacity-building grants, rather than treating them as add-ons or nice-to-have pilots. [2][7]

    • Joining public-interest AI coalitions. Foundations have begun pooling money and influence to back AI for the public good, from democracy-focused AI coalitions to public-interest AI research funds. [5][7]

    These are still early moves, and they mostly involve larger, better-resourced institutions. Smaller foundations remain more cautious, often seeing AI as important but unsure where to start.

    Five Questions Every Foundation Should Be Asking Right Now

    If you sit inside a foundation, this is the moment to stop treating AI as an internal efficiency project and start treating it as a strategy and power question.

    Here are five blunt questions to put on the next board agenda:

    • Where is AI already in our institution? Not in theory, in reality. Who is using it, for what tasks, and with what data?

    • Do we have a clear AI policy that staff actually know about? Or are we relying on unspoken norms and “do whatever Legal won’t yell about”?

    • What is our stance on grantees using AI? Do we have any written guidance? Are we screening for AI-generated content without telling people? CEP’s data suggest nonprofits are already nervous about this. [2][3]

    • How much are we investing in our own AI capacity vs. that of our grantees? Pull the numbers. If the ratio is wildly skewed toward internal spend, name it.

    • Where are community voices in our AI discussions? Are frontline organizations, especially those led by people of color, shaping our AI policies and pilots, or just being asked to adapt to them after the fact?

    AI is not optional at this point. Philanthropy is already using it, funding it, and being reshaped by it. The question is whether foundations use AI to shave hours off staff workloads, or to build a funding system that actually shares power with the communities it claims to serve.

    Grantees are already experimenting. Communities are already living with AI systems designed far from their realities. As Phil Buchanan writes in CEP’s latest message to funders, “This is not the time to look away, or to put your head down. This is not the time to hide.” [8]

    It is time for philanthropy to stop treating AI as a back-office tool and start treating it as a core part of its accountability frameworks.

    If your foundation is trying to figure out what responsible AI adoption looks like for your team and your grantees, I'd be glad to compare notes.


    Notes

    1. TAG – 2024 State of Philanthropy Tech survey

    2. CEP – AI With Purpose report

    Direct PDF: https://cep.org/wp-content/uploads/2025/09/CEP_AI_Layout_FINAL.pdf

    3. Candid – Will foundations soon use AI to screen grant applications?

    4. Bonterra / NonProfit PRO – Funders Back AI for Philanthropy, Urge Responsible Use

    5. Nonprofit Quarterly – How Philanthropy Can Lead in Building Ethical AI for the Public Good

    6. MacArthur Foundation – Policy on the Use of Artificial Intelligence

    7. Project Evident – Funding the Future: Grantmaker Strategies in AI Investment

    Direct PDF: https://projectevident.org/wp-content/uploads/2025/03/Funding-the-Future_-Grantmaker-Strategies-in-AI-Investment.pdf

    8. Phil Buchanan, “A Message for Philanthropy in the 2025 Giving Season,” Center for Effective Philanthropy

