
AI in the Nonprofit Sector: Promise, Pitfalls, and Practical Guidance

  • Writer: Jacquelyn Davis
  • Mar 15
  • 5 min read

Written by Jacquelyn Davis, Managing Partner, Volution Advisors; edited with AI assistance


Over the past year, nearly every nonprofit leader we speak with raises the same question:


What should we actually be doing about AI?


Like many leaders, I think about ethics, accuracy, and how AI can truly serve mission-driven organizations. I worry about the loss of human connection and critical thinking. 


Call me a dinosaur, but a year ago I did not even understand AI, much less use it. Today, I enlist its help at least twice a day, usually more, and I jump across AI platforms depending on the need. From email reviews to graphics for invitations to quizzes that help my son prepare for tests, it’s fully part of my daily life.


We cannot ignore it – AI is here, and it’s gaining traction every day. Research suggests that nonprofits are adopting AI rapidly. A 2026 benchmark study by Virtuous and Fundraising.AI found that 92% of nonprofits report using AI in some form, though most are still experimenting with early use cases. 


The question now is not whether to use it, but how to use it responsibly and effectively, especially in the nonprofit / social impact sector.


How It Helps: The Productivity Opportunity


Organizations report using AI most effectively to reduce routine administrative burdens – not to replace human work. Research suggests that professionals who use AI well report productivity gains of nearly 30% and save significant time on daily tasks. 


The most productive uses tend to fall into a few categories:


1. Administrative efficiency: AI can summarize meetings, draft communications, organize research, and automate reporting, all tasks that consume large amounts of staff time but rarely require deep expertise. AI can also generate first drafts of reports, and if given unpolished original content, it can polish that content into a well-written document.


2. Fundraising and donor insights: AI can assist with prospect research, analyze donor patterns, draft grant proposals, and develop communications. Some studies show improvements in fundraising revenue when AI assists with data analysis and donor targeting. This support can save time for development teams.

 

3. Data analysis and program learning: Nonprofits frequently collect large amounts of data but sometimes lack the capacity to analyze it fully. AI tools can help synthesize large datasets and surface trends that support better decision-making (a short illustrative sketch follows this list).


4. Knowledge sharing and collaboration: AI tools can help organizations organize institutional knowledge, enabling teams to retrieve information faster and collaborate more effectively. When nonprofit professionals use AI strategically as a collaborative tool, they report meaningful gains, roughly 30% in one study.
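To make the data-analysis point concrete, here is a minimal sketch in Python using the pandas library of the kind of aggregation step that can precede an AI-assisted analysis. The file name gifts.csv and the columns gift_date and amount are assumptions for illustration only; the idea is that a team shares the resulting monthly summary with an AI tool to ask about trends, rather than uploading raw donor records.

# Illustrative sketch only: roll individual gift records up into monthly
# totals before asking an AI tool to interpret the trend. The file name
# "gifts.csv" and the columns "gift_date" and "amount" are assumptions.
import pandas as pd

gifts = pd.read_csv("gifts.csv", parse_dates=["gift_date"])

# Aggregate to monthly totals and gift counts; this summary, not the raw
# donor-level rows, is what gets shared for AI-assisted interpretation.
monthly = (
    gifts.groupby(gifts["gift_date"].dt.to_period("M"))["amount"]
         .agg(total="sum", gifts="count")
)

print(monthly.tail(12))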


When deployed well, these applications free staff to focus on the work that matters most: relationships, strategy, creativity, and community engagement.


How It Could Hurt: The Risks Nonprofits Must Navigate


Despite its promise, AI introduces real risks—particularly for organizations serving vulnerable populations.


One of the most striking findings in current research is the gap between adoption and governance. Many nonprofits are already using AI tools, yet only a small fraction have formal policies guiding their use. Some estimates suggest that fewer than 10 percent of nonprofits have established clear AI governance policies, leaving organizations exposed to ethical and operational risks. 


Many organizations are struggling with how to infuse AI into their work, set guidance, and train their teams to use it well.

 

The key challenges we see include:

  • Bias and equity concerns: AI systems learn from historical data, which can contain embedded social biases. Without careful oversight, these tools may unintentionally reinforce inequities in areas such as outreach, eligibility decisions, or communications.

  • Accuracy and reliability: Generative AI can produce confident and convincing but incorrect information. Drafts and “facts” generated by AI must be reviewed thoroughly, particularly when the information is being relied on for grant applications, program statistics, reports, or public communications.

  • Data privacy and confidentiality: For us, this is the biggest risk we see. Many nonprofit datasets include sensitive client and donor information. Uploading this data into public AI systems without safeguards can create serious security and ethical risks.


Many AI systems store prompts temporarily or use them to improve models. If staff enter client records, immigration status, health data, or donor information, that data could be stored outside the organization’s control. For nonprofits serving vulnerable populations, this is especially serious, particularly at this moment. Some further examples of risk include:


  • Case management notes entered into an AI tool

  • Donor lists uploaded for analysis

  • Internal strategy documents shared in prompts

  • Budgets and cash flow data uploaded to create financial models

  • Immigration details entered for client cases


Key best practice: Never enter personally identifiable information (PII) or confidential program data into public AI systems unless the organization has a business account with a secure enterprise agreement.
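For teams with a little technical capacity, even a rough redaction step reinforces this practice before anything is pasted into a public tool. The Python sketch below is a minimal illustration, not a complete or vendor-specific safeguard: the regex patterns and the sample note are assumptions, and names still slip through, which is exactly why human review and the enterprise-account rule above remain essential.

# Minimal illustration: mask obvious identifiers (emails, phone numbers,
# SSN-style numbers) before text is shared with a public AI tool. These
# patterns are assumptions for the example and are NOT a complete safeguard.
import re

def redact(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)            # email addresses
    text = re.sub(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", "[PHONE]", text)  # US-style phone numbers
    text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN]", text)                  # SSN-style numbers
    return text

note = "Client Jane Doe (jane.doe@example.org, 555-123-4567) asked about housing support."
print(redact(note))
# Prints: Client Jane Doe ([EMAIL], [PHONE]) asked about housing support.
# The name is untouched, so a human still needs to review before sharing.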


Strategic drift


The 2026 Virtuous report found that 65% of nonprofits use AI only individually and 81% use AI informally, rather than through coordinated organizational workflows. When AI adoption happens informally, with staff experimenting on their own rather than as part of an organizational approach, knowledge can become siloed and tools may be used inconsistently. This means many organizations are seeing small efficiency gains but not yet realizing the broader organizational benefits that are possible.


Tips and Principles for Responsible and Effective Use


The most successful organizations are approaching AI not as a gadget, but as a strategic capability aligned with mission and values. Several best practices are emerging.


1. Start with purpose, not tools:


Identify where staff time is currently being consumed and where AI could meaningfully improve capacity. Are there administrative or other routine functions that AI could handle?


2. Keep humans in the loop:


AI should support professional judgment, not replace it. Critical decisions about people, funding, or services should always involve human oversight. While AI is a “thinking” technology, it does not replace human critical thinking. Often, the best AI results emerge from smart, thoughtful prompts and even first drafts carefully crafted by humans.


3. Establish simple governance early:


Even small organizations benefit from basic policies covering data privacy, acceptable use, and review processes. 


4. Invest in staff literacy:


Training staff to understand both the possibilities and limitations of AI is essential to responsible adoption. Some team members will be advanced experimenters who come up to speed quickly, while others will be later adopters who are less sure how to use it. To gain real efficiency and efficacy with AI, training the whole team is critical.


5. Protect trust with transparency:


When AI plays a role in communications, analysis, or decisions, transparency helps maintain credibility with donors, partners, and communities. You can always share that a document was created with the assistance of AI.


6. Safeguard data:


Most major platforms now offer opt-out or enterprise privacy protections, but organizations must ensure those settings are in place. Always use enterprise or privacy-protected versions of AI tools when handling organizational information.


7. Verify data and information:


Treat AI output as a draft or research starting point, not as verified information. AI can be a good editor to help you polish a report, but it works best when you give it your own information and an initial draft as a starting point. Double-check everything for accuracy, especially numbers and seemingly convincing “facts.” Use multiple AI platforms, and the old-fashioned Internet, to help confirm information.


8. Prevent bias:


Avoid using AI for automated decision-making about people. Think critically about bias that may be inherent in the AI system. Use AI for analysis or drafting, with human review. Ensure your team is trained to spot bias and address it effectively, so that it does not carry into your work product.


Remember: AI Is a Tool, Not a Strategy


The nonprofit sector has always been defined by its people—by empathy, creativity, and commitment to community. Relationships matter. AI does not replace these qualities. If anything, its greatest value may be helping organizations reclaim time for the human work that only humans can do.


Used thoughtfully, AI can help nonprofits operate more effectively and sustainably. Used carelessly, it risks undermining the very values the sector exists to uphold.


Make no mistake: AI is here. It is therefore critical that each organization determines the role AI will play in its work and how that role aligns with its values. Intentional, trained adoption is essential.


The task ahead is not simply adopting AI, but doing so in a way that strengthens mission, equity, and trust.


 
 
 
