It is a new year—one already marked not only by promise but also by conflict and uncertainty.
One source of uncertainty for the Indiana not-for-profit sector and beyond is how artificial intelligence will revolutionize the giving experience.
In my work, I have already attended several convenings where concerns about technological change dominate the conversation.
It is also important to note that this technological change is taking place amid economic volatility, polarization and declining institutional trust. Against that backdrop, many have raised alarms that AI might worsen inequality, widen social disparities and fuel misinformation.
In the past year, we have witnessed the widespread adoption of ChatGPT, with a monthly user base exceeding 100 million. The growth of ChatGPT and other forms of AI is reshaping how we work—not in some distant time but now.
Many questions exist. Some economists have noted the productivity gains that are possible with AI. Others have raised alarms about the potential for massive job losses and displacement.
One possibility for AI is its potential to reduce workloads, improve accuracy and enhance the efficiency of employees’ time in the not-for-profit sector. Observers predict not-for-profits can use AI to enhance effectiveness by creating personalized donor engagement strategies, finding new donors and engaging current donors. Not-for-profits have also started experimenting with AI-powered technologies like ChatGPT for fundraising appeals, proposal writing and grant reporting.
At the same time, only some charities have the resources and skills to keep up with the rapid pace of technology, and many are cautious about adopting new tools.
There is also a range of concerns about AI's impact on outcomes for not-for-profits, including the risk of reinforcing inequality. Additionally, there is anxiety about allowing computer algorithms to perform functions traditionally handled by professional staff, along with the implications for misinformation and bias. Many detractors have specifically emphasized how AI can exacerbate discrimination by reinforcing bias at scale. Finally, despite the measurable benefits, the integration of AI also presents new challenges in terms of ethics and privacy.
What can not-for-profits and funders do to navigate technological change while minimizing ethical concerns and bias? Here are three ideas.
First, funders and not-for-profit leaders can commit to learning as much as possible about AI and related areas of technological change and to understanding its implications for their organizations. The FundraisingAI Collective is leading the way in these discussions.
Second, not-for-profits or funders that are already adopting AI should establish responsible frameworks and processes that can check for and mitigate bias. Possibilities include regular audits and third-party, human-centered evaluations.
As James Manyika of Google and the McKinsey Global Institute has noted, AI has many potential benefits for society. However, this potential can be realized only if people have confidence in these systems to produce unbiased results.
Third, not-for-profits must deploy AI and other new technologies in a way that builds trust. Privacy, transparency and accountability concerns must be addressed. Securing donor information and privacy ensures not-for-profits can deepen and broaden personal relationships with donors.
In this era of unprecedented change, AI should be designed to help not-for-profits achieve their mission and improve their effectiveness—but in a way that upholds the trust their donors have placed in them.•
Osili is professor of economics and associate dean for research and international programs at Indiana University Lilly Family School of Philanthropy. Send comments to firstname.lastname@example.org.