By: Melissa Bank Stepno, President & CEO
Artificial Intelligence (AI) has become one of the most talked-about topics in recent years. Rapid advances in generative AI, such as the release of tools like ChatGPT in late 2022, have accelerated conversations across industries. But here’s the truth: AI is much broader than these headline-grabbing tools, and many of us have been interacting with AI for years without realizing it.
What Is AI, Really?
There’s no single, universally accepted definition of AI. Ask ten people, and you’ll likely get eight-and-a-half different answers. At its core, and the way I like to think about it, AI refers to technology that enables machines or software to “think like humans.” The ultimate goal is to perform tasks faster, more accurately, and more robustly than humans can.
Historically, AI systems were programmed by humans and performed only the specific tasks they were built to do – though often at a more complex level, or faster, than a human could. Modern AI, by contrast, leverages computing power to learn and improve on its own, without direct human involvement.
AI as an umbrella term includes:
- Machine Learning (ML) – systems that learn from data patterns and frequently use algorithms to make predictions or decisions about future behavior/events.
- Natural Language Processing (NLP) – enabling machines to understand and process human language, in the form of text or speech.
- Generative AI – tools that synthesize, summarize and create new content, whether text, images, or music, based on existing content.
So, in some ways, machine learning and natural language processing are mechanical and facts-driven, producing results exactly as they are calculated by the AI tool. On the other hand, generative AI is more creative and opinionated, making assumptions and drawing conclusions based on the inputs that it receives.
Historical Perspectives
The term AI was first introduced in the 1950s, but at that time, it was acknowledged that computing power wasn’t sophisticated enough to truly achieve “artificial intelligence.”
Fast forward to the 1990s when IBM built a tool called Deep Blue that was able to defeat a human chess champion. This is frequently considered one of the first “real” achievements in artificial intelligence.
Fast forward just a bit more to the 2000s, and like it or not, the prospect development field has been using AI ever since. How can this be? Even the simplest of Google searches incorporates the use of some AI via the algorithms that Google uses to produce your search results.
Many of our other common research tools employ AI as well: wealth screenings that use algorithms to provide scores and ratings, predictive models that are created to suggest future major gift prospects, the order of the posts in your LinkedIn newsfeed – all incorporate AI.
When I hear blanket questions like: “should we use AI in prospect development?” or “is AI allowed at your organization?” or “is it appropriate to use AI in our work?” I admittedly cringe a bit because all of us – every single one of us reading this article – have been using AI for years.
Current Events
A lot of hype today is about the new generative AI tools that are exploding onto the market. And, somehow, when the generic term “AI” is thrown about, it is frequently referring specifically to generative AI, not the entire spectrum of artificial intelligence.
Because of this, when I am asked about “AI,” I immediately seek to clarify what the person I’m interacting with specifically means.
There’s also a lot of controversy around AI, much of it rooted in the political and societal implications of it. These include concerns about regulation, potential workforce disruption, potential bias and misinterpretation of AI tools, and the broader impact these technologies may have on public trust.
Against this backdrop, prospect development professionals are being challenged to move beyond the hype and fear and toward informed, intentional decisions. The most important questions are no longer about whether prospect development should utilize AI, but about when and how it fits into our work.
Using Generative AI in Prospect Development
So, let’s talk about the elephant in the room then: when and how should we be using generative AI in prospect development?
As you can imagine, in my role at HBG I interact directly with fundraising professionals at a lot of different organizations. I also follow listservs like PRSPCT-L (aka: the Apra Exchange), keep up with related posts on LinkedIn, read articles in industry publications such as the Chronicle of Philanthropy, and generally try to stay up to date on the ‘chatter’ around this topic.
Well folks, here’s the truth: the jury is still out on this one.
Some organizations, and prospect development professionals by extension, are leaning into generative AI for its ability to do what it sets out to do: perform tasks faster, more accurately, and more robustly than humans can. Some examples include:
- Building an initial prospect profile
- Creating a template that can be used for prospect management tracking purposes
- Summarizing news articles that mention a specific organization’s name and suggesting the tone and tenor of the results
- Indexing contact reports for donor themes across multiple gift officer portfolios
Other organizations, and prospect development professionals by extension, are adamantly against incorporating generative AI into our work, typically citing data confidentiality and the lack of accuracy in the data as the two primary concerns. Let’s look at each in turn:
Data confidentiality
Generative AI tools are designed to improve their responses over time by learning from the information they receive. This means that if you enter confidential donor data into one of the “open” tools available on the market, you are effectively sharing that information with the system itself — the underlying engine — which may then incorporate it into its ongoing learning process.
Beyond the ethical concerns with this, it’s important to recognize that much of the confidential data used in prospect research is also subject to data protection laws, such as GDPR, FERPA, HIPAA, CCPA and the myriad state-level privacy laws that are popping up across the country.
It’s also important to note that “closed” generative AI systems can offer a safe environment that doesn’t expose sensitive data to external platforms. A “closed” system is one that is designed with strict data governance and privacy protections. In practical terms, “closed” systems are typically available only when an organization subscribes to a platform designed for this purpose.
Data Accuracy
- Data Quality: with any AI application, not just generative AI, there is an old adage: garbage in, garbage out. What this means is that if you put data of poor quality or incomplete data into an AI tool, the results of whatever it is you are trying to accomplish with AI will be flawed, incomplete and sometimes entirely misleading. Since the primary data source for most generative AI tools is the internet, and since we know that not everything we see on the internet is accurate, you can see where this can be problematic.
- Data Bias: AI tools learn from the data that they have been previously fed, which means they can unintentionally produce biased responses if the data baked into them is biased itself.
- Hallucinations: often related to bias in the data within an AI environment, hallucinations are instances when a tool produces content that sounds convincing but isn’t actually accurate. Essentially, it is jumping to the wrong conclusion based on faulty assumptions.
- Timeliness: does the generative AI tool have the most up-to-date information on a topic? Maybe, maybe not. If it has only been trained on older data, then the output it provides may be outdated as well.
- Context: for me, this one – context – is probably the most important. This is also where I get on my soap box and talk about the value that prospect development professionals bring to our industry. Even if generative AI could provide the most accurate data, and even if confidentiality were not a concern, it does not interpret nuance, perform situational analysis, or comprehend the potential consequences of its conclusions. For this, we will always need human intelligence and discernment.
In Closing
In 2024, I published a blog post suggesting that we rename our field from “Prospect Development” to “Prospect Strategy & Fundraising Intelligence.” While the underlying message was sincere, I wasn’t seriously proposing that we adopt that phrase. It is, however, 100% relevant to today’s post.
We are still in the middle of the evolution from Prospect Research, to Prospect Development, to Prospect Strategy & Fundraising Intelligence. We are positioned to be indispensable partners in fundraising success and that will not change.
Generative AI won’t replace humans, but it does have the potential to augment our capabilities.
The potential.
We can’t ignore the bullet train that has already left the station. AI is here to stay and will revolutionize our world. But we also don’t have to board the train, at least not quite yet.
We’re still at the beginning of the change curve, but just as the internet didn’t replace the need for (what was then called) prospect research, neither will generative AI.
And, just as the internet has become ubiquitous, so too will generative AI.
But, until we can trust that it can perform tasks faster, more accurately, and more robustly than humans can, and in a safe, ethical and confidential way, we can’t simply use it as a replacement for the work that we do.