“All models are wrong, but some are useful.”
This is a quote from George E. P. Box that I use pretty much any time I'm talking about the statistical models I create for our major and planned gift fundraising programs. It's nice in that it covers my rear and makes clear that, while our models are helpful and do point us in the right direction, they are not foolproof: we're going to get some false positives and false negatives from time to time.
I found myself reiterating the quote again today as I defended my models to one of our frontline fundraisers.
We had found a couple of prospects who had scored well in the modeling process, and they came to our attention on a wealth screening. From a numerical perspective, these folks looked great! They were predicted to be major donors, and we confirmed that they likely had the assets to be able to make big gifts.
The problem was that, while they looked great on paper, neither of them had any of the hallmark intuitive indicators of a good prospect. In fact, many of their attributes raised red flags for this particular gift officer:
- poor (if any) giving history;
- few (if any) college events attended; and
- one of them had even let the College know back in ’97 that he wasn’t too fond of us.
This gift officer had a pretty hard time wrapping his head around the idea that he should even try to see this person.
I did my best to sell the model: “It takes a lot of non-intuitive things into consideration, so it catches things we’d never think of!” and “We do have some major donors who match some of these criteria, so it’s not entirely out of the question that he could be a major donor!” But ultimately, I know he wasn’t convinced.
So I was left in an unresolved quandary this afternoon: what do we do about people who score very high on predictive models, but who look terrible according to all the traditional “good prospect” attributes? We can’t just write them off, because then we’re essentially throwing out the model because it doesn’t match our fundraising paradigm. (And this is precisely one of the key benefits of statistical modeling: it brings to our attention those people we wouldn’t think to find on our own.)
On the other hand, don’t instinct and experience play a role in the prospecting process? I truly think there is a place for both science and art in prospect research, so shouldn’t we embrace this notion and let these prospects get vetoed, so we avoid wasting staff time and energy on people who probably won’t pan out? (And for the record, if it weren’t for the models, I’d discount these prospects entirely – they had few redeeming qualities as potential donors. This gift officer had a pretty good point.)
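To make the quandary concrete, here is a minimal sketch of the situation. All of the field names and thresholds below are invented for illustration; the idea is simply that, instead of auto-rejecting them, we could flag prospects where the model and the traditional "good prospect" paradigm disagree and route them for discussion:

```python
# Hypothetical prospect records; field names and values are invented for illustration.
prospects = [
    {"name": "A", "model_score": 0.92, "gifts_to_date": 0, "events_attended": 0},
    {"name": "B", "model_score": 0.88, "gifts_to_date": 12, "events_attended": 5},
    {"name": "C", "model_score": 0.35, "gifts_to_date": 1, "events_attended": 0},
]

def looks_good_on_paper(p, threshold=0.8):
    """High predictive score from the statistical model."""
    return p["model_score"] >= threshold

def fits_traditional_paradigm(p):
    """Hallmark intuitive indicators: some giving history, some engagement."""
    return p["gifts_to_date"] > 0 and p["events_attended"] > 0

# "Ugly ducklings": the model says yes, the traditional indicators say no.
# These get flagged for human review rather than written off.
ugly_ducklings = [
    p["name"] for p in prospects
    if looks_good_on_paper(p) and not fits_traditional_paradigm(p)
]
print(ugly_ducklings)  # → ['A']
```

The design choice here is the point: the disagreement itself becomes a category, so neither the model nor the gift officer's instinct silently overrides the other.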
I’m not sure what my conclusion is yet, but I lean towards at least giving these ugly duckling prospects a shot. Sure, I’m not the one who has to make the cold call or sit in these folks’ living rooms, but it seems like we should at least make a respectable effort.
Now if I can just convince this gift officer…