How big should your comprehensive campaign be?

I like to say that feline taxidermy is a multi-solution discipline. When we were trying to land on a reasonable goal amount for our comprehensive campaign, we didn’t have enough information to confidently decide using traditional approaches. With a bit of external data and helpful tools to make sense of it, we found a path forward.

Shortly after I started my current role as AVP for Development, my boss (the VP) casually mentioned an upcoming Campaign Committee meeting.

“What’s our campaign goal, by the way?” I asked.

He paused for a second with a slightly uncomfortable look on his face before replying, “Well… we’re still working on that.”

“Oh. Okay. What about timing? Has the Committee landed on a timeline at least?”

Another pause.

“Ohhhhhhh….” I trailed off. “Well… you hired me to help solve problems, so let me see what I can do.”

I won’t get into the details about what brought my employer to this point, because it didn’t matter. We just needed to make an informed decision about a campaign goal amount, in a relatively short period of time.

Of course, part of the goal-setting process is looking at how much capacity you have in your prospect base, building a pyramid, etc. etc. We did all this, but what we were left with was unreliable. The assumptions were questionable, and we didn’t have enough data to affirm or modify the standard yield rates. (The rule of thumb that seems to prevail is that for every gift successfully closed you need to make three asks, and for every ask, you need to cultivate about three prospects. This is what you can hire an expensive consultant to tell you, anyway…) Maybe the standard yield rates applied, but maybe not; with the information at hand there was no good way of knowing. We needed something else to help guide us.
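To make that rule of thumb concrete, here's a rough sketch of the pyramid arithmetic. The gift tiers and counts below are invented purely for illustration, not from any actual campaign plan.

```python
# Hypothetical gift pyramid using the 3:1 rules of thumb from the text:
# ~3 asks per closed gift, ~3 cultivated prospects per ask.
# Tier amounts and gift counts are illustrative only.

ASKS_PER_GIFT = 3
PROSPECTS_PER_ASK = 3

# (gift size, number of gifts needed at that level)
pyramid = [
    (1_000_000, 2),
    (500_000, 4),
    (100_000, 10),
    (25_000, 40),
]

total_goal = 0
total_prospects = 0
for amount, gifts in pyramid:
    asks = gifts * ASKS_PER_GIFT
    prospects = asks * PROSPECTS_PER_ASK
    total_goal += amount * gifts
    total_prospects += prospects
    print(f"${amount:>9,}: {gifts} gifts -> {asks} asks -> {prospects} prospects")

print(f"Pyramid total: ${total_goal:,}; prospects needed: {total_prospects:,}")
```

Even this toy version shows why the yield-rate assumptions matter so much: nudging either ratio up or down changes the required prospect pool dramatically.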

Back when I was regularly building predictive models, I found that giving behavior was consistently a good predictor of other kinds of giving behavior. What if we bastardized that idea and applied it here? Presumably, the annual fundraising totals leading up to a comprehensive campaign should relate to that campaign’s goal amount. If we figure out that relationship and apply it to our own annual totals, we should be able to generate a reasonable comprehensive campaign goal figure.
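A minimal sketch of what that could look like, with entirely made-up peer figures:

```python
import statistics

# Illustrative sketch of the idea above: estimate the typical ratio of campaign
# goal to pre-campaign annual fundraising at peer institutions, then apply that
# ratio to our own totals. All figures below are invented for illustration.

peer_campaigns = [
    # (avg annual fundraising in the years before launch, campaign goal)
    (12_000_000, 100_000_000),
    (25_000_000, 250_000_000),
    (8_000_000, 60_000_000),
    (15_000_000, 120_000_000),
]

ratios = [goal / annual for annual, goal in peer_campaigns]
median_ratio = statistics.median(ratios)

our_annual_total = 10_000_000  # hypothetical pre-campaign average
suggested_goal = our_annual_total * median_ratio

print(f"Median goal-to-annual ratio: {median_ratio:.2f}")
print(f"Suggested campaign goal: ${suggested_goal:,.0f}")
```

The median (rather than the mean) keeps one outlier mega-campaign from dragging the suggested figure around.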

So, armed with my newly acquired access to CASE’s Voluntary Support of Education (VSE) survey results via AMAtlas Data Miner, I went hunting for data. Between this and the Inside Higher Ed listing of campaigns at colleges and universities, I was able to compile a couple files that I could mash up into something useful. One was a list of campaigns, with their respective goal amount, start date, and end date. The other was fundraising totals of current-use dollars across a range of years for all the schools in the listing of campaigns.
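The mash-up itself is conceptually just a join on institution. Here's a toy sketch with invented institution names and figures, standing in for the two files described above:

```python
# A minimal sketch of the "mash up" step, using plain Python stand-ins for
# the two compiled files. Institution names and figures are invented.

campaigns = {
    # institution -> (goal, start year, end year)
    "College A": (150_000_000, 2010, 2017),
    "College B": (80_000_000, 2012, 2018),
}

annual_totals = {
    # institution -> {fiscal year: current-use dollars raised}
    "College A": {2007: 9_000_000, 2008: 10_000_000, 2009: 11_000_000},
    "College B": {2009: 5_000_000, 2010: 5_500_000, 2011: 6_000_000},
}

# Join on institution, keeping only the years before each campaign's start.
merged = {}
for school, (goal, start, end) in campaigns.items():
    pre = {y: amt for y, amt in annual_totals[school].items() if y < start}
    merged[school] = {"goal": goal, "avg_pre_campaign": sum(pre.values()) / len(pre)}

for school, row in merged.items():
    print(f"{school}: goal ${row['goal']:,}, avg pre-campaign ${row['avg_pre_campaign']:,.0f}")
```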

I put this all together into a Tableau workbook that we could use to explore the data, see what patterns looked like at various other institutions, and play around with the parameters to come up with a possible campaign goal amount.

A version of that workbook is available on Tableau Public. The narrative within the file steps through the thinking that the analysis is based on. Go check it out! (I recommend viewing in full-screen mode.) Take it for a spin and imagine you’re planning a campaign for one of the institutions in the dataset! Explore all the possibilities with the parameters! Sit back and let the beauty of the trellis chart wash over you!

Ultimately, the analysis supported a recommended campaign target that was consistent with the results of the internal analysis. We weren’t stuck with a single approach to the problem, wondering if one errant assumption would lead us to a problematic conclusion. This was a huge relief, and it gave us a lot of confidence (and credibility) in providing a recommendation to the Campaign Committee.

The icing on the cake for me was the fact that I had this snazzy Tableau workbook to show off, which was a really effective tool for illustrating to the Committee how we arrived at our recommendation.

And did I mention that super cool trellis chart???

Understanding Partner Compensation at Goldman Sachs

I recently had the privilege of presenting a “Flash Class” for DonorSearch titled “Understanding Partner Compensation at Goldman Sachs.” Check out the recording here if you are interested.

The deck used in the presentation is also accessible via this link: Understanding Partner Compensation at Goldman Sachs, March 6, 2019.




Doing More with More

As part of “Research Pride Month,” this post is a reflection on why I think Prospect Research (and more broadly, Prospect Development) is so great and why I’m proud to be a part of this field.

If you’re reading this, I’m willing to bet you work in the nonprofit sector. And I’ll double down on that bet and say that you probably also have heard – and have to work under the constraints of – that great old phrase, “doing more with less.”

It’s such a trope for those of us working at nonprofits. When I arrived at my current job, I asked around about where the supply closet was for pens, paper, etc. The answer? “There is none.” We don’t have the luxury of a standing inventory of supplies at our fingertips, because we are not a for-profit organization! These things are only purchased when we actually need them, so “go talk to Mary if you need an order placed.” Of course, I’m one of the lucky ones: folks at much smaller organizations sometimes have to come to work with their own supplies.

So yeah, it’s safe to say you are familiar with, and likely work under, the phrase “doing more with less.”

This obviously makes sense in our sector. Any dollar we spend on things like supplies is a dollar that isn’t going to help fulfill the mission of the organization, and for many of us, a big reason we love working for nonprofits is because of how much we value that mission-driven focus.

Regardless of how much sense this makes, it’s tiring and, at times, discouraging. What we really want for our organizations is to do more with more. This is where Prospect Research comes in.

Prospect Research is the key to doing more with more. It’s all about finding more donors (with more dollars). It’s about helping our fundraisers know more about their donors, so they can tap into more of those donors’ passion for our work. It’s about making our fundraising efforts more efficient, and more effective. The organization that invests in Prospect Research essentially is leveraging donor dollars to secure yet more donor dollars. And I use the word “invest” deliberately: this investment provides the donor a compounded charitable “return” on their gift.

This is what makes me most proud to be a Prospect Researcher. The work that we do makes our donors’ dollars go farther. It transforms their contributions from gifts to investments that will continue to pay a return. (I would even go so far as to say that when an organization does not invest in Prospect Research, it borders on simply being inefficient.) Prospect Research is the key to helping our organizations do more with more.

Special thanks to the brilliant Helen Brown, who conceived of and initiated Research Pride Month. Helen has a great blog, and she has compiled a list of other Research Pride blog posts and writings, which I encourage you to check out!

The Worst Thing about the APRA Conference

Next week, 1,000+ of the smartest professionals in the fundraising industry will convene in New Orleans for the 2015 APRA International Conference.

I love this conference. It is hands-down the best professional development opportunity for anyone working in the Prospect Development field. If you do anything with Prospect Research, Relationship Management, or Fundraising Analytics, you’d be crazy to miss this event. This is especially true if you are new to the field – the New Researchers’ Symposium is simply fantastic. (In the interest of full disclosure, I should say that I’m co-chairing the New Researchers’ Symposium… so please forgive the temporary immodesty…)

All of this said, there is one thing that I dread about conferences: small talk.

Every couple of weeks I have a conversation with the guy who lives across the street from me, wherein we discuss recent weather or local sports teams, about which I must feign interest. This is like the third circle of hell for me. Thankfully, the APRA conference is a place where I can usually engage in more “big talk.” It’s relatively easy to dive in to any work-related topic and have a really meaningful conversation with someone I just met.

Knowing what I know about those of you who will be at the APRA conference, I’m fairly confident saying many of you are very much like me in this regard: pretty classic introverts* who would rather not talk about meaningless garbage with a bunch of strangers, but who can be quite happy doing a deep dive on a topic you’re interested in.

The hard part about this is getting the conversation started. I’m still mostly terrified of striking up conversations with people I don’t actually know, but I’ve accumulated a list of questions I like to use to get the other person talking. I thought I’d share these with y’all in case you find them helpful too.

First, the basics: cliché questions

Nothing wrong with using these, although they are super unimaginative. You may get into a decent conversation eventually, but it might take a bit before you get to anything especially interesting.

  • Where do you work?
  • How long have you been in that role?
  • How did you get started in the field?
  • Which other conferences have you attended?

Let’s move to the next level.

These questions are slightly more interesting, and may open up a respectable conversation. They’re still pretty standard. (They maybe don’t even warrant inclusion in a blog post about how to make small talk better.)

  • What are your favorite resources?**
  • What presentations have you attended so far? How were they?
  • What are you looking forward to learning at the conference?
  • Who is the best speaker you’ve seen, at this conference or at another conference?***
  • What’s your favorite prospect research book?

Now let’s go to eleven.

No risk, no reward. These questions can be helpful in turning the conversation on its head, although you may be regarded as a weirdo if you ask them. (All the more reason to do so, in my book!)

  • What’s your favorite palindrome?****
  • If you could change one thing about this whole conference, what would it be?
  • We’re going to make a movie about this conference. Which celebrities should play whom?*****
  • If APRA had a members-only handshake, what would it be?
  • You can either fly, or you can be invisible. Which superpower do you choose and why?****** How would you use this superpower in your job?
  • If, when you died, you were required to have your body preserved in some fashion (instead of burying [six feet under, at sea, or otherwise] or cremating) what would you have done to it?
  • What dinosaur is most suited to doing your job and why?
  • You have to come up with a new name for APRA using only a color, the name of a wild animal, a b-list celebrity’s name and some sort of gesture. Go.
  • What pirate phrases do you think should be standard in the prospect development lexicon?*******

There you go. You might never be able to tolerate true small talk, but hopefully these brilliant nuggets can help supercharge the interestingness of your conversations at conferences! (And maybe make you seem like a bit of a weirdo in the process…)

* What’s the difference between an introverted researcher and an extroverted researcher? The introverted researcher looks at his shoes when talking to you; the extroverted researcher looks at YOUR shoes when talking to you. (yuk yuk yuk…)
** Hopefully the person you’re talking to doesn’t just respond to this question by shouting “NERD!”
*** Let me know if anyone responds with “Mark Egge. Definitely Mark Egge.”
**** Mine is “Egge,” by the way.
***** Let me know if anyone says that I should be played by Brad Pitt.
****** Yes, I know, I stole this from John Hodgman.
******* Did you really think I would miss an opportunity to make a pirate reference?


Capacity Ratings are Actually Small Sedans

I read a recent blog post entitled “Why Capacity Ratings are Bunk and What You Can Do About It,” (check it out here) which suggested to me that some folks in the Development world are thinking about capacity ratings wrong.

I’d like to start with an analogy. (I love analogies.) Imagine if someone purchased a small sedan, drove it for a while, and then had the following complaints:
– “It will not seat 366 people comfortably!”
– “The car tops out at 80 miles per hour. Why can’t it go 567 miles per hour?”
– “It does not fly. What is the point?”
– “This vehicle is visible. It should be entirely transparent.”
– “Sure, it travels through space, but it should also travel through time, forward AND backward.”

The purchaser of this car, of course, is a lunatic.

I would venture a guess that many (most?) Prospect Researchers have a story or two about how some of the fundraisers and senior leaders they work with are lunatics too.

To be fair, none of us would really claim to know reasonable people who would complain like the purchaser in my example. These are fantastical complaints.

However, a person could strip away the last two requests (invisibility and time travel), and what we’re left with is actually REALLY reasonable: the person wants something that seats 366 people, goes 567 miles per hour, and FLIES.

This description fits a 747 pretty well.

And this, I think, is where we have a problem with capacity ratings. They really are just a small sedan, but if you’ve got someone thinking they will behave like a 747, that person will be disappointed.

In “Why Capacity Ratings are Bunk…” very early on, the author says the “capacity rating is … adjusted to calculate ‘the ask amount.’” Right off the bat he’s talking about a jet airplane, not a small car.

For all the reasons the author lists later in the article – we usually don’t know much of anything about a prospect’s “hidden” wealth, about their debts and liabilities, about their health issues, etc. – it would never be advisable for a researcher to use a rating to calculate “the ask amount.” The information a researcher is able to find can help create some estimates of philanthropic capacity, but certainly not an ask amount. I’d liken this difference to that between a small sedan and a jet plane.

So first, let’s get clear about that: Researchers will never be able to gather – on their own – the information necessary to come up with an ask amount. The Researcher can gather information that helps inform the ask amount, but to come up with a good solicitation number requires information that only a Development Officer or Volunteer can provide.

In other words, Researchers have access to a small sedan, not a 747.

Second, and perhaps more importantly, we need to do a better job of communicating about capacity ratings. If gift officers and senior leadership parade around thinking they can hop into a large passenger jet, and no one makes clear the fact that all we have is a sporty little four-seater, then there’s a very high likelihood that there will be some disappointment. Research staff are responsible for being clear with everyone about what their work does, and does not, mean.

Ultimately, I’d say that capacity ratings are no more “bunk” than a small sedan is a terrible transportation option. They each have their purpose and can be tremendously helpful tools. It is incumbent on the Researcher to be fully aware of how they can legitimately use capacity ratings, and to effectively communicate that legitimate use to fundraisers.

In closing, I’ll mention that the capacity rating definition I prefer to use is something like this: the capacity rating represents a rough estimate of the best possible gift we could get from the prospect, assuming our organization is their top philanthropic priority and that they do not have any other negative factors limiting their ability to give. The rating takes into consideration only the information available to the Researcher.

This definition provides a reasonable explanation of what the rating really is and how we can think about it. On many occasions I have trotted it out to fundraisers to remind them of how they can use the ratings, and in doing so, I’ve made things a lot easier for everybody.

Hiring Prospect Researchers

Every once in a while I hear Prospect Research managers and directors wondering about what interview questions they should ask when hiring a prospect researcher. It’s a fairly specific query. Probably too specific, and it’s tough to answer it well without lots and lots of details.

Trying to find out what “the questions” are to ask when interviewing a potential new-hire prospect researcher is kind of like asking someone “What kind of car should I buy?” and expecting a decent answer. If you’re trying to help someone who poses this question, surely there are scads of questions that come to mind in response: “Well, what do you need it for? Do you have to haul stuff? Are you commuting a lot? Do you have to take lots of people with you? Do you need good gas mileage? Do you want something more stylish?” and on and on.

These questions are the kinds of things I’d ask myself when I was trying to decide what kind of car to buy, and when I’m hiring a prospect researcher, I have a similarly expansive set of questions. But in this case, my questions follow a sort of hierarchy, starting waaaay up at the organizational level, and working my way down to the individual/personal level. Additionally, they address organizational/departmental needs, the distinction between the things we can train a person on vs. how they “are,” and how exactly I can assess the candidate on those things. The questions go something like this:

  • What organizational needs does the Research department meet?
  • How well do the existing Research staff cover those needs? What “gaps” in the department does this new hire need to fill?
  • What specific skills are required to be able to fill those gaps?
  • Which of these skills are actually trainable, and which are difficult/impossible to develop?
  • How will I assess these attributes? (Via the application materials? Through a phone interview? In my in-person interview? Via the in-person interviews they will have with my colleagues?)

Let me walk through each of these questions, using the last hire I made to demonstrate them. In 2011 I was Director of Prospect Research in a small, high-performing Research shop that was part of a mature, sophisticated development operation at a small, private, liberal arts College. I was hiring a Prospect Research Officer, who would be the only other Research staff member. Here’s how I worked through my questions to determine how I would evaluate candidates.

What organizational needs does the Research department meet?

In my shop, we were responsible for traditional biographical research (profiles, new prospect identification and research qualification, etc.), prospect management, and analytics. In addition to handling these specific work areas, one of my priorities was for my shop to have strong relationships with our “clients,” the front-line fundraisers. So these were the areas where I and my new researcher would need to be able to cover all of the bases.

How well do the existing Research staff cover those needs? What “gaps” in the department does this new hire need to fill?

At the time, my biggest need was capacity. I was able to DO all of the different types of things in our department, but I couldn’t handle the volume all by myself. This was especially true with our traditional research and with running the prospect management system. (The analytics piece, on the other hand, was something I could keep up with if needed, though it would be a nice bonus if my new hire could take some of this on as well.)

What specific skills and traits are required to be able to fill those gaps?

Because my needs were in the areas of traditional Research and Prospect Management System facilitation, I basically needed to determine what the specific critical skills and traits are for someone to be able to do that kind of work. This is one area where I’m certain there are varying opinions among Researchers regarding what exactly those are, but for me they include the following:

Traditional Research:
  • Clarifying/identifying the question(s) to be answered
  • Finding information
  • Evaluating information
  • Summarizing/synthesizing/tailoring information
  • Appropriately packaging/delivering information

Prospect Management System Facilitation:
  • Getting information from gift officers
  • Building rapport with gift officers
  • Attention to detail
  • Understanding reporting, basic data relationships
  • Translating between gift officers and the tracking system

This is by no means a comprehensive list, but it helps lay out many of the things I needed to be looking for and evaluating in candidates.

Which of these skills are actually trainable, and which are difficult/impossible to develop?

Some of the skills and traits listed above are easily trainable. Others are more of a stretch (and some might say they are impossible to train). Among the former are things like finding information; evaluating information; summarizing/synthesizing/tailoring information; packaging/delivering information; understanding reporting and basic data relationships; translating between gift officers and the tracking systems. Others are the kinds of things that one might consider “how a person is wired.” These are the kinds of things that can be a lot more difficult to train someone on (in my experience, anyway), and include tenacity/curiosity; building rapport with gift officers; attention to detail. This is all debatable, of course, but I found that I always had a really difficult time training people on these sorts of things. So I would rather hire someone with those traits, and train on the other qualities.

How will I assess these attributes?

Once I know what exactly I’m looking for in my candidate, it’s really helpful to decide ahead of time how I can assess those traits and skills. The hiring process is really challenging in that I don’t have a lot of information to go on in order to make my decisions about a candidate: their application materials are likely no more than a couple of pages, and I probably won’t have more than a few hours of total interview time with them. So it’s critical that I use the time and information I do have as efficiently as possible.

Some of the traits I am looking for will be readily apparent in their application materials. Any typos or errors show they may not be so great with details. Additionally, their cover letter and resume are a perfect sample of how they summarize, synthesize, tailor, package, and deliver biographical information. (This assumes, of course, that they are responding to a well-written job description. Job descriptions could be another blog post entirely in and of themselves!) If they write a rambling, off-point cover letter and include extraneous information in their resume that doesn’t help show me how they are a good candidate for the job, they’re probably not so great at making decisions around what should be included in a profile on a prospective donor. So these pieces of the puzzle can be found through the applicant’s materials, and I don’t need to waste any time in the interviews trying to assess them.

Some of the skills will require some questioning to evaluate. For example, I like to find candidates who are naturally curious and are persistent in their efforts. I might ask them about their hobbies and see if anything like genealogical research or puzzles (e.g., crosswords or sudokus) come up. These kinds of interests tend to go along with curiosity and tenacity. I can ask things like this in a phone interview.

The in-person interviews, with myself as well as with some of my colleagues, are great for evaluating the “soft skills.” A candidate’s demeanor and how well they interact with others can help indicate how they’ll do with rapport building. Additionally, the in-person interviews are a great setting for asking behavioral questions and asking them to work through a role-playing exercise with you. (An example of such an exercise is to have them pretend to be the researcher, gathering information from them for the prospect management system, and you play the role of the gift officer who gives really vague answers to questions that require specificity. How does the candidate go about trying to tease out the necessary information from the gift officer?)

In a lot of cases, we’re hoping to find the “right” questions to ask a candidate. But there really are no “right” questions. Just as in Prospect Research the questions depend on the situation at hand, so too do the questions change in a hiring situation, all depending on the nature of the organization, the Research shop, the staff involved, and the skills and characteristics required. By considering all of these components, we set ourselves on a path to find the best questions and hopefully to hire the best prospect researcher!

Note: This blog post is a joint cross-post, appearing simultaneously here and on the newly unveiled APRA-MN blog.

How to Get Ahead in this C-Average World

Prelude: When I started thinking about this post, I was very much in a cranky-old-man/what-is-wrong-with-people/ “well-if-I-ruled-the-world…” kind of place. But as I thought through it more, it became clear that it presents some real opportunities for managers. So be warned: the post itself follows this kind of a path. I’m starting off kind of ranty here, but stick with me – I hope to make it worth your while!

Lately I have been in more than one situation where someone wasn’t doing something as well as they could have been doing it. This is fine – we all have our limitations, nobody is perfect, and besides, we generally should avoid letting the perfect be the enemy of the good anyway!

The problem, however, was that there was basically no interest in doing it better.

We see this all over the place. Again and again I find myself sitting through less-than-engaging presentations dominated by text-heavy PowerPoint decks; there’s always a terrible data visualization or two floating around the office (heck, check out if you want to see a hall-of-fame of bad visualizations); when was the last time anyone ever got training on how to interview job candidates?; and who knows how many managers basically try to “wing it” when giving feedback, doing annual reviews, delegating, etc. (I go into more detail about this concept in a previous post.)

No matter where you are, people are doing lots of things at a pretty mediocre level, and there doesn’t seem to be a pervasive urge to improve. (When my brother and I talk about the workplace, I often hear him explain this by saying “It’s a C-average world.”)

It’s completely nuts! How can we go to work every day and NOT ask the question “How can I make this thing better? How can I do this better? How can I be even better at whatever it is that I’m doing?”  Why does it seem to be that the default setting for most everyone is to be complacent with the way things are?

Maybe it’s laziness. Or lack of vision. Or maybe even misaligned rewards structures. (“What’s the point in really improving this? I won’t get recognition/a raise/promoted anyway…”) I have no idea what the root cause is. It’s probably some combination of these factors.

Can we do anything to address the problem? Can we find an opportunity to take advantage of the situation in some way?

Yes and yes.

Now don’t misunderstand me: I’m certainly not saying I know how to fix this. I don’t. But as a manager, I can use this understanding to alleviate this problem in my little corner of the workplace and beat “C-average” in the process. Here are two ways to do so:

     1. Hire those people who have a hunger for continuous improvement.

Some people are just kinda lazy; some don’t have a well-developed sense of “vision;” some rely on an extrinsic rewards structure to be motivated to work. Avoid those people at all costs.

Some people – a rare few – are always asking the questions: How can we improve this? How can I do a better job at X? These people have an internal drive to make everything – including themselves – better. They hunger for it; it motivates them. Find these people and hire them.

Not only will these folks help improve everything around them, just as a habit of their being, they likely will also influence others to do the same. Which brings us to my second point.

     2. Make continuous improvement part of the culture.

If you can hire someone who has this mindset, it will be a great step toward getting everyone thinking about continuous improvement. It will move the culture in the right direction.

However, hiring opportunities are preciously rare, and this is too big of a problem to simply wait to address. So we have to mold the clay given to us.

Make continuous improvement part of your culture by setting the expectation that everyone plays a part in making it happen. I have a coworker who requires everyone on his team to come to their weekly meeting with at least one idea of how they can do things better. They all agree that it is important, commit to doing it, and hold one another accountable for it. They have a wildly successful program, and there is no doubt that this plays a major role in that success.

Don’t want to implement this with your whole team/program? Too big of a change for you to try? Fine. Then try having one employee do this: At each of your weekly/monthly one-on-one meetings, require that they come with an idea about how to improve something, anything: a process, a report, some specific skill of theirs, the layout of their desk setup. It doesn’t really matter. You just need to start getting people into the habit of thinking about how things can be better.

What difference will all of this make? Maybe not a ton at first, but its effect will surely be cumulative. Eventually, you (or your program) will rise to a B or A level, and in a C-average world, that will be a big deal.
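The cumulative effect is easy to underestimate. Here's a back-of-the-envelope sketch; the 1%-per-week improvement rate is invented purely for illustration.

```python
# Back-of-the-envelope sketch of how small improvements compound over time.
# The 1%-per-week figure is a made-up assumption for illustration only.

weekly_gain = 0.01
level = 1.0
for week in range(52):
    level *= 1 + weekly_gain

print(f"After a year of 1% weekly improvements: {level:.2f}x baseline")
```

A year of tiny weekly improvements compounds to roughly a two-thirds gain over baseline, which is exactly the kind of edge that stands out in a C-average world.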

Three Reasons Why Research Request Forms Are a Terrible Idea

Okay, so first let me issue a caveat and say that my title is intentionally provocative: I don’t necessarily think research request forms are always a bad idea. There are contexts in which they make sense. But let me give a few reasons why I prefer not to use them in my shop.

They strip away the personal interaction

In previous posts I have made it pretty clear that it’s important to maintain a good relationship between the researcher and the gift officer. Personal interaction goes a long way in helping to develop and maintain that relationship. When personal interaction decreases, the relationship suffers, and while the work might still get done, I question whether it will be as effective as it could be. When you make a gift officer fill out a form to make a request, they don’t get the benefit of a face to face (or telephone) conversation with you. (And why would you deprive them of that? You know how charming and wonderful you are.)

They strip away the context and background of the request

When someone asks for something, if I understand the context and background of their request I am best positioned to be able to meet their needs. When I know why they’re asking for what they’re requesting, I will have a much better sense of what I really need to do to help them.

For example, there may be a couple of different reasons why a gift officer might need a rating assigned to a prospect: they might be trying to advise a volunteer on how to steer the gift conversation with that prospect; or they might be trying to prioritize some leads referred to them. In the latter case, I would likely take a very quick, “chainsaw cut” approach to estimating capacity. We’re just trying to pit some prospects against one another to see whom we should call first. In the former case, I might spend more time understanding the complete financial picture of the prospect (What visible assets are there? What other obligations and liabilities could this person have? Is it likely that market conditions are impacting this person’s self-perception of their wealth?) The context and motivation behind a request makes a big difference in how I approach the task at hand.

They strip away viable solutions

One of the benefits of a request form is that it helps simplify the process of asking for something: the requester can select option A, B, or C. This is great when three options will suffice. But the reality of fundraising is that the kinds of information we need at any given time vary widely. Our needs often do not fit into neat little boxes. So when people have just a few options to choose from, they select the one that seems like it will get them what they need (and cross their fingers and hope they are right!).

Here’s an example: imagine you have three levels of “profiles” that people can request. Only one of these – the “full biographical profile,” which includes everything AND the kitchen sink – contains information about the boards a prospect serves on. So if a gift officer is trying to get a visit with Joe Potentialdonor, and they want to figure out who might know Mr. Potentialdonor, they will request the “full biographical profile” so they can get information about what boards he is on. The researcher spends eight hours putting together the profile, when the gift officer really only needed some information that could have been compiled in 45 minutes.

If the gift officer has no form to fill out, she’s free to ask for exactly what she needs: people who might help her reach Mr. Potentialdonor. The gift officer has the great idea to start by looking at people who are on the same boards as Mr. Potentialdonor, and the researcher might have other ideas of who to look for (classmates, co-workers, etc.). In this scenario, the gift officer gets even more of what she really needs, and the researcher spends far less time procuring it for her.

If you’re thinking about using a research request form, be sure you consider the trade-offs. They can help streamline the process of asking for standard products, but they strip away some valuable things in the process! Is it worth it?

The five best free prospect research resources

I have the luxury of working in a well-supported prospect research shop, which means that I typically don’t have to worry about finding free prospect research resources. But a couple of years ago, I started doing a lot more freelance consulting and research work on the side, on a shoestring budget, and I realized I needed to brush up on free prospect research resources. There are a lot out there, but I’ve found that there are just five that I really, really rely on. If I were stuck on a desert island and could access just five resources, these are the sites I’d access:

1. The FEC’s campaign finance disclosure portal (specifically, the advanced search page)

The FEC’s advanced search page is a pretty powerful search tool that lets you search on a number of different criteria, so you can query as broadly or as narrowly as you’d like. You can even drill down in the search results to see the actual original filing. Even when I’m using a paid resource, like a vendor that will aggregate FEC contributions attributed to a particular donor, I will still go directly to the FEC site to verify that the vendor got it right.

One of the things I really like about the FEC filings is that you can often get employment information and home addresses from their filings.

And here’s a tip for searching the filings: use just the donor’s name plus a city and state, and try the search with the city from their home address AND the city from their work address (assuming you have both).

2. The SEC’s EDGAR database (specifically, the full-text filings search page)

I do use a vendor for my SEC filing searches in my day job, mostly because their search interface is really, really powerful. However, the SEC’s search interface for their EDGAR database actually isn’t far behind in terms of its robustness. The full-text search covers the last four years of filings and can be used in advanced mode, which allows for a lot of flexibility.

3. County assessors’ offices (or, more helpfully, a directory site that lists many of the assessors’ office websites from around the United States)

Each county assessor’s office is different: some let you search online on a whole bunch of different datapoints; some only let you search on a few; some don’t let you see the property owner’s name; some don’t even let you query their property rolls online. Thankfully, there are enough that provide reasonably good access to make it worth my while to check them out.

There are several benefits to looking up an individual’s property records, two of which I find particularly helpful: (1) you can often confirm that your person owns the property in question (and potentially when they bought it and what they paid for it) and (2) you can often get the name of their spouse. The spouse name goes a long way in confirming info found in other places (appearances in donor lists, for example); the property valuation and ownership info helps shed some light on how wealthy a prospect might be. However, to get a better sense of a property’s value, I avoid relying on the assessor’s market value, and instead prefer my fourth most-valuable resource.

4. eppraisal
County assessors’ offices are all over the map in terms of how they assign a market value to a property. Some stay pretty close to actual market value (Minnesota is decent); others have specific laws and regulations in place that make it really hard for them to do so (California comes to mind). For this reason, I much prefer to get an estimate of the current property value, and eppraisal is my favorite source for doing so. Not only does eppraisal provide their own property value estimate, but they also show you what value Zillow assigns to the property!

5. The National Center for Charitable Statistics

I used to be big on Guidestar and the Foundation Center. They were the only games in town for an easy way of getting to 990 forms.

No more.

The National Center for Charitable Statistics has a free, slick search tool that lets you look up information on pretty much any nonprofit organization in the United States. (And you don’t have to register to use it.) Their query tool is very easy to use but flexible enough to do very specific searches, and the results include lots of summary information about nonprofits. The BEST part though is their collection of 990 filings: NCCS provides filings going back seven years (in many cases).

Those are my five! What free prospect research resources do you like?

The problem with data visualizations

You’ve probably heard the skeptical aphorism about statistics: “There are three kinds of lies: lies, damned lies, and statistics.” Unfortunately, I worry that we may soon hear “data visualization” tacked on to that list as the fourth and most deceiving way of communicating information. This is a problem.

A couple months ago, I was reading through some materials about a company’s financial health, and they included a dual axis chart showing the company’s Revenue and EBITDA from 2008 to 2014 (projected).  (See below.)

Dual Axes example

Looking at the chart, it appears that the two measures increase in near-parallel fashion. The slopes of their lines are pretty comparable, particularly in the later years on the chart. The problem is, this misrepresents what is actually happening: the axis on the left increases in increments of 20 while the one on the right does so in increments of 10, so the apparent relationship between the lines is an artifact of the scaling.

When we chart the same data on a single axis we see that EBITDA fails to increase nearly as dramatically between 2012 and 2014 as revenue does. (See below.) If I’m evaluating the health of this company and its future prospects, that difference may be important!

Single Axis example

Adding a second axis seems like such a simple, innocuous thing, but it changes how the data might be interpreted and understood. This is just one example of the substantial impact a seemingly small design decision can have.
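To make the effect concrete, here’s a small sketch using matplotlib (the revenue and EBITDA figures below are hypothetical, chosen only to illustrate the scaling effect, not the actual company’s numbers). The dual-axis version stretches each series to fill its own scale, making the lines look nearly parallel; the single-axis version makes the divergence obvious:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

years = list(range(2008, 2015))
# Hypothetical figures for illustration only.
revenue = [100, 115, 130, 150, 175, 205, 240]  # accelerating growth
ebitda = [20, 23, 26, 30, 34, 37, 40]          # flattening after 2012

# Dual-axis version: each series gets its own y-scale, so both lines
# span the full height of the plot and appear to move in parallel.
fig, ax1 = plt.subplots()
ax1.plot(years, revenue, label="Revenue")
ax2 = ax1.twinx()  # secondary y-axis on the right
ax2.plot(years, ebitda, linestyle="--", label="EBITDA")
fig.savefig("dual_axis.png")

# Single-axis version: on one shared scale, EBITDA's flattening
# after 2012 stands out next to revenue's acceleration.
fig2, ax = plt.subplots()
ax.plot(years, revenue, label="Revenue")
ax.plot(years, ebitda, linestyle="--", label="EBITDA")
ax.legend()
fig2.savefig("single_axis.png")
```

The code itself is doing nothing wrong in either case; the point is that `twinx()` quietly introduces a second scale, and it’s the reader who pays the price if the scales aren’t clearly labeled.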

Why should we care about this? There are two main reasons:

  1. As consumers of more and more visual data, we need to be aware of situations like this where visualization design decisions may obscure (or at least distract from) certain critical pieces of information. Just because it’s data (data never lies!) and you can see it (my eyes would never deceive me!), doesn’t mean it is presented in an objective way.
  2. As more of us are in roles where we create data visualizations, we need to be aware that if we are careless, we run the risk of misleading our audience or imposing (hopefully unintentionally) our own viewpoint on the information we present.

Data visualization will likely be one of those things many of us try to do without any formal training, and I worry that, as a result, a lot of folks will do it badly. Am I being overly paranoid? I hope so. But this particular example doesn’t do much to allay that paranoia.