Frequently Asked Questions

American donors report caring strongly about impact. Yet no major nonprofit rater assesses results. ImpactMatters is filling this gap with a new rating system that rewards nonprofits that are highly impactful (i.e., cost-effective). To issue ratings, we gather information published by the nonprofit, calculate its cost-effectiveness and assign 1-5 stars. In November 2019, we will release a new website with 1,000+ ratings.

For more information, see our nonprofit FAQ.

Cost-effectiveness is a measure of the impact of a nonprofit’s program relative to the cost to run that program.

Impact is a measure of the change caused by a nonprofit’s program, net of counterfactual change (see more below).

A cost-effective program makes good use of resources — compared to possible alternative uses — to improve the lives of the people it serves. ImpactMatters awards four- and five-star ratings to nonprofits that are highly cost-effective.

Cost-effectiveness is often viewed as anathema in the nonprofit sector. We see it as the opposite. Take a simple thought exercise: A program has a limited budget of $100,000 to improve literacy in a community. It can choose between two approaches: one that can boost literacy by a grade level for 100 students and a second that can boost literacy by a grade level for 200 students. All else equal, a sensible program administrator would choose the second, since it reaches twice as many students with the same budget. This is a cost-effectiveness decision. We have limited resources and unlimited need. Cost-effectiveness is a decision tool that makes those resources go further, helping more people in more ways.
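To make the arithmetic behind that choice concrete, here is a minimal sketch in Python using only the hypothetical figures from the thought exercise above (the variable and function names are ours, for illustration):

```python
# Hypothetical figures from the literacy thought exercise above.
BUDGET = 100_000  # dollars available to the program


def cost_per_outcome(total_cost, outcomes):
    """Dollars spent per unit of outcome achieved
    (here, one student gaining one grade level in literacy)."""
    return total_cost / outcomes


approach_one = cost_per_outcome(BUDGET, 100)  # $1,000 per grade-level gain
approach_two = cost_per_outcome(BUDGET, 200)  # $500 per grade-level gain

print(f"Approach one: ${approach_one:,.0f} per grade-level gain")
print(f"Approach two: ${approach_two:,.0f} per grade-level gain")
# The second approach is twice as cost-effective: the same $100,000
# buys 200 grade-level gains instead of 100.
```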

We rate “service delivery” nonprofits, i.e., nonprofits that deliver a program directly to people to achieve a specific health, anti-poverty, education or similar outcome. We do not rate two types of nonprofits: (1) advocacy and research nonprofits; and (2) “donor use” nonprofits.

Advocacy and research nonprofits. Nonprofits that seek systems change through advocacy, research or similar activities may be highly effective, but their impact is much harder to measure. The link between the nonprofit’s work and the final outcome is longer, and often there are alternative explanations for why that particular piece of legislation passed or those minds changed. We do not (yet) have a good method for consistently estimating the impact of these programs, and so we do not issue ratings for them.

"Donor use" nonprofits. For some nonprofits, the donor herself is a user of the charity, e.g., religious organizations, community associations and most arts and culture institutions like museums. We neither encourage nor discourage donating to such nonprofits; we just do not rate them. With these “donor use” nonprofits, the donor decision to donate is largely driven by her personal experience with the charity. As such, we do not see the same value-added from applying our methods to such organizations.

Revenue from donations. Nonprofits can get funding from individual contributions, foundation and government grants, investment income and other sources. Because the audience for our ratings is donors, we only rate nonprofits that receive at least some funding from individuals or foundations. A nonprofit that is less reliant on donor dollars is neither worse nor better, just less relevant to donors seeking guidance and confidence when giving.

We will release our first set of ratings in November during the Giving Season.

To understand the impact of a program, we must ask the counterfactual question: What would have happened to beneficiaries if the program had not, counter to fact, been there to serve them? We then measure the difference between what actually happened and what we think would have happened if the program had not been around. That difference is the impact of the program. Just looking at what actually happened is not sufficient for understanding impact because many factors besides the program could affect how beneficiaries fare over time. For example, an economic boom affects both beneficiaries of a job training program and non-beneficiaries. An observed increase in employment among beneficiaries is insufficient evidence to conclude that the program — and not the economic boom or other factors — caused an increase in employment.
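To illustrate the calculation, here is a minimal sketch in Python with made-up numbers for a hypothetical job training program (none of these figures come from an actual rating):

```python
# Hypothetical job training program, for illustration only.
employed_before = 0.50       # share of participants employed at enrollment
employed_after = 0.70        # share employed a year later (what actually happened)
counterfactual_after = 0.62  # our estimate of the share who would have been
                             # employed anyway (e.g., because of an economic boom)

observed_change = observed = employed_after - employed_before          # about 0.20
counterfactual_change = counterfactual_after - employed_before         # about 0.12

# Impact is the observed change net of the counterfactual change.
impact = observed_change - counterfactual_change  # about 0.08

print(f"Estimated impact: {impact:.0%} of participants employed because of the program")
# Assuming a zero counterfactual would instead credit the program with the
# full 20-point gain, overstating its impact.
```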

Most communication about impact today inadvertently ignores the counterfactual. But ignoring the counterfactual, in effect, assumes the counterfactual to be zero. In other words, it assumes that in the absence of a program, the outcomes of beneficiaries would not have changed at all. This may well be the case for some programs in certain settings. But for many others, it would be extreme and erroneous to assume, for instance, that without a program, no children would have graduated from high school.