Impact Rating Standard

ImpactMatters is a new charity rating agency that helps donors find high-impact nonprofits. To make smart decisions, donors need to know the impact of the nonprofits they are asked to support. Little information readily exists. Often, nonprofits communicate only through stories, without accompanying evidence. ImpactMatters is filling this information gap with a rating system that takes explicit account of how much good the nonprofit achieves per dollar of cost.

To assign impact ratings, we use publicly available information to estimate the actual impact the nonprofit’s program has on people’s lives. We then compare those impact estimates to benchmarks to determine whether the nonprofit is cost-effective.

Rating is a complex and inherently imperfect exercise, and we urge you to read on for details of how and why we issue these ratings.


Why cost-effectiveness

Cost-effectiveness is often viewed as anathema to the nonprofit sector. We see it as the opposite. Take a simple thought exercise: A program has a limited budget of $100,000 to improve literacy in a community. It can choose between two approaches to do so: one that can boost literacy by a grade level for 100 students and a second that can also boost literacy by a grade level but for 200 students. All else equal, a sensible program administrator would choose the second, as of course it reaches twice as many students. This is a cost-effectiveness decision. We have limited resources and unlimited needs. Cost-effectiveness is a decision tool that makes those resources go further — helping more people in more ways.
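The trade-off in this thought exercise reduces to simple unit-cost arithmetic. A minimal sketch, using the dollar figure and student counts from the example above:

```python
# Cost per outcome for the two literacy approaches in the example.
budget = 100_000          # dollars available to the program
students_a = 100          # approach A: students boosted one grade level
students_b = 200          # approach B: students boosted one grade level

cost_per_student_a = budget / students_a   # $1,000 per grade-level gain
cost_per_student_b = budget / students_b   # $500 per grade-level gain

# The lower cost per outcome identifies the more cost-effective approach.
better = "B" if cost_per_student_b < cost_per_student_a else "A"
```

Approach B delivers the same outcome at half the cost per student, which is exactly the comparison a cost-effectiveness rating formalizes.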


Our rating methodology

Each nonprofit rating is based on an estimate of the impact of a program or programs operated by the nonprofit. To estimate the impact of a program, we follow a methodology developed for that type of program. To develop the methodology, we research the problem and how nonprofits address it. We identify methods for measuring impact. We then explore individual programs to understand context, such as beneficiaries and geography, and how nonprofits typically report impact data. After drafting the methodology, where appropriate, we solicit expert advice from academic experts. In some cases, we create a custom methodology for a nonprofit. We do so if the nonprofit is sufficiently large and has impact data that cannot be analyzed under an existing methodology.

Learn More

For our program methodologies, see here. For our overall methodology for conducting cost-effectiveness analysis, see here.


Nonprofits we rate

We rate “service delivery” nonprofits, i.e. nonprofits that deliver a program directly to people to achieve a specific health, anti-poverty, education or similar outcome.

Learn more about who we rate

Programs we are currently analyzing

Food distribution
Emergency shelter
Postsecondary scholarships
Cataract surgery
Water purification
Tree planting
Financial assistance for patients with medical conditions
Veterans disability benefits

Nonprofits we do not rate

We do not rate two types of nonprofits:

  • Advocacy and research nonprofits. Nonprofits that seek systems change through advocacy, research or similar activities may engage in important work. We have not yet developed a methodology for analyzing these groups, though we are hopeful it is possible.

  • “Donor use” nonprofits, i.e., membership-based organizations where the donor is also the user of the charity, such as religious organizations or institutions like museums. For these charities, decisions to donate are commonly driven by the donor’s personal experience at the charity, and thus impact estimates are not particularly useful in understanding that value.


How we apply the methodology

Once we have completed a methodology for a program type, we create a list of nonprofits implementing that program. For each nonprofit, we search for impact-related information available from public sources, including the nonprofit’s website, tax forms, annual reports, GuideStar data and audited financial statements. If we find enough information, we create an impact estimate by applying a set of standard algorithms.

We ask each nonprofit to review our work and, if desired, provide critiques and new data. After incorporating this feedback, we double check our work and then make the rating public. You can report errors in our work by emailing us at info@impactmatters.org.

Our Process

  1. Search public information

  2. Estimate impact

  3. Rate program based on estimate

  4. Get feedback from the nonprofit

  5. Publish the rating to donors


We analyze nonprofits continuously

We are moving systematically through the nonprofit sector, analyzing one type of program at a time. We will periodically release batches of new ratings.


Assigning stars

We rate nonprofits on a 1 to 5 scale. A 5-star nonprofit is generating high impact. Each nonprofit starts with a minimum of 1 star. We add an additional star for each criterion that the nonprofit meets. Each new star is a “leveling up” from the previous star — in other words, to get 3 stars, the nonprofit must meet the criteria for both 2 and 3 stars. If we analyze more than one program for a nonprofit, we issue a rating based on all of those estimates (discussed further below).

We assign stars as follows:


1 star

Nonprofits receive 1 star if they show signs of improprieties. By screening out nonprofits that are evidently mismanaged, we provide confidence that nonprofits with higher star ratings have passed an essential litmus test of trustworthiness. A nonprofit receives 1 star if it has at least two warning signs (excessive overhead, paid non-staff directors, or no financial audit for a large nonprofit) or one indication of impropriety (excess benefit transactions, material diversion of assets, or a moderate or high Charity Navigator advisory).

For more information on our criteria, see here.


2 stars

A nonprofit receives 2 stars if it shows no indications of improprieties but has not provided sufficient public information to estimate the impact of its programs. Impact calculations typically require a small set of data the nonprofit already has on hand, such as the number of meals served. See the image below for an example of data requirements, or an individual program methodology for more detailed requirements. If a nonprofit has made this data available, it receives at least a third star. More details on nonprofit reporting are available in the nonprofit portal.

We believe it is reasonable to expect nonprofits to publish basic impact information. However, as we have not conducted any sustained education or advocacy efforts, we believe it is unreasonable to punish nonprofits for a lack of transparency today. We will not issue 2-star ratings until we have done more to communicate the importance of impact information.


3, 4 and 5 stars

Nonprofits earn 3, 4 and 5 stars based on their cost-effectiveness. We define cost-effective as producing good outcomes relative to costs. We follow some general principles, laid out below, when determining what is and is not cost-effective. However, for a precise understanding, we need to understand the specifics of each program type, which are detailed in program-specific methodology summaries here.

A nonprofit receives 3 stars if it fails our benchmark (not cost-effective), 4 stars if it passes a standard benchmark (cost-effective) and 5 stars if it passes a high benchmark (highly cost-effective).
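The leveling-up described above can be sketched as a decision ladder. This is our illustrative reading of the criteria, not ImpactMatters’ published decision rules; the parameter names and the use of a single cost-per-outcome number are simplifying assumptions:

```python
def assign_stars(improprieties: bool,
                 impact_data_public: bool,
                 cost_per_outcome: float,
                 standard_benchmark: float,
                 high_benchmark: float) -> int:
    """Hypothetical sketch of the star ladder; names are ours, not official."""
    if improprieties:             # warning signs or indications of impropriety
        return 1
    if not impact_data_public:    # impact cannot be estimated from public data
        return 2
    if cost_per_outcome <= high_benchmark:      # passes the high benchmark
        return 5
    if cost_per_outcome <= standard_benchmark:  # passes the standard benchmark
        return 4
    return 3                      # rated, but not cost-effective
```

Each rung must be cleared before the next is considered, which is what makes each star a “leveling up” from the one below it.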

Donors often care about a specific cause and therefore seek out recommendations only within that cause. We sometimes adjust the star ratings of nonprofits within a particular cause to show donors the top nonprofits for that cause.


Outcomes

When determining what is cost-effective for a particular type of program, we consider the main mission of that program type. For example, we analyze the cost-effectiveness of a soup kitchen in reference to a mission of reducing hunger. We do not compare one mission (say, increased literacy) to another (say, arts appreciation).

We recognize that many programs have complex missions, and those missions may diverge from the mission we set for a program type. Where a nonprofit has evidence that it is boosting outcomes in addition to the one we are analyzing, we give the nonprofit a 1-star boost.


How we rate cost-effectiveness

For each program type, we set thresholds for “cost-effective” and “highly cost-effective.”

Setting thresholds is a judgment call. We are answering the question: could the resources used in the program serve substantially more people, or serve them better? Each nonprofit can choose among alternative types of programs to achieve the same mission. For example, a food program could choose to serve meals itself or distribute meal vouchers. To set benchmarks, we rely primarily on two sources. First, we look to the price of the good or service in the market. For example, we compare the nonprofit’s cost per meal to the cost to buy a meal in the county. Second, we look to established norms and authoritative bodies. For example, we base our health benchmarks on the WHO.
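A market-price benchmark of the kind described above can be sketched as a simple comparison. The multipliers here are illustrative assumptions, not ImpactMatters’ actual thresholds:

```python
def cost_effectiveness_tier(cost_per_meal: float, market_price: float) -> str:
    """Hypothetical: classify a food program against a market-price benchmark.

    The 0.5x multiplier for the high benchmark is our illustrative choice.
    """
    if cost_per_meal <= 0.5 * market_price:
        return "highly cost-effective"
    if cost_per_meal <= market_price:
        return "cost-effective"
    return "not cost-effective"
```

For instance, a program serving meals at $3 each in a county where a meal costs $10 would clear even the high benchmark, while one spending $12 per meal would not clear the standard benchmark.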

See here for the full discussion of our criteria.


Our rating philosophy

Our mission is to reward high-impact nonprofits and help all nonprofits better communicate impact. We want to see the sector thrive, and believe a rating system that rewards success is an important tool for growing social impact. Our ratings show many excellent nonprofits – the majority of our nonprofits are 5 stars. We think this reflects an important finding: nonprofits are doing a very good job.

We do not generate ranked comparisons that aim to identify the best nonprofit anywhere (Princeton is #1, Harvard is #2). Rather, our ratings aim to inform a donor whether a particular nonprofit is effectively helping people and therefore deserving of support. For example, we are not looking for the top food bank in America. Instead, we are helping donors in Missoula decide whether their food bank is doing a good job.


How we address complexity

We recognize the complexity of social work and impact measurement, and we address it in a few ways. First, we narrowly define each program type we analyze, thus avoiding misleading comparisons. If there is an important variant of a program type, we treat the variant as its own, separate program type. Second, we recognize that contextual factors influence the cost-effectiveness of nonprofits; as we discuss above, we identify these factors and adjust ratings as appropriate. Finally, we recognize that the world is complex and our work only carries so far. We want to be a positive force. So if we can’t make it work – say, accounting for a nonprofit’s mission of fostering an appreciation of baroque music – we don’t do it.


Rating major nonprofits

Ratings of the major nonprofits are particularly valuable, because they provide guidance to many donors. We analyze major nonprofits using the same cost-effectiveness and rating principles as all other nonprofits. However, our process differs in a few important ways:

  • Rather than starting with a particular program type, we identify individual major nonprofits using resources like the Forbes 100 list. We include nonprofits that receive high levels of charitable contributions.

  • For major nonprofits we may develop a custom methodology that is based on the specific data that nonprofit makes available.

  • Where we don’t find data, we provide guidance to the major nonprofit on how to report impact.