Evaluation Process
This page provides an overview of how we currently decide which charities to recommend out of the small group that makes it through the application and selection stages. For more details on earlier stages of the process, visit How We Evaluate Charities. To find details on our process from previous years, see the Process Archive.
How We Gather Information
Once a charity is selected for evaluation, we contact them and share the Charity Evaluation Handbook to explain the process, policies, and expectations. Over the following few months, we send charities a series of questions based on our charity evaluation criteria, study the materials submitted to us, and survey other materials published by or about the charity to develop our understanding of their work. We also send an engagement survey to the charity’s staff to help us understand the charity’s work environment.
We ask all charities under consideration questions on the following topics:
- the organization’s activities, the intended outcomes of those activities, and the ultimate change for animals they seek to achieve (collectively, their theory of change);
- evidence of any benefits the organization’s work has brought about or is expected to bring about for animals, and the amount of money spent to achieve those benefits;
- their staff size;
- their historical revenue and expenditures;
- their current reserves held, reserves policy, and reserves target;
- how they would adapt their operations and programs under different funding scenarios;
- the organization’s human resource policies and processes; and
- the organization’s governance and structure.
How We Evaluate Charities
Using the information we collect, we assess charities on three criteria: Impact, Room for More Funding, and Organizational Health. For more details, please visit our Charity Evaluation Criteria page.
To make assessments, we identify uncertainties, conduct research, consult experts, create quantitative models, produce qualitative analyses, and ask increasingly tailored and detailed questions of the charities until we have sufficiently resolved those uncertainties.
At various points during the assessment phase, we conduct red teaming: members of ACE’s evaluation team stress-test the arguments, claims, and decisions in their coworkers’ assessments to detect errors and counterbalance any biases individual team members may hold. We also compare charities’ assessments with one another for consistency. Finally, the charities themselves have the opportunity to see intermediate versions of our assessments, challenge the content, and provide additional information before the assessments are finalized.
How We Make Recommendation Decisions
Once assessments are complete, we consider all the information we have about each charity and explicitly compare them against others being evaluated in the same year.
Each member of the evaluation team scores each charity according to the decision guidelines below on a scale of 1–3 (1 = comparatively weak, 2 = unclear/middling, 3 = comparatively strong), using the assessments as their guide. Then, each member uses those scores to decide on a final score for each charity on a scale of 1–7 (1 = strongly do not recommend, 4 = neutral, 7 = strongly recommend) based on whether they think ACE should recommend them. This is done independently and anonymously. Though quantitative, these scores are not treated as objective measures of impact. Rather, they help us express and compare our judgments more precisely, providing structure and transparency to the qualitative elements of our decision-making process.
The evaluation team then comes together to share their scores and discuss why each charity should be recommended or not. Afterward, team members adjust their final score for each charity based on the discussion and submit a second set of scores independently and anonymously.
Finally, the team reviews the updated scores and arrives at final recommendation decisions via consensus. We define consensus as general agreement among the members of a group. It implies that while not everyone may fully agree with a decision, they are willing to accept and support it because they believe it is the best option, or because they respect the group’s collective expertise.
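The two-round scoring process described above can be sketched in code. The following is a minimal, hypothetical illustration (the evaluator names and scores are invented, and ACE's actual decision is made by discussion and consensus, not by this arithmetic):

```python
from statistics import median

# Hypothetical sketch of the two-round scoring described above.
# Final scores use a 1-7 scale (1 = strongly do not recommend,
# 4 = neutral, 7 = strongly recommend).

def validate(score: int) -> int:
    """Reject scores outside the 1-7 recommendation scale."""
    if not 1 <= score <= 7:
        raise ValueError(f"score {score} outside 1-7 scale")
    return score

def summarize(scores: dict) -> dict:
    """Summarize a round of anonymous scores for one charity."""
    vals = [validate(v) for v in scores.values()]
    return {"median": median(vals), "spread": max(vals) - min(vals)}

# Round 1: each evaluator submits a score independently and anonymously.
round_one = {"evaluator_a": 5, "evaluator_b": 6, "evaluator_c": 4}

# Round 2: after group discussion, evaluators may adjust their scores
# and submit again, independently and anonymously.
round_two = {"evaluator_a": 5, "evaluator_b": 5, "evaluator_c": 5}

print(summarize(round_one))  # wider spread before discussion
print(summarize(round_two))  # scores converge after discussion
```

A narrowing spread between rounds is one signal that the team is approaching the general agreement the consensus step requires; a persistent spread would prompt further discussion.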
Decision Guidelines
We use the following guidelines to decompose our decisions into a predefined set of sub-judgments.1 They roughly correspond to our charity evaluation criteria:
- Does the charity have a strong theory of change with evidence and reasoning that supports their strategy, taking into account assumptions and risks?
- Does the charity’s work have the potential to significantly improve animal wellbeing or prevent suffering, especially in a comparatively cost-effective way?
- Is the charity’s leadership adequately sensitive to cost-effectiveness when making strategic decisions?
- Are we confident that their future work will compare favorably to that of other charities being evaluated?
- Does the charity have sufficient room for more funding (i.e., are there any concerns about the charity being able to effectively deploy ACE-influenced funding)?
- Are we confident that an ACE recommendation would make a significant counterfactual difference when considering the likelihood of support from other funders and evaluators?
- Are we confident the charity has no organizational health issues substantial enough to undermine their future effectiveness and stability?
The relative importance of these guidelines is not fixed (e.g., cost-effectiveness can play a large role for one charity but a smaller role for another), but in general, the first four (which are more directly related to positive impact for animals) carry the most weight. Because these guiding questions are intended to cover all decision-relevant factors, once they are answered, we set aside all other information. This allows us to make decisions based entirely on the merits of the charities. We explicitly do not take into account the following:
- whether a charity was previously recommended,
- ACE’s past relationship with the charity, and
- the potential impact of recommending/not recommending a charity on ACE’s reputation and public relations.
What We Publish
Once decisions are made, we write a detailed review for each charity we choose to recommend, so that donors and charities can follow our reasoning, and a summary review for each charity we evaluate but do not recommend. The detailed reviews for Recommended Charities include an overview followed by sections explaining how well the charity performs on each of our evaluation criteria; the summary reviews include only an overview. We also prepare supporting documents, such as cost-effectiveness analyses and theory of change tables for all evaluated charities and ‘Financials and Future Plans’ spreadsheets for Recommended Charities.
We then share these detailed and summary reviews with the charities for feedback and approval to publish them. Because our evaluations rely on information that may be confidential, we sometimes make substantive changes to our reviews as a result of feedback from a charity to protect private information. We also correct factual errors or alter wording or emphasis, without affecting the substance of the review.
If the charity agrees, we publish the detailed or summary review and approved supporting materials on our website and list the charity as either “Recommended” or “Evaluated.” If the charity does not agree to their review being published, we list them on our site as “Declined to be Reviewed/Published.”
Finally, we award participation grants of at least $2,000 to all charities that participate but do not end up being recommended. These grants are not contingent on charities’ decision to publish their review; we also award participation grants to charities whose reviews we do not publish, assuming they made a good faith effort to engage with us during the evaluation process.
1. See Ploder (2024) for additional context considered in our decision-making.