How to evaluate the next round of PEPFAR funding is the focus of a report just released by the Institute of Medicine. Design Considerations for Evaluating the Impact of PEPFAR is the summary of a 2-day workshop on methodological, policy, and practical design considerations whose participants included staff of the U.S. Congress; PEPFAR officials and implementers; representatives of major multilateral organizations such as The Global Fund, UNAIDS, and the World Bank; international nongovernmental organizations; evaluation experts; and representatives of PEPFAR focus countries.
The outcome of the workshop was a set of questions to guide the evaluation of the impact of PEPFAR programs, clustered into nine categories: cost-effectiveness, logic of conceptual approach, health impacts, impacts beyond health, capacity building and health systems strengthening, coordination and harmonization, sustainability, equity and fairness, and unintended impacts. The major elements contained in each category are too numerous to list here, and the participants never got to individual items and scales. Suffice it to say that the experts came up with a million-dollar (or more) set of design principles that will either never see the light of day or will not have an intervention to evaluate because they will have sucked up all the money designing the study.
Workshop participants highlighted that evaluation efforts must be viewed within the larger context of workforce development and building local capacity in countries to respond to their own public health crises through the collection, analysis, and application of program information. The recommendations on how to do this were the usual fare – dissemination of methodologies, provision of technical assistance, and recruitment and training of personnel in evaluation methods. As often happens when evaluators get together to talk about programs, they overlook the necessity of embedding evaluation as a critical part of overall program architecture and design – one of the hallmarks of the social marketing approach. Somehow, this attitude that there must be a separation between implementation and evaluation activities (despite the rhetoric about the importance of integration) needs to be overcome if we are ever going to apply all of our resources to HIV/AIDS prevention in a conscious and deliberate way. If evaluators do not understand implementation models, and implementers do not understand evaluation methodologies, we are destined for mediocrity (and a quick look at the workshop participant list did not assuage my feeling that the implementation voice was, shall we say, underrepresented).
The workshop came up with general principles to guide the development of PEPFAR evaluations that do not break any new ground.
- Prioritization to narrow down what needs to be measured. While they offer that this prioritization should be informed by who needs the information and when, they do not address the issue of prioritizing the stakeholders themselves (who gets listened to first and last) – though implicitly the report seems to emphasize policymakers/donors and program planners.
- Ongoing evaluation (what they termed formative; I would call it process evaluation) to improve programming and inform decision making, as opposed to evaluations that focus only on outcomes at the end of a program to judge its success or failure.
- Multiple and complementary methods that employ both qualitative and quantitative approaches, such as case studies, working papers, interviews, models, literature reviews, surveys, fieldwork, participatory approaches, and theory.
- The use of newer methods of randomization to improve the value and credibility of the evaluation. They point out that while many perceive randomization as difficult to implement and impractical at the country level, new methods are available to more easily incorporate randomization into a study. They must not have read, or do not believe, the advice of the Global HIV Prevention Working Group, which advocates for fewer randomized studies and more attention to effectiveness research and scaling projects up, rather than draining resources into more randomized studies.
- Consultation and communication with decision-makers throughout the evaluation process to understand their needs for evaluation (a topic others have explored in some detail from a social marketing POV).
- Attention to limitations in data collection capacity and in the models used to guide the design of evaluations. In the latter case, they call for more empirical validation of existing models and for improving the accuracy of these models by adding more variables. Sounds to me like (academic) evaluators making more work for themselves rather than focusing on implementing what we already know with models already demonstrated to be effective (see the Working Group report again).
- A call for ‘early design’ work for evaluations so that evaluators can use randomized approaches more often and detect the appropriate impacts early in a program’s existence.
“Comparisons across contexts is an important attribute of evaluations, but change may be highly contextual… Factors independent of program interventions may have a significant influence on change.” So we should spend more time and resources early on designing randomized studies that have little or no transferability or generalizability?
The most glaring problem of this report, from my POV, is that a 10-page table detailing variables to measure HIV/AIDS-specific and general impacts has only one item for “Behavioral change – Modification of sexual, injection, and drug-adherence practices.” That is a scary commentary on what is being valued and pursued by the people informing international policy on what is important to measure, and how, in confronting the HIV/AIDS epidemic. There seems to be no appreciation for what behavioral research has identified as effective prevention strategies to pursue aggressively in PEPFAR and other HIV prevention programs. More PEPFAR time and resources should be spent on perfecting laboratory assays and model testing? Welcome to the public health–academic complex. Why change things when there’s always more to learn?
And this kicker: “Speaker Shannon Hader of OGAC asserted that measuring behavioral outcomes is much more difficult than measuring treatment outcomes” (page 45). Right, why tackle the hard stuff.
To make the distinctions clear, and to show how far the IOM workshop participants took their eye off the ball, here are the Global HIV Prevention Working Group recommendations for research:
HIV prevention scale-up will help ensure readiness of countries to rapidly introduce new prevention approaches as they emerge. In the meantime, national governments and communities should provide strong support to research efforts to develop new prevention tools and to improve on those that already exist. National governments and research agencies should prioritize social research to improve understanding of factors that increase vulnerability, identify and characterize programs and specific policy actions to address such factors, and inform the development and adaptation of national HIV prevention strategies. Operational research should focus on optimal, cost-effective strategies to accelerate scale-up, ensure sustainability, and maximize the impact of HIV prevention strategies.
If you do take a look at this IOM report (and, despite my protests, if you are working with PEPFAR it is better to know the context than not), the download is free with registration, or the report can be read online. The most useful discussion for planning your own efforts is in Chapter 4, which includes several case examples of what they consider to be exemplary efforts.
[Ed Note: Type 3 errors are finding the right answers to the wrong questions.]