Better aid

Cash Programming Metrics: Are We Talking About the Same Thing?

IRC research and development officer

We blogged previously about our latest research efforts to make humanitarian cash transfer programs (CTPs) more efficient, timely, and sensitive to the costs borne by the programs’ clients. The subsequent planning and design phase of these efforts has since exposed a lack of industry-wide benchmarks and clear measurement guidance for CTPs; without them, organizations have no defined standards to work from and toward. Establishing these benchmarks and definitions for common indicators would allow donors and clients to hold the industry accountable. Here, we use two metrics, time to delivery and cost efficiency, to highlight the everyday measurement challenges CTPs face.

Time to delivery is the length of time an organization takes to deliver cash to recipients after a crisis occurs. Getting cash to people quickly allows them to meet their basic needs without resorting to negative coping mechanisms, such as selling assets or taking on hazardous work to make ends meet.

But measuring time to delivery is tricky. From the perspective of the client, the start and end points may be obvious, but it is less clear to an implementing organization such as the International Rescue Committee (IRC). Does the time-to-delivery measure start when the crisis occurs, or when the government assigns the IRC to work in a specific location, or when the IRC conducts a needs assessment in a particular community? Should the IRC count the time delays that are beyond our control?

Similarly, what should the end point be? If it is when the clients receive the cash transfers, how do we define “receive”? Is it when the clients receive the message via mobile money transfer, when the first client cashes out, or when the last client cashes out? Delays can also occur between when the message is sent and when the clients actually cash out; the mobile network might be down, or the agent could have run out of cash. In these scenarios, using client receipt of a message as the end point does not capture cash in hand.
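A toy sketch can make the stakes of these definitional choices concrete. All event names and dates below are invented for illustration; nothing here reflects an actual IRC program.

```python
from datetime import date

# Hypothetical timeline for one cash distribution (illustrative dates only).
events = {
    "crisis_onset":      date(2021, 3, 1),
    "location_assigned": date(2021, 3, 10),  # government assigns the agency a location
    "needs_assessment":  date(2021, 3, 18),
    "transfer_message":  date(2021, 4, 2),   # mobile-money message sent to clients
    "first_cash_out":    date(2021, 4, 3),
    "last_cash_out":     date(2021, 4, 15),
}

def time_to_delivery(start_event: str, end_event: str) -> int:
    """Days elapsed between the chosen start and end definitions."""
    return (events[end_event] - events[start_event]).days

# The same program yields very different figures depending on the definition:
from_crisis   = time_to_delivery("crisis_onset", "last_cash_out")       # 45 days
from_assigned = time_to_delivery("location_assigned", "first_cash_out") # 24 days
```

Nearly a twofold difference arises from the definitions alone, which is why comparing reported time-to-delivery figures across organizations requires knowing which start and end points each one used.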

The second metric, the cost of delivering cash transfers, is better documented, but measuring it is still often an afterthought, taken up only when the project ends. Although organizations can generate reasonably accurate estimates of the costs they incur themselves, other important costs, such as those borne by the community, are often left out of the cost-efficiency measurement. The value of a community’s time can have important implications for cost efficiency when, for example, deciding between a community-based and an agency-led approach. Factoring in the community’s time allows a more accurate estimate of the trade-off of switching from one method to the other.
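A small sketch, with all figures invented purely for illustration, shows how valuing the community’s time can change which delivery approach looks more cost efficient:

```python
def cost_per_dollar_delivered(agency_costs: float,
                              community_hours: float,
                              local_hourly_wage: float,
                              total_transferred: float,
                              include_community_time: bool = True) -> float:
    """Total cost per dollar of cash delivered, optionally valuing the
    community's time at the local hourly wage."""
    community_cost = community_hours * local_hourly_wage if include_community_time else 0.0
    return (agency_costs + community_cost) / total_transferred

# Hypothetical comparison: $100,000 transferred under two approaches.
# An agency-led approach spends more agency money but less client time;
# a community-based approach shifts effort onto the community.
agency_led      = cost_per_dollar_delivered(12_000, 500,   1.50, 100_000)  # 0.1275
community_based = cost_per_dollar_delivered(9_000,  4_000, 1.50, 100_000)  # 0.15
```

On agency costs alone the community-based approach looks cheaper ($0.09 vs. $0.12 per dollar delivered), but once the community’s time is priced in, the ranking reverses. The numbers are made up, but the point is not: whether community time is counted can determine which approach appears more efficient.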

In the interest of improving CTPs, what, when, and how we measure is critical, as is establishing concrete benchmarks. The scenarios discussed here require relief organizations, including the IRC, to make decisions on how to define these metrics. If humanitarian organizations are making different assumptions about how to measure the same things, then the comparison of performance across organizations, or even across programs within the same organization, is inaccurate. Greater transparency and documentation on performance and how performance is measured, even at the risk of greater scrutiny, would allow the humanitarian community to establish the benchmarks, create the space for constructive feedback, and effectively deliver CTPs for crisis-affected clients.

This project was funded with UK aid from the UK government.