The Lancet Global Health’s International Advisory Board

The Lancet Global Health’s recently announced International Advisory Board reminds us of another internet sensation: http://stuffwhitepeoplelike.com/.  That’s only because no one has started http://stuffbillgateslikes.com/ …at least, until this journal came along. Just to be clear, this advisory board doesn’t even rise to the level of reminding us of http://stuffexpataidworkerslike.com/.

We don’t normally play the identity politics game, but The Lancet Global Health’s International Advisory Board is just too laughable to pass up. The careful choice of academics “originally” from middle-income countries (and occasionally–gasp–even low-income countries) creates the appearance of representation from “The Global South.”

Yet our firsthand recollection of conference agendas suggests that most representatives on this list have typically spent more time in fancy Geneva hotels, on business-class flights to or from Seattle, and holed up in fancy Western universities than anywhere you’d find sick and/or poor people. Our first pass over the list suggests that at least 3/4 have mysteriously neglected to list their Gates/USAID/DFID affiliations. Shocking!

Still–there are a number of truly excellent academics represented, including several who are quite close to the authors of this blog. But these choices were rather clearly made to leave readers with a gestalt of diverse geographies, viewpoints, and perspectives being represented in this new journal. This group is not that, and it speaks to much larger problems of group-think in global health and of thought leaders who are divorced from the realities of patients’ lives. In other words, what looks like the perspective of South Africa or Pakistan boils down to that of the Gates Foundation. And perspectives that appear to come from India or Rwanda or Mexico or China in this group are really just Harvard academics. Yet despite being mostly brilliant, these people all think alike because they’re all at the same meetings, on the same planes, giving the same presentations to one another, and staying at the same expensive hotels year after year after year…in London and Beijing and Bangkok and Johannesburg and Seattle. Like the old “fat AIDS”/”slim AIDS” dichotomy, these panel members’ careers are about as distant from the lives of patients as humanly possible.  In our view, this makes suggestions that the journal will be different sound hollow.

So, is this panel–and this journal–likely to produce the sort of “grassroots” research that Zoe Mullan speaks of in the accompanying video? Or rather, is everyone involved–from the Advisory Board members to the beloved Richard Horton in the accompanying online video–portrayed to appear slightly darker than they are in reality?!  Is this journal actually being launched in Seattle at a Gates meeting?! Same as it ever was.

Staffing and Value for Money at The Global Fund

In a previous post, we mentioned the not-very-surprising fact that–at least according to a 2011 Global Fund Board Meeting document citing 2009 figures, which to us appear strikingly “behind” for an organization that at the time was in the midst of a major budget and governance crisis–The Global Fund spent only 37% of its US $21.9 billion portfolio on “the procurement and management of health commodities.”

Where does the other 63% of The Global Fund’s budget go, you might ask?

We hope to discuss The Global Fund at much greater length in future posts, and this post doesn’t begin to answer that question. But we did receive a document from The Global Fund’s 3rd Audit and Ethics Committee Meeting (30-31 October 2012) that discusses the Global Fund’s Office of the Inspector General (OIG).

The OIG is a small (but quite important, in our opinion) part of the overall Global Fund budget, and readers should also note that other internal/Board decisions may have been made since the October 2012 meeting document we’re referring to.

The OIG’s 2013 budget proposes a 15.7% reduction from 2012 funding for the office, an increase in in-house staff, and a decrease in reliance on contractors.  A total of 45 staff have been budgeted for 2013, including:

  • 5 FTEs for the Inspector General (IG) and a support team consisting of “Senior Legal Advisor, Communications Officer, Business Process Manager, Executive Assistant” at an average staffing cost of US $221,715.20 per FTE
  • 18 FTEs for the in-house Audit Committee at an average cost of US $242,316.33 per FTE
  • 22 FTEs for the in-house Investigation Committee at an average cost of US $185,586.14 per FTE

So the money is in auditing, rather than investigating!  Yet with a total 2013 budget of US $21.2 million, this office still has budgeted annual costs (amortized over the 45 FTEs planned) of:

  • travel costs averaging an impressive US $57,444.44 per in-house FTE
  • communications costs averaging US $2,888.89 per in-house FTE
  • IT costs averaging US $9,333.33 per in-house FTE
  • meeting costs averaging US $2,433.33 per in-house FTE

Even ignoring the costs of consultants and indirect costs–totaling US $7.4 million and US $1.07 million (respectively) for the entire office–these line items add an average of US $72,100.00 per FTE per year in travel, communications, IT, and meetings.  Perhaps the IT costs can be dismissed as data services of institutional value that aren’t tied to FTEs in that department, but the bulk of this figure is plain old travel.
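
For readers who want to follow the arithmetic, here is a minimal sketch in Python that reproduces the per-FTE and total figures above directly from the numbers in the meeting document; the grouping into “staff”, “per-FTE operating”, and “office-wide” buckets is our own shorthand, not The Global Fund’s.

```python
# Figures from the OIG's proposed 2013 budget (3rd Audit and Ethics Committee
# Meeting document); the category labels below are ours.
staff = {
    # unit: (budgeted FTEs, average staffing cost per FTE in USD)
    "IG and support team":       (5,  221_715.20),
    "Audit (in-house)":          (18, 242_316.33),
    "Investigations (in-house)": (22, 185_586.14),
}

per_fte_operating = {  # average annual cost per in-house FTE, USD
    "travel":         57_444.44,
    "communications":  2_888.89,
    "IT":              9_333.33,
    "meetings":        2_433.33,
}

office_wide = {"consultants": 7_400_000, "indirect costs": 1_070_000}

total_ftes = sum(n for n, _ in staff.values())            # 45 FTEs
staff_cost = sum(n * cost for n, cost in staff.values())  # ~US $9.55 million
operating_per_fte = sum(per_fte_operating.values())       # ~US $72,100 per FTE
total = staff_cost + operating_per_fte * total_ftes + sum(office_wide.values())

print(f"FTEs budgeted: {total_ftes}")
print(f"Travel/comms/IT/meetings per FTE: ${operating_per_fte:,.2f}")
print(f"Total office budget: ${total:,.0f}")  # roughly the US $21.2 million cited above
```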

We know firsthand how expensive living and working in Geneva–or retaining top talent there–can be in practice, but are these sorts of departmental budgets a good Value for Money proposition?

Is something wrong when the per-FTE direct costs in a department are about US $300K and management concludes that it’s nevertheless cheaper to expand in-house staff than to outsource more work to contractors?  And sure, extensive travel may be required for many of these roles, but that doesn’t mean it isn’t a potential cost-cutting opportunity.

The Global Fund’s New Funding Model certainly points to some important reforms, but we wonder if moving certain departments closer to the countries they serve, flying coach, or working as hard to reduce staffing costs as commodity costs might further improve The Global Fund’s overall Value for Money proposition.

Political discussions concerning The Global Fund tend to focus on corruption and the financial sustainability of HIV treatment, but numbers like these make us wonder about the financial sustainability of Global Fund staff salaries, which are not transparently collected and made publicly available at the same granularity as commodity price data.

Despite paying salaries averaging almost US $185K in the 2013 proposed budget, the in-house Investigation Committee still had nine unfilled vacancies last year and a significant backlog of cases.

Salud Mesoamerica 2015 (SM2015) and Conflicting Goals: Priority Setting for Global Health

This post was updated on March 23rd to fix a few minor typos (in blue) and link to a few thoughtful tweets we received in response.  In addition to acknowledging Amanda Glassman’s incredible efficiency at expressing complex thoughts in 140 characters (seriously!), we should have done a better job of emphasizing that our goal was mainly to highlight how related strategy goals can come into tension with one another when it comes time to implement and evaluate global health programs in general, rather than to pick any bones with SM2015.  Here are links to our Twitter responses.

Inspired Foreign Aid Reform, or Just Another Race to the Bottom?

The Center for Global Development recently blogged about a new public-private partnership known as Salud Mesoamerica 2015.  A collaboration funded on the donor side by Carlos Slim, Bill Gates, and the Government of Spain, this exciting initiative incorporates several popular policy ideas in global health.

We’re excited to see this program get off the ground and believe it may hold significant promise, but it also seemed like a timely opportunity to spotlight some of the ways conflicting policy objectives in global health can undermine strategy execution.

Our goal here is not to criticize this particular program or its design per se, but rather to point out how seemingly related programmatic goals can conflict with one another when put into practice.

Targeting the Poorest (Goal #1) and a Cost-effective Package (Goal #2)
The poorest populations (goal #1) are almost never the most cost-effective to target–and for the same reason, most cost-effectiveness data pertaining to services (goal #2) does not come from high-quality evaluations of programs targeting services to the poorest populations in developing countries.

CGD smartly points out that how to deliver interventions cost-effectively (goal #5) can be learned through iterative evaluations.  But the results of cost-effectiveness analyses and impact evaluations could still be quite misleading–and potentially for quite some time–if:

  • the effects of an intervention outlast the time horizon for program evaluation, but are not captured or estimated with uncertainty–as you might expect when only 18 months elapse between baseline and endline evaluations
  • the marginal cost of the intervention itself is tiny relative to the costs of delivering, managing, or evaluating interventions in general, and especially when rolling out an intervention quickly—as is arguably the case with The Global Fund, which only spends ~37% of its budget on actual commodities, including the second-line treatments that are “100 times or more as expensive as first-line regimens” as mentioned by CGD
  • an intervention is expensive to deliver because its coverage is low, whereas economies of scale can “bend the cost curve” in ways that may not be obvious from cost-effectiveness analyses, or are impossible to observe over an 18-month time horizon

Why are these trade-offs important?  The poorest populations are usually also the most expensive to target because they fundamentally lack access to resources.  Without first building or strengthening a functional “delivery channel” for interventions in general–which inevitably implies significant start-up costs and risks–many high-impact services will not be cost-effective to deliver, at least as long as priority-setting exercises assume the status quo.  And yet focusing on the most cost-effective interventions (and, by extension, avoiding investments in health systems whose “results” or “impact” may be much more difficult to predict or measure in a specific context) in order to establish a track record of “rapid impact” further undermines the investment case for longer-term, structural reforms to health systems, because all of these decisions are made on the margin, based on overlaps between donor priorities, government priorities, and patient priorities.  What’s cost-effective given this “sweet spot” of overlap is rarely the most cost-effective for governments in the long term.

Cost-effectiveness analyses can generate estimates for diarrhea interventions, for example, but the sensitivity of these estimates to context is often ignored, unknown, or underestimated.  Another issue is the role of underlying secular trends: cost-effectiveness estimates shift over the lifecycle of an intervention’s deployment, while the investment decisions are made upfront, typically based on data that are several years out of date.  Improving coverage of oral rehydration salts to reduce diarrhea mortality, for example, makes the investment case for sanitation improvements look weaker in comparison, based on analyses of survey data from 2005 or GBD estimates from 2010.  Investing in rotavirus vaccines–which target just one of ~70 known etiological causes of diarrhea–also weakens the comparative investment case for sanitation.  What PPPs relying on cost-effectiveness analysis often forget, however, is that improving coverage of oral rehydration salts also undermines the investment case for rotavirus vaccines over time, and not just the investment case for sanitation.
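
To make that interaction concrete, here is a toy calculation in Python in which every number is hypothetical (none of them are drawn from SM2015, GBD, or any actual vaccine program): the vaccine’s price and efficacy stay fixed, yet its measured cost per death averted deteriorates simply because background ORS coverage rises.

```python
# Toy illustration only -- every number below is hypothetical.
def vaccine_cost_per_death_averted(cohort, baseline_mortality, efficacy,
                                   cost_per_child, ors_coverage,
                                   ors_mortality_reduction):
    """Cost per diarrhea death averted by a vaccine, given background ORS coverage."""
    # Mortality that remains once ORS has already averted its share of deaths
    residual_mortality = baseline_mortality * (1 - ors_coverage * ors_mortality_reduction)
    deaths_averted = cohort * residual_mortality * efficacy
    return (cohort * cost_per_child) / deaths_averted

# Same vaccine, same price, same efficacy -- only background ORS coverage changes.
low_ors  = vaccine_cost_per_death_averted(100_000, 0.005, 0.5, 7.50, 0.2, 0.9)
high_ors = vaccine_cost_per_death_averted(100_000, 0.005, 0.5, 7.50, 0.8, 0.9)
print(f"cost per death averted: ${low_ors:,.0f} (low ORS) -> ${high_ors:,.0f} (high ORS)")
# In this toy example, raising ORS coverage from 20% to 80% roughly triples the
# vaccine's cost per death averted, even though nothing about the vaccine changed.
```

The same erosion runs in every direction: each intervention that scales up in the background quietly worsens the measured cost-effectiveness of the others.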

In other words, the interventions deemed cost-effective based on data from 2005 may no longer be the most cost-effective in 2013, and 18 months between baseline and endline is still plenty of time for the poorest to experience real effects of vaccine roll-out or even plain old economic growth, which can hit 7-10% annually in some developing countries and whose sub-national effects on the purchasing power of the poorest families might be even larger.  These hyper-local economic effects, and the geographical prioritization of the scale-up of other interventions, mean that even randomized evaluations–and especially cluster-randomized evaluations–may still be significantly biased and/or have limited external validity…potentially even within a country.

The effects of underlying secular trends on the cost, coverage, and effectiveness of competing interventions often mean that what appears cost-effective when an investment decision is made based on data from 2005 may be much less so by the time a program’s baseline data are collected in 2013, and may be diluted further by secular trends and competing interventions inside and outside the health sector between 2013 and 2015.

Targeting the Poorest (Goal #1) and Money Attached to Results (Goal #4)
A similar tension exists between targeting the poorest (goal #1) and attaching money to results (goal #4).  Performance-based financing is a powerful and potentially promising concept, but implementers (both before and after transition) will always be incentivized to prioritize the most cost-effective way to deliver the most cost-effective interventions, which (again) almost never involves reaching the poorest except to the extent that it’s mandated.  This leads us to another set of conflicting goals.

Incentives for Governments to Take Over the Job (Goal #3) and Targeting the Poorest (Goal #1)
Incentives for governments to take over the job (goal #3) and target the poorest (goal #1) imply what might be a politically implausible “theory of change” in the long run, because they assume that governments will face sustained incentives to spend public funds on their poorest citizens, enabling donors to “get out of Dodge”.

Are the poorest not being reached by health interventions because the government doesn’t know which interventions are cost-effective, because the government doesn’t have enough money to provide them, or simply because the poorest aren’t an important political constituency?  Even if the answer is “all three,” who will hold the government’s feet to the fire once donors have “gotten out of Dodge?”

The poorest are often politically disenfranchised, and getting half of a country’s contribution back for a successful program may not be a strong incentive when the counterfactual is the government spending almost nothing on the poorest to begin with.  A government accepting half-free or mostly-free aid money is not the same thing as creating and sustaining political pressure from its poorest citizens (or even non-poor citizens) to ensure that programs deemed a success continue.  Many foreign aid programs emphasize the accountability of developing-country governments to donors in the short run, which is not the same thing as those governments becoming and remaining directly accountable to their poorest citizens in the long run.

Cost-Effective Package (Goal #2) and Money Directly Attached to Results (Goal #4)
The goals of focusing on cost-effective intervention packages (goal #2) and attaching money directly to results (goal #4) also conflict.  Will the intervention packages selected (ignoring the non-additive effects of bundled interventions for now) be determined by cost-effectiveness and what can be delivered cost-effectively as described, or will governments and/or donors instead opt for the interventions or intervention packages whose results are easiest to measure over 18 months?  Will performance-based financing favor “soft” metrics like survey-reported behavioral observations on, say, bednet use–or will these programs gravitate towards “hard” outcomes like medical autopsies documenting reductions in mortality attributable to malaria (instead of, say, verbal autopsy)?

We certainly know which evaluations are cheaper, but do the Hawthorne effect and social desirability bias disappear from household survey evaluations if they’re independently conducted, especially if beneficiaries know that continued funding depends on their survey responses?  Is it even possible to hire an independent evaluator whose business model fundamentally depends on having additional programs to evaluate?  We wonder.

World’s Poor Thrilled by Gates Foundation Funding for Interactive Graphics of Their Suffering

Since the $105M given to start the Institute for Health Metrics and Evaluation in 2007 proved inadequate to publish the 2010 Global Burden of Disease on time, Bill Gates recently topped them off with another $8.2M–apparently to put the data into interactive charts or perhaps as a reward for only being two years behind schedule.

This new website saves those of us working in global health the few seconds it would take to produce PivotCharts in Excel, use the graph command in Stata, or graph data in R.

This amount of funding ($105M + $8.2M, or $113.2M total) also could have been used to do any of the following (the arithmetic is sketched just after the list):

  • Supply 226.4M courses of Oral Rehydration Salts at $0.50 per sachet, or 32M courses for each of the seven years since 2007
  • Hire 161 physicians paid $100,000 per year for each of the seven years since 2007
  • Procure lifesaving antiretroviral drugs for 80,000 people at a cost of $200 per patient per year for each of the seven years since 2007.
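
For anyone who wants to check our back-of-the-envelope math, here is a short Python sketch; the unit costs are the ones quoted in the list above, and the rounding choices are ours, not IHME’s or the foundation’s.

```python
# Unit costs as quoted in the list above; the arithmetic is simple division.
total_funding = 105_000_000 + 8_200_000        # $113.2M in total IHME funding
years = 7                                      # "each of the seven years since 2007"

ors_courses = total_funding / 0.50             # courses of ORS at $0.50 per sachet
physicians  = total_funding // (100_000 * years)   # physicians at $100K/year for 7 years
arv_people  = total_funding // (200 * years)       # people on ARVs at $200/patient/year for 7 years

print(f"{ors_courses/1e6:.1f}M ORS courses, or {ors_courses/years/1e6:.0f}M per year")
print(f"{physicians:.0f} physicians employed for all {years} years")
print(f"{arv_people:,.0f} people on ARVs for all {years} years (the post rounds to 80,000)")
```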

We’ve heard that the 32 million children who didn’t receive oral rehydration salts, the 80,000 people who didn’t receive antiretrovirals, and the tens of thousands of patients who weren’t able to see a physician are all delighted that the ability of wealthy researchers to measure their suffering has improved.

The world’s sickest and poorest have also informed us that they’re similarly glad to hear that American taxpayers–who subsidize these “charitable activities” in the form of tax deductions to the Gates Foundation and its mostly American grantees–will be able to maintain their moral distance through interactive graphics instead of having to bear witness to the reality of human suffering.

And at the end of the day, why would anyone want to focus on addressing human suffering when measuring it instead is so much more interesting?

Awkward Times at the Gates Foundation

This gem of a podcast series signifies that McKinsey or BCG or whoever is now milking the foundation for multimillion-dollar strategy consulting projects has apparently managed to drag the foundation’s leadership–no doubt kicking and screaming–into bumping up communications spending and shacking up with the Center for Effective Philanthropy.  It’s part of an effort to rehab the foundation’s reputation with its own grantees...only two years after commissioning a report on the problem, and only five years since it was first identified in a survey of grantees in 2008. And there’s some juicy data in a September 2012 “Progress Report” that we may have to discuss in more depth later.  But the 2012 “Progress Report” is clearly progress on the transparency front, especially since the data from the 2008 and 2010 surveys have never seen the light of day.

Despite some obvious attempts to imitate the structure of the CGD Global Prosperity Wonkcast–complete with the kitschy music clips between segments–the Gates Foundation podcast doesn’t do as well as CGD on substance.  The series puts some unlucky staffers in the difficult position of having to discuss the “sometimes opposing views” the foundation encounters from outsiders.  This requires the psychological gymnastics of attempting to convince grantees that they’re being listened to via a one-way communications medium, while simultaneously maintaining enough distance from the problems for their bosses to escape blame for ignoring the extent, spirit, and letter of criticism received thus far.

So how did the most important foundation in global health end up in such an awkward position? 

A foundation needing an internal communications campaign to convince its own grantees that this time it really is trying to become a better partner is a great example of the unintended consequences of institutional–or “soft”–corruption.  Deliberately or not, buying up (or intimidating) the vast majority of the relevant talent pool in consulting, “global health journalism”, and civil society capable of delivering critical feedback to the foundation and holding it accountable to its stakeholders backfires as a strategy when the institutional roles of “checks and balances” can only be played by individuals and organizations that are directly or indirectly dependent on the foundation for funding or political support.

The foundation sees no shortage of consulting firms willing to deliver these sorts of inconvenient truths to the management team much more quietly, gently, and expensively than an investigative journalist or a rival foundation with opposing views might.  But once the institutional ecosystem has been poisoned in this way (hopefully inadvertently), the foundation is ultimately left with few options for addressing the problem other than “fake it until you make it”…one podcast at a time. Yet internal buy-in for meaningful reforms to the way the foundation engages with global health grantees may prove much more difficult to sustain than these awkwardly upbeat podcast narrators suggest.

Especially after years of enormous outlays to major media outlets to outsource stealth PR in support of its global health grantmaking under the auspices of “global health journalism” (with conflict-of-interest disclosures that are almost completely lacking, even when highly relevant), an internally produced podcast series represents an acknowledgment that an organizational culture of secrecy and ineffective communication can still have a very real cost where the rubber meets the road:  the foundation’s own implementing partners.  Having waited so long to start paying attention to the transaction costs and other inefficiencies that these problems create for the foundation and its partners, the foundation may find it a lot harder to change course now.  If the only perspectives it can really consider are those of reluctant grantees whose jobs depend on keeping foundation staff happy, we’re all in a lot of trouble.

Perhaps more importantly, giving a few mildly sassy words from one of its grantees some airtime on a podcast for other grantees as an example of receptiveness to criticism shows that the foundation has failed to execute on the “learning organization” concept, which would require management to accept that the universe of useful perspectives on the foundation’s strategy, performance, and impact may somehow still be broader than the vast network of its current grantees.

The foundation’s commitment to the concept behind these podcasts is most likely genuine.  But once reform starts getting real–for example, once it becomes obvious that norms for communication and transparency with partners are driven by the management team, and someone suggests the wrong staffing replacement–this type of reform suddenly becomes much more difficult to sustain from within, by which point the consulting firms holding the foundation’s hand through the process will have wrapped up their contracts.

A rapidly vanishing pool of independent, unaffiliated voices may have achieved other objectives for the foundation, but those successes may have also come at the cost of any real source of external pressure for reform.  And if grantees can’t be trusted or incentivized to walk the delicate line on messaging between mea culpa and a twelve-step program, there may be many more podcasts to follow.

And we can’t resist some of the ironies here:

  • There appears to be no way to reach the podcast page directly from the foundation’s “For Grant Seekers” pages that are reachable via gatesfoundation.org, and the link was only sent to grantees–and quite possibly only select grantees.  The direct link to their SoundCloud page is here.
  • In typical Gates Foundation fashion, this podcast isn’t available through iTunes, where many of the world’s podcast listeners actually find and listen to podcasts.  SoundCloud streaming doesn’t work so well for grantees in developing countries with poor internet service, either…!  Even the downloads are a tough slog in implementing partner territory.
  • A related page accessible via gatesfoundation.org invites feedback via the Twitter hashtag #gateschat, but a search for it on Twitter reveals zero results.  Not much of a dialogue!
  • The web form for submitting feedback is probably a lot less anonymous than it’s made to appear (think IP address collection from countries where there are only a handful of grantees), but their use of EthicsPoint is interesting (albeit mostly meaningless without transparency to the outside world).