Jan 01, 2013
By Sue Cavanaugh

Making Sense of Indicators

The collection of health indicators has increased in recent years in the pursuit of accountability and quality improvement. But an overabundance of measurements without a coordinated plan can lead to indicator chaos.

Pick up any newspaper on any given day and it’s a good bet at least one article will feature an indicator — maybe flu vaccination rates or the most recent wait times for MRIs. If it’s particularly compelling, the stat may even make the headlines. But what do all these numbers really mean? And how do we know when to pay attention to them?

The Canadian Institute for Health Information defines a health indicator as “a single measure (usually expressed in quantitative terms) that captures a key dimension of health, such as how many people suffer from chronic disease or have had a heart attack. Indicators also capture various determinants of health, such as income, or key dimensions of the health care system, such as how often patients return to hospital for more care after they are treated.” Commonly reported health indicators include BMI, smoking prevalence, diabetes rates, incidence of hospital-acquired infections, percentage of births that are caesarean sections, and infant mortality rates, to name just a few.
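
To make the definition concrete, here is a minimal sketch, in Python, of how two of the indicators above reduce raw counts to a single number. The counts are placeholders invented for illustration, not real data.

```python
# Illustrative only: the counts below are placeholders, not real data.

def rate_per_100(events: int, population: int) -> float:
    """Express a count of events as a percentage of the relevant population."""
    return 100.0 * events / population

# Caesarean sections as a percentage of all births (hypothetical counts).
c_section_rate = rate_per_100(events=2_800, population=10_000)

# 30-day readmissions as a percentage of hospital discharges (hypothetical counts).
readmission_rate = rate_per_100(events=850, population=9_500)

print(f"C-section rate: {c_section_rate:.1f}% of births")
print(f"30-day readmission rate: {readmission_rate:.1f}% of discharges")
```

Each result is a single number, which is precisely what makes indicators easy to report and, as the rest of this article argues, easy to misread without context.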

National summit calls for action

In 2011, the provincial health quality councils, along with national agencies such as Accreditation Canada and the Canadian Patient Safety Institute, held a national summit on the topic of indicator chaos. According to the summit report, Think Big, Start Small, Act Now: Tackling Indicator Chaos, participants supported a vision for a pan-Canadian health measurement system that is structured, transparent and accessible, and they identified key steps for working toward it:

Start with the patient. A nationwide system for developing and disseminating health-care indicators should be shaped and guided by patients’ needs, priorities and potential benefits.

Don’t talk. Act. Success in tackling indicator chaos demands quick action on two or three pilot projects to figure out how to set priorities, build frameworks for creating indicators and test the validity and usefulness of the indicators we develop.

Name leaders. Success depends on establishing a national consortium of dedicated stakeholders who will keep the momentum from the summit going. Choosing the people to do this work and providing a small, flexible secretariat to support them need to happen quickly.

Create a clearing house. Our collective failure to communicate is causing tremendous waste. Efforts are duplicated, good ideas aren’t shared and work done nationally may never filter down to people and organizations that could use it.

Agree on priorities. Indicator chaos comes from a lack of common priorities and coordination and the impossibility of planning without them. Always ask: Who are we measuring for? Why are we measuring this?

Source: Think Big, Start Small, Act Now: Tackling Indicator Chaos, Saskatchewan’s Health Quality Council, 2011. Used with permission.

Indicators can provide information about people, such as their life expectancy, or about systems, such as pharmaceutical drug coverage rates. Indicators are extremely useful for capturing what is happening, but they do not explain why those things are happening. Without a back story or context, indicators have no real significance.

Indicator collection has increased in recent years to satisfy the demand for accountability and quality improvement in the health-care system. But along with all of this measurement comes a challenge — a challenge experts are calling indicator chaos. The term refers to both the overwhelming amount of data being collected and the lack of a coordinated plan across the health system on what numbers to collect and how to interpret and use them. Without a plan, there can be duplication of effort and a waste of already scarce resources. Even worse, money may be spent on developing programs and services that aren’t useful or appropriate because the data have been interpreted incorrectly.

“It’s an issue everyone should be thinking about and aware of,” says John Abbott, CEO of the Health Council of Canada. “We have all these data coming at us, in both our professional and our personal lives, and we have to try to make sense of them so we can make the right decisions, whether in the workplace or in our own family.” Abbott gives the example of childhood obesity indicators. While most in the health sector, he says, agree on how to measure childhood obesity, mere knowledge of the national child obesity rate doesn’t tell us much. “We need to look behind the indicators to see what the contributing factors are.” A closer look, says Abbott, is likely to reveal that children from poorer families, lower-income neighbourhoods and aboriginal communities have much higher levels of obesity than children from urban, middle-class neighbourhoods. “It’s important to put the numbers in context; otherwise we’re masking the real issues and challenges, and we end up spending a lot of resources pitching the wrong message to the wrong audience.”

But that doesn’t mean we shouldn’t be measuring those numbers. “The indicator can be used to draw people into the conversation,” says Abbott. “We can let everyone know we have a problem with childhood obesity and within what specific circumstances and context. Then, we can target the appropriate communities and their support systems.”

Health-care organizations, and particularly their overloaded managers and staff who are tasked with collecting data, are on the front lines of indicator chaos. Some may lack the necessary expertise to properly analyze and interpret the information once it’s collected. One thing they all need to ensure is that they use the most appropriate indicators for their circumstances; a program manager has to look at specific data sets that will help her accurately determine whether the outcomes for clients are positive or negative and whether any changes to the program are warranted.

If organizations can’t answer why they are measuring something, they should question whether they should be collecting the data at all. This opinion was echoed by attendees at a colloquium on indicators, held at CNA last spring for leaders from many national health-care associations. There was agreement that the information indicators provide has to be relevant for all levels of the health system.

The Ottawa Neighbourhood Study

Since 2005, the Ottawa Neighbourhood Study has used a combination of health, environmental and safety indicators to provide public health officials, city planners and residents themselves with valuable information about the overall health of their neighbourhoods. The ongoing collaborative project, led by Elizabeth Kristjansson of the school of psychology and the Institute of Population Health at the University of Ottawa, has spawned programs and services tailored to the needs of individual neighbourhoods. The city’s community health and resource centres have created a strategy called No Community Left Behind, which helps residents develop solutions to address the issues identified in the study. One community was able to use the indicators to get support to create new playgrounds, reduce gang activity, make its streets safer and rejuvenate its community association.

In 2012, the study received the prestigious CIHR Partnership Award.

When indicators are developed from self-reported data, their reliability may be questionable. For example, the Canadian Index of Wellbeing, which tracks multiple quality-of-life indicators, found an interesting disconnect between self-rated health and clinical data collected by Statistics Canada in 2003. People from Newfoundland and Labrador had the lowest life expectancy and among the highest rates of diabetes and obesity in the country, yet they were the most likely of all Canadians to consider themselves in excellent or very good health. Residents of British Columbia, who had the longest life expectancy, the lowest rate of obesity and average levels of diabetes, rated their own health considerably lower.

In her job as a senior epidemiologist with Ottawa Public Health, Amira Ali analyzes and interprets data sets to ensure that public health programs and services are evidence informed and address local population health needs. She acknowledges that using self-reported measurements of health can be problematic. “For example, we know that self-reported BMI likely underestimates the true BMI. So we have to keep an error or inflation factor in mind and be conservative in how we use those types of data sets.” However, Ali believes there is a place for self-reporting: “If we didn’t allow for it, we would have next to no data on behavioural risk factors — because how else are we going to determine them?”
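
As a rough sketch of the kind of adjustment Ali describes, the example below computes BMI from self-reported height and weight and applies an inflation factor. The input values and the correction factor are hypothetical placeholders chosen for illustration; real analyses derive correction equations from studies comparing self-reported with measured values.

```python
# Minimal sketch of adjusting self-reported BMI with an inflation factor.
# The factor and input values below are hypothetical, for illustration only.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# People tend to understate weight and overstate height, so self-reported
# BMI usually comes out too low.
self_reported = bmi(weight_kg=78.0, height_m=1.80)  # placeholder self-reported values

HYPOTHETICAL_INFLATION = 1.05  # assumed 5% upward correction; not a published value
adjusted = self_reported * HYPOTHETICAL_INFLATION

print(f"Self-reported BMI: {self_reported:.1f}")
print(f"Adjusted estimate: {adjusted:.1f}")
```

The point is not the particular factor but the practice: treating self-reported figures as systematically biased estimates, and being conservative when using them.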

There’s no question that indicators can be manipulated for various purposes, political and otherwise; numbers are often thrown around to place blame for poor system performance or patient outcomes. Certainly, political parties have been known to interpret data selectively to score points or to bolster their own platforms. For example, if data showed that 25 per cent of Canadians had difficulty accessing health care, one politician might say this means that most Canadians do not have difficulty with access; another, that one in four Canadians have problems accessing care.

The father of modern indicators

In 1662, John Graunt published a booklet of indicators of the health and size of the population of London, U.K. Natural and Political Observations Mentioned in a Following Index, and Made upon the Bills of Mortality was the first publication of its kind. Using the christening and burial records (bills of mortality) for each parish in London for the previous 70 years, Graunt produced a clear picture of the health issues of the population by tracking the causes of death. In his findings, he argued for policies that would combine medicine and social responsibility with good government. One of his recommendations was that beggars should be taken care of with public funds.

Graunt was concerned about the reliability of the causes of death stated in the records, whether through the ignorance or the corruption of the recording clerks or the inspectors who determined the causes. For example, he was convinced that consumption was often recorded as cause of death to cover up for the far more shameful “infection of the Spermatick parts.” He also knew that the christening records did not accurately capture the number of births in a parish because Catholics and Puritans, in particular, were reluctant to have their children baptized into the Church of England.

Graunt also published a collection of the bills of mortality for 1665, the year of the Great Plague of London. Week by week, parish by parish, he tallied the rise of the plague, from its first real appearance in May (28 recorded cases) to June (340), July (4,400), August (13,000), September (32,300), October (13,300), November (4,100) and December (1,060).

For his work, Graunt was elected to the Royal Society of London — quite an achievement for a man who was a haberdasher by trade.

Ali has some advice on how to make sense of indicators reported in the media. First, look to see whether a source is provided. If the source is not reputable or is missing, be suspicious of the accuracy of the data. Second, keep an eye out for words like astronomical, skyrocketing and of epidemic proportions; over-the-top descriptors like these should be interpreted with caution. Third, look for the context. If no context is provided, the data should be disregarded or, at least, investigated further. “We often have people calling our information line after they’ve read a story that quotes an alarming statistic,” says Ali. “We make sure the public health nurses operating the lines have access to the right information to explain how the article was misleading or what the number really means.”

Sue Cavanaugh is a freelance writer in Ottawa.