Does the name “Nate Silver” ring a bell? It should: he is the statistician who correctly predicted the winner in all 50 states and DC for last fall’s presidential election.
Before that, he developed a powerful system for predicting the performance of Major League Baseball players. And he’s at it again: forecasting which team will win the NCAA basketball tournament (read on if you want an advantage in your office pool!). But the point of my article this month is not about predicting sports or political races – it’s about the predictive power of data in general, and the impact it can (and will) have on organizational and community decision making…
In any type of organization, data are everywhere. Think about it: for any given organization, there is customer-related information (expectations, satisfaction, engagement, loyalty, complaints, service levels, social media likes, referrals), market-related information (market share, competitor positions, buying behaviors), financial information (revenue, margin, profitability, cash flow, and all sorts of ratios), operational information (productivity, cycle time, errors, waste, safety…just to name a few), workforce-related information (skill acquisition and training effectiveness, satisfaction, engagement, attendance, injuries), and the list could go on. Furthermore, data can be sliced and diced in many ways — by customer segment, by market, by service line, by department, by work site, by shift, by division, by employee segment, by process.
Indeed, most organizations do not lack data. But most also struggle to turn that data into information that is useful in decision making, optimizes resources, and truly improves outcomes. And very few organizations effectively use data to predict the future — to anticipate customer or employee needs; to foresee market trends before the competition; to detect changes in technology and other environmental factors; in short, to “see around corners.” In many ways, however, that’s what sets pretty good organizations apart from excellent ones.
At an event a few weeks ago (as reported by Forbes), IBM CEO Ginni Rometty said that the use of data (and the technology that supports it) will transform the future of business. Rometty predicted that data will be the basis of competitive advantage going forward — the “next big natural resource” for enterprises. She believes it will change how decisions are made and how value is created and delivered. Rometty: “…decisions will be based [more] on predictive elements versus gut instincts.” She claims that today, even in scientifically oriented fields, decisions are still being made based on “anchoring biases” — that leaders interpret information through their own lenses, which introduces subjectivity and preconceptions.
But this is changing. Rometty cited examples of seemingly disparate data being connected, analyzed, and used to inform decision making and to solve real problems. In one example, Rometty shared how IBM is using technology in collaboration with the Memphis police department to study correlations between the incidence of rape and access to outdoor payphones (something I didn’t even know still existed!). Moving payphones indoors reduced these crimes by 30%. That’s an example of a “Smarter Planet” in my opinion! I know similar initiatives are taking place in Minnesota, as Minneapolis, Duluth, and perhaps other police forces are studying crime patterns and trying to predict – right down to city blocks – where the next crime will take place, so they can prevent it from occurring.
Think of the implications that predictive analytics could have — the impact on individuals and the major community problems that could be addressed. Some of them are mentioned by Eric Siegel in his book, “Predictive Analytics.” As a society, we could better predict (and therefore react to and/or proactively manage):
- Crime—The causes, the neighborhoods most at risk, and the times of day, days of week, seasons, or economic cycles when crime peaks.
- Educational attainment—Such as the probability of graduating (or, better yet, the chance of success later in life) for kids who read by age five (or three, or some other number); the factors that make certain teachers more successful than others; the impact class size has on student achievement and the achievement gap; and so forth.
- Weather—Not just better forecasting to suggest whether you need a jacket or umbrella, but how, when, and where major catastrophes (hurricanes, floods, tornadoes, droughts) might occur, and how weather patterns affect a business’s costs, transportation, and demand for products.
- Economics—Such as predicting where emerging jobs/skills are needed, what impact tax changes have on employment and employer decisions (as well as the overall economy), how other political policy changes impact business.
- Individual and community health—Such as the risk of death from surgery; the risk of disease or injury given certain lifestyle choices (and/or diet, family history, or genetic predisposition); the probability of suicide given certain psychiatric profiles; and the way diseases and contagious illnesses spread.
On the latter, I saw an article recently in the Washington Post that explored how epidemiologists and public health officials are beginning to use Twitter to track where diseases are breaking out and how (and how fast) they spread. What once took the CDC weeks to report could be uncovered on Twitter in literally minutes or hours.
The benefit of predictive analytics to society is significant. We’re even beginning to label these large, complex, and seemingly disparate data sets (and the work of analyzing and synthesizing them) “Big Data.” Think cures for cancer and other diseases, reductions in crime, increases in economic vitality and job creation, reductions in the educational achievement gap — major problems being solved because of new insights generated from new ways of understanding data.
But I believe that organizations can also benefit from thinking about the data and information they already have at their disposal in a different way, and thus greatly improve decision making, innovation, and all sorts of enterprise outcomes.
Take, for example, the recent home run by Netflix. In the TV business, there is no such thing as a “sure thing” — each show carries a certain amount of risk. But Netflix, which has 27 million subscribers in the US (and 33 million worldwide), did some interesting analysis of the behaviors of that large pool of customers. It found that director David Fincher’s work usually had a good following, that films featuring Kevin Spacey usually did well, and that the British version of the show “House of Cards” also did well. Those three pieces of data suggested that buying the “House of Cards” series would be a good bet. It was.
As summarized by David Carr of the NY Times: “While careers and entire networks have been made and lost based on the mysterious alchemy of finding a hit, Netflix seems to be making it look easy, or at least making it a product of logic and algorithms as opposed to tradition and instinct.”
I think that logic can apply to just about any organization in any industry trying to improve any outcome.
A few weeks ago, I was in a conversation with a member organization of the Performance Excellence Network, exploring ways they could improve performance. One of the key findings our Performance Excellence Award Evaluator team offered was the potential to better use the data they already collect. Specifically, our team recommended that this organization think through the cause and effect relationships in the metrics they already use — to identify the links between their leading and lagging indicators, so that they might better predict outcomes by studying upstream metrics.
Imagine for a moment, for example, that your organization could accurately predict next year’s bottom line by “connecting the dots” between an investment in a new employee development initiative and its eventual financial impact: the initiative would improve employee productivity, which would impact employee morale, which would impact employee satisfaction, which would impact customer service levels, which would impact customer satisfaction, which would impact customer retention, which would impact sales, which would impact profit! You could make similar predictive hypotheses for other strategic initiatives, such as a new product line, new technology, a specific process improvement, a new market, and so forth.
If you could better understand the linkage between measures, you’d be able to better predict how “upstream” measures impact “downstream” outcomes. Further, you could more precisely influence the ultimate outcome by focusing more on leading indicators and intervening earlier, where your actions have more impact.
In short, leaders could use data to “see around corners” — to predict future outcomes and therefore design interventions before problems occur. Leaders would know which buttons to push today to create changes in behaviors, and therefore outcomes, in the future. Decisions — by leaders and by all employees throughout an enterprise – would improve, because they would be based on facts rather than intuition and gut.
What do organizations need to do to better leverage the power of predictive data? A few things come to mind:
- First, organizations need to have a good grasp on their strategy — they need to know where they currently are (in terms of performance) and where they’d like to go (in terms of vision and direction).
- Then, organizations need to spend time really thinking through the metrics they NEED (not necessarily the ones they conveniently have) to monitor progress against their strategy. For the sake of simplicity, I’ll call these “outcome measures” — and they should be focused on several domains, such as financial/marketplace performance, customer-related performance, product/service/program-related performance, workforce performance, leadership and governance performance, and operational performance.
- Then, organizations need to think about what leading indicators would help predict those outcomes — on a process and/or behavioral level. In other words, answer a question like this: “for my organization to achieve X outcome, we would need to see progress in metrics A, B, and/or C.” In this step, you are creating a set of hypotheses — connecting outcomes to leading indicators. If you can use correlation or regression analysis or some other type of predictive analytics, fantastic. But if not, just use your best judgment to draw those connections.
- And then test them — test those assumptions first in a small-scale pilot by implementing changes that affect processes, behaviors, and/or other interventions, and seeing if the leading indicators really do move the dial on lagging outcomes. Learn from those tests (were the hypotheses correct or not?), adjust, and then roll out the changes on a broader scale.
Most organizations (and most communities) have no shortage of data. However, what oftentimes is missing is the ability to turn that data into useful information, not just to better understand current performance but to predict — and manage — future outcomes. Using the data you have in different ways will help leaders predict, innovate, solve problems, and make better decisions. Ultimately, using data in a different way will lead to much improved outcomes.
And, oh, by the way: Nate Silver predicted that Louisville will win this year’s NCAA tournament (a 32.4% likely outcome), followed by Florida (21.3%), Indiana (10.9%), Ohio State (6.8%), and Duke (6.0%). Looks like my bracket still has a small chance!
Want to participate in a discussion on this topic? Visit our LinkedIn group and/or post a comment below!
Yours in Performance Excellence,
Brian S. Lassiter
President, Performance Excellence Network (formerly Minnesota Council for Quality)