OR/MS Today - April 2006

Probability Management — Part 2

Small models linked through their input and output distributions create coherent networks of models that identify enterprise-wide risks and opportunities.

By Sam Savage, Stefan Scholtes and Daniel Zweidler

In the first article in this series [1] we presented the seven deadly sins of averaging. To counter them, we introduced the concept of Probability Management, which focuses on estimating, maintaining and communicating the distributions of the random variables driving a business. We presented the three underpinnings of probability management as follows:

1. interactive simulation: illuminates uncertainty and risk much as light bulbs illuminate darkness.

2. centrally generated stochastic libraries of probability distributions: provide standardized probability distributions across the enterprise, much as the power plants provide standardized sources of electricity to light bulbs.

3. certification authority: analogous to the power authority that ensures that you get the expected voltage from your wall socket. We refer to the person or office with this authority as the Chief Probability Officer or CPO.

In this article, we discuss each of these areas in more detail, and then finish with a short discussion of the potential for Probability Management in regulation and accounting.

Interactive Simulation

Analysts in academia and industry have never been shy about creating large and complex models, but they often fail to address how senior executives are going to interact with them. This is particularly difficult when the output of the model is probabilistic.

In "Action in Perception" [2], the philosopher Alva Noë argues that without action on the part of the observer, there can be no perception. He describes an experiment in which two kittens are presented with the same visual environment, but only one of the two can interact with it — by walking on a turntable. The other is suspended just above the turntable. By the end of the experiment, the suspended kitten has not learned how to process visual information and is effectively blind. No wonder managers have so much difficulty understanding and communicating uncertainty and risk. After all, how do you interact with a probability distribution?

Interactive simulation may be the answer. The "exploration cockpit" at Shell, described in our earlier paper, allowed managers to select or deselect projects with a mouse click. The resulting portfolio was then driven through repeated copies of Excel formulas, where each repetition was driven by a separate row of pre-calculated Monte Carlo trials in the stochastic library. The statistical properties of the portfolio were immediately apparent through the graphical interface. If, however, a sensitivity analysis calls for changing underlying econometric parameters such as the future distribution of oil or gas prices, the stochastic library and the associated universe of portfolios have to be regenerated. This procedure is currently too slow from a computational standpoint to qualify as interactive in a decision-making setting. It is hoped that the new simulation technology described below, coupled with ever-faster computers, will expand the envelope of interactive exploration.
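The mechanics of such a cockpit can be sketched in a few lines. In this hypothetical Python sketch, the "library" is a matrix of pre-calculated NPV trials for five imaginary projects (all project names, means and spreads are invented, not Shell's data); toggling a project merely re-sums columns of the library, which is why the response feels instantaneous:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stochastic library: 10,000 pre-calculated Monte Carlo
# trials of NPV (in $M) for each of five candidate projects.
# All distributional parameters here are invented for illustration.
n_trials, n_projects = 10_000, 5
npv_trials = rng.normal(loc=[10, 20, 5, 15, 8],
                        scale=[4, 12, 2, 9, 3],
                        size=(n_trials, n_projects))

def portfolio_stats(selected):
    """Given a boolean selection of projects, sum the pre-calculated
    trials row by row. No new simulation is run, so the response is
    effectively instant."""
    totals = npv_trials[:, selected].sum(axis=1)
    return totals.mean(), totals.std(), np.percentile(totals, 5)

# Toggling a project on or off just re-sums columns of the library.
mean, sd, p5 = portfolio_stats(np.array([True, True, False, True, False]))
```

The design point is that the expensive step (generating the trials) happens once, centrally; the interactive step is a cheap row-wise sum.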

Another firm using interactive portfolio simulation is Bessemer Trust of New York. Bessemer has a model for displaying the implications of various wealth management strategies for its clients. According to Andrew M. Parker, managing director and head of Quantitative Strategies at Bessemer, "one significant drawback with most simulation software is that it can be time consuming. This can overwhelm the potential to easily compare and contrast different scenarios. Having an interactive model dramatically solves this problem."

Although the interactive portfolio models at Shell and Bessemer have proven successful, they were complex to develop and maintain. Furthermore, it would not be easy to generalize the approach beyond the modeling of portfolios. This appears to be about to change.

One of the founding fathers of the spreadsheet revolution has developed technology that automates the process of interactive simulation. Dan Fylstra, CEO of Frontline Systems and co-developer of VisiCalc, has introduced technology (see story on page 62) that almost instantly runs thousands of Monte Carlo trials every time you modify an input to a normal spreadsheet model. This will allow a large managerial audience to start interacting with — and hopefully sharpening their perception of — probability distributions.

Stochastic Libraries

Interactive simulation makes Monte Carlo simulation so effortless that virtually any Excel user should be able to master it. But simulations without acceptable input distributions are like light bulbs without electricity. Only a few people within an organization have the expertise to estimate probability distributions, and even fewer have the managerial authority to get their estimates accepted on an enterprise-wide basis. For this reason, the authors expect interactive simulation to reach its full potential only in organizations that invest in the capability to generate and manage probability distributions centrally. The entire discussion assumes that the distributions and business models involved have statistical properties that ensure that simulations will converge. Although in theory there are examples where this is not the case, it is rare to find them outside of a class in probability theory.

Coherent modeling — preserving relationships.

In the last article, the authors presented their coherent modeling approach to managing stochastic libraries. This offers the benefits of enterprise-wide modeling of statistical dependence, the roll-up of probability distributions between levels of an organization, and a stochastic audit trail.

The multivariate distributions driving the firm are stored in a Stochastic Library Unit with Relationships Preserved, or SLURP. In its simplest form, this is a matrix of pre-generated Monte Carlo trials, with one column for each uncertain business driver and one row per trial.

Demographers use SLURPs as a matter of course. They call them "representative samples." A representative sample of, say, 10,000 U.S. citizens can be used to generate a SLURP for such quantities as income, education, family size, voting behavior, etc. — with all relationships preserved. One can think of a SLURP for business planning as a "representative sample" of the possible futures.
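In code, a minimal SLURP is nothing more than such a matrix. The following Python sketch is illustrative only: the two business drivers (oil price and S&P 500 growth) and their joint distribution are invented, but the point is that the dependence between the drivers lives in the rows themselves:

```python
import numpy as np

rng = np.random.default_rng(1)

# A minimal SLURP: one column per uncertain driver, one row per trial,
# with the dependence between drivers baked into the rows.
# The drivers and their joint distribution are invented for illustration.
mean = [60.0, 0.08]            # oil price ($/bbl), S&P 500 growth
cov = [[100.0, -0.5],          # negatively related, for illustration
       [-0.5, 0.01]]
slurp = rng.multivariate_normal(mean, cov, size=10_000)

# Any model reading rows of this matrix sees oil price and S&P growth
# move together exactly the way the library says they do.
sample_corr = np.corrcoef(slurp[:, 0], slurp[:, 1])[0, 1]
```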

Modeling dependence: We come to bury correlation, not to praise it.

The simplest sorts of statistical relationships are measured by covariance or correlation, and in fact these terms have become synonymous with statistical dependence. However, there are many other types of relationships that can be represented in a SLURP. For example, Figure 1 displays the scatter plot of a SLURP of two random variables with a correlation of only 0.075, extremely low. Yet a relationship clearly exists and has been preserved in the SLURP. In practice, structural econometric models may be used to generate SLURPs with more complex relationships than the linear one implied by correlation.

Figure 1: The scatter plot of two uncorrelated variables.
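A textbook construction makes the point concrete. In this sketch (a standard example, not the data behind Figure 1), one variable completely determines the other, yet their correlation is near zero; stored side by side as columns of a SLURP, the relationship survives intact:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two variables with a strong relationship but near-zero correlation:
# y is completely determined by x, yet the linear measure sees nothing,
# because cov(X, X^2) = 0 for a symmetric distribution of X.
x = rng.normal(0.0, 1.0, 10_000)
y = x ** 2
corr = np.corrcoef(x, y)[0, 1]   # close to 0

# Stored as columns of a SLURP, the V-shaped relationship is preserved
# trial by trial, even though correlation alone would miss it entirely.
slurp = np.column_stack([x, y])
```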

Time, the third dimension.

If the input uncertainties are time series, then it is convenient to represent the SLURP as a three-dimensional data structure analogous to a cube. Consider a model that takes as inputs the average annual oil price and GDP over each of the next five years. The SLURP has a column for oil price, a column for GDP, a row for each trial, and a third dimension for the five time periods (see Figure 2).

Figure 2: The SLURP as a three-dimensional data structure analogous to a cube.

There are relationships between oil price and GDP, and oil price from one time period to the next. One trial is a rectangular "slice" in this three-dimensional cube, with oil price and GDP defining one side, and time defining the other.
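Such a cube is straightforward to build. The dynamics in this Python sketch are invented purely for illustration (a random walk for oil and a weak negative GDP response to oil shocks); what matters is the shape of the data structure and the fact that one trial is a rectangular slice:

```python
import numpy as np

rng = np.random.default_rng(3)

# Three-dimensional SLURP: trials x variables x time periods.
# The dynamics below are invented purely for illustration.
n_trials, n_vars, n_years = 10_000, 2, 5
cube = np.empty((n_trials, n_vars, n_years))

oil_path = np.full(n_trials, 60.0)       # starting oil price, $/bbl
for t in range(n_years):
    shock = rng.normal(0.0, 8.0, n_trials)
    oil_path = oil_path + shock          # oil follows a random walk
    # GDP growth responds weakly (and negatively) to the oil shock,
    # preserving a relationship between the two columns in each year.
    gdp_path = 0.03 - 0.0005 * shock + rng.normal(0.0, 0.01, n_trials)
    cube[:, 0, t] = oil_path
    cube[:, 1, t] = gdp_path

# One trial is a rectangular 2 x 5 "slice" of the cube: both variables
# across all five years.
one_trial = cube[0]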

Coherence and the fundamental identity of SLURP algebra.

A SLURP is said to be coherent, in that the statistical relationships between variables are preserved. Furthermore, this property of coherence propagates through models. For example, consider Division A of a firm that wishes to project revenue and costs one year ahead. Their spreadsheet business model is relatively simple in structure, but quite sensitive to both the future price of oil and level of the S&P 500. Assume that the CPO of the organization has developed a SLURP of 10,000 trials of oil price and S&P growth for the following year. This might be accomplished through a combination of structural econometric modeling and observed derivative prices [3]. When this SLURP is run through Division A's business model, it results in 10,000 pairs of revenues and costs. But these revenue/cost pairs are a SLURP in their own right because of the propagation principle (see Figure 3).

Figure 3: Through propagation, 10,000 revenue/cost pairs become a SLURP in their own right.
As a consequence of coherence, separate divisions of a firm can each build stochastic models of their own business metrics, whereupon the output SLURPs can be merged into a central model that calculates enterprise-wide metrics (Figure 4).

Thus, stochastic models may be rolled up to higher levels. SLURPs can in theory be propagated horizontally across hierarchies of organizations, vertically through supply chains, and dynamically forward in time.

We summarize this in what we call the fundamental identity of SLURP algebra as follows.

Let X = (X1, ..., Xn) be a vector of uncertain inputs, represented by SLURP S(X), and let Y = (Y1, ..., Ym) = F(X) denote the outputs of a model, F, that depends on X.

The SLURP of the outputs of F is found by evaluating F for each of the trials in the SLURP of X, or symbolically, S(F(X)) = F(S(X)).

The crucial argument is simple: The output SLURP F(S(X)) inherits the sample property from the input SLURP S(X), i.e. if all trials in S(X) have the same probability of occurring, then so do all trials in F(S(X)).

Figure 4: Output SLURPs merged into a central model that calculates enterprise-wide metrics.
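The identity can be verified numerically. The following Python sketch uses an invented joint distribution for oil price and S&P growth and a toy Division A model (all coefficients are illustrative, not from the article); evaluating the model once per trial yields 10,000 equally likely revenue/cost pairs, i.e. an output SLURP:

```python
import numpy as np

rng = np.random.default_rng(4)

# Input SLURP from a hypothetical central library: oil price and
# S&P 500 growth for next year (joint distribution is illustrative).
slurp_in = rng.multivariate_normal([60.0, 0.08],
                                   [[100.0, -0.5], [-0.5, 0.01]],
                                   size=10_000)

def division_model(oil_price, sp_growth):
    """Toy Division A model: revenue tracks the S&P, costs track oil.
    The coefficients are invented for illustration."""
    revenue = 500.0 * (1.0 + sp_growth)
    cost = 300.0 + 2.0 * oil_price
    return revenue, cost

# Evaluate F for each trial in S(X). The output rows are equally
# likely because the input rows were, so the 10,000 (revenue, cost)
# pairs form a coherent SLURP in their own right: S(F(X)) = F(S(X)).
revenue, cost = division_model(slurp_in[:, 0], slurp_in[:, 1])
slurp_out = np.column_stack([revenue, cost])
```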

This identity is in stark contrast to the strong form of the flaw of averages (closely related to Jensen's Inequality), which states that E(F(X)) ≠ F(E(X)), where E(X) is the expectation of X and F is a non-linear function. It is this inequality that leads to many of the systematic errors embodied in the seven deadly sins of averaging, when single numerical values are propagated through an organization. Thus, the use of SLURPs cures the flaw of averages.
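The inequality is easy to exhibit with a toy capacity-constrained model (the demand distribution and capacity are invented for illustration): plugging the average demand into a nonlinear model overstates expected profit, while running the full distribution through the model does not.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear model: sales are capped by capacity, a classic source
# of the flaw of averages. Demand distribution is illustrative.
capacity = 100.0
demand = rng.normal(100.0, 30.0, 100_000)

def profit(d):
    return np.minimum(d, capacity)   # cannot sell more than capacity

# F(E(X)): plugging average demand into the model hits the capacity.
f_of_mean = profit(demand.mean())

# E(F(X)): averaging the model over the demand distribution is
# noticeably lower, because shortfalls hurt but surpluses cannot help.
mean_of_f = profit(demand).mean()
```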

The Chief Probability Officer

You won't yet find this title in corporate organization charts, but some managers are already playing the role, and more will undoubtedly follow. A pragmatic trade-off between complexity and practicality must be applied to developing and certifying a firm's stochastic library. There are decisions in which a distribution of an uncertain business parameter, even if inaccurate, provides valuable insight. For example, when you shake a ladder to test its stability, you are essentially simulating the behavior of the ladder using a distribution of forces that differs from that when you actually climb on it. Nevertheless, you would be foolish to stop shaking ladders now that you have discovered you have been using the wrong distribution all these years. It is in this spirit that we encourage aspiring CPOs to be as precise as possible in estimating distributions. However, where precision is not possible, instead of reverting to point estimates, consider driving corporate models with either a less than accurate distribution, or simply through scenario analysis without any reference to probability [4]. The experiences at Shell and Bessemer are illuminating.

At Shell, the stochastic library had to be assembled from vast amounts of data gathered worldwide. The first decision was the level of granularity at which to model projects. The level chosen was the "exploration venture," which included a number of projects within a single geographical region. As the first step towards creating a stochastic library, the exploration engineers within each venture were responsible for providing initial estimates of the distribution of oil and gas volumes in that venture. When assembling distributions of possible hydrocarbon volumes and economic value of exploration, it is important to acknowledge the consequences of the "Prospector Myth" as described by Pete Rose and Gary Citron [5]. Explorers by their very nature are not only very optimistic, but also often fail to recognize the full range of possible outcomes of an exploration venture. Painting the numerical picture of an exploration venture and its various execution alternatives is a mélange of art and science underpinned by experience.

The distributions of hydrocarbon volumes were assumed to be independent across ventures. In contrast, the economic evaluations of the ventures have strong relationships resulting from global oil and regional gas prices. The volumetric distributions were converted to coherent distributions of economic output by using discrete distributions of oil and gas prices and associated drilling and development cost assumptions. For the economic evaluations, the input parameters are distributed globally through a shared library updated on an annual basis.

To provide assurance that the ventures and their execution alternatives are not only feasible as described, but also portray the cost and value elements appropriately, seasoned explorers and economists review the input to the coherent simulation that generates the stochastic library of outcomes for exploration ventures and their alternative execution plans. They also engage in further dialog with the engineers and managers in the field to ensure consistency across ventures.

At Bessemer, the situation was quite different. First, with financial portfolios there is rich historical data and a number of accepted approaches to modeling asset growth. The second difference was that the ultimate consumers of the information derived from the simulations were Bessemer's individual clients.

"In the wealth management business, it's extremely important to assure that clients understand the risk in their investment portfolios," says Parker, "and the only way to do this effectively is to use probabilistic modeling. To this end, having a centrally managed process with a shared library of asset distributions assures uniformity across the organization." Parker periodically updates this library, and distributes it to others in the organization to use in the simulation models that he also oversees. "This allows our client account managers to give robust, consistent answers without requiring a deep knowledge of statistics," Parker adds.

Probability Management in Regulation and Accounting

One typically thinks of simulation and stochastic analysis as pertaining to the core areas of management science, in particular production and finance. However, if the concepts and technologies behind probability management take root, they might eventually have an even more dramatic impact in the areas of financial regulation and accounting.

Regulators of financial institutions and other organizations are concerned not only with the stability of individual firms within a given industry, but also with the stability of the industry as a whole. Establishing coherent benchmark distributions of global economic factors would provide a uniform basis against which firms could be stochastically compared.

After the Enron fiasco, the U.S. Congress moved to increase transparency into the risks faced by publicly traded firms. The resulting Sarbanes-Oxley legislation [6] mandated tighter adherence to Generally Accepted Accounting Principles (GAAP). Unfortunately, GAAP itself is permeated with examples of the flaw of averages [7, 8]. Although the accounting industry by its nature does not change quickly, there may be opportunities in this area for those with training in accounting, law and stochastic modeling [9].


As Terri Dial, CEO of Lloyds' retail bank, puts it: "P&L statements help to manage historically; business models help to manage currently." Yet too often, management science models, in their fixation with the right answer, grow so complex and rigid that they cannot keep up with current events. To manage "currently," the authors believe that asking the "right question" is more important than seeking the "right answer." Rather than a department of computer programmers devoted to building one big deterministic model of the enterprise, what is needed is a management culture that embraces the creation of many small stochastic models as a way of asking questions. We like to think that probability management will ultimately allow such small models to be linked through their input and output distributions into coherent networks of models that illuminate enterprise-wide risks and opportunities.


  1. Savage, Scholtes and Zweidler, 2006, "Probability Management," OR/MS Today, Vol.33, No.1 (February 2006), www.lionhrtpub.com/orms/orms-2-06/frprobability.html
  2. Noë, Alva, 2004, "Action in Perception," The MIT Press.
  3. Melick, William R., and Thomas, Charles P., 1997, "Recovering an Asset's Implied PDF from Option Prices: An Application to Oil Prices During the Gulf Crisis," Journal of Financial and Quantitative Analysis, Vol. 32, No. 1 (March 1997).
  4. Schwartz, Peter, 1991, "The Art of the Long View: Planning for the Future in an Uncertain World," Doubleday.
  5. Rose, P. R. and G. P. Citron, 2000, "The Prospector Myth vs. Systematic Management of Exploration Portfolios: Dealing with the Dilemma," Houston Geological Society Bulletin (October 2000).
  6. www.aicpa.org/info/sarbanes_oxley_summary.htm
  7. Johnson, L., Robbins, B., Swieringa, R. and Weil, R., 1993, "Expected Values in Financial Reporting," Accounting Horizons, Vol. 7, pp. 77-90.
  8. Savage, S.L. and Van Allen, M., 2002, "Accounting for Uncertainty," Journal of Portfolio Management (Fall 2002).
  9. Savage, S.L. and Van Allen, M., 2006, "The Flaw of Averages in Law and Accounting," "Litigation Services Handbook: The Role of the Financial Expert, 4th Edition," published by John Wiley & Sons (Spring 2006). Editors: Roman L. Weil, Michael J. Wagner, Peter B. Frank, Christian Hughes.

Sam Savage is a consulting professor of management science and engineering at Stanford University, a collaborator on the spreadsheet optimization package What's Best, and founder and president of AnalyCorp Inc., a firm that develops executive education programs and software for improving business analysis.

Stefan Scholtes is a professor of management science and director of research of the Judge Business School at the University of Cambridge. His theoretical research interest in mathematical programming is complemented by applied work that seeks to help managers and engineering designers in their understanding of system values in a complex, uncertain and dynamic environment.

Daniel Zweidler is head of global exploration planning and portfolio for Shell, where he helps define the exploration investment case for Shell, merging regional exploration realities and imperatives with new country access opportunities and the competitive landscape. He is responsible for delivering the global exploration and EP growth business plan.

    OR/MS Today copyright 2006 by the Institute for Operations Research and the Management Sciences. All rights reserved.
