Sunday, June 2, 2019

Moonshot


Moonshot is a term associated with a herculean undertaking of epic proportions and/or profound significance.  A moonshot typically requires marshalling vast resources sustained over time to achieve the result.  The 20th of July 2019 commemorates the 50th anniversary of mankind setting foot on the moon – and returning safely to Earth – the quintessential moonshot.  Reviewing some of the resources required to put a man on the moon helps define what a moonshot entails.
The National Aeronautics and Space Administration (NASA) achieved the goal for the United States to reach the moon.  The moonshot began in 1961 and, including the six Apollo space flights following the successful 1969 Apollo 11 mission, concluded in 1972 as NASA “shifted emphasis from manned space exploration-typified by Apollo-to space activities focused on direct practical down-to-earth benefits…” (Fletcher, 1973, p. 4).  The NASA expense to accomplish the moonshot was about $25 billion, equating to about $146 billion in 2019 dollars (Official Data Foundation, 2019).  NASA estimated that the Mercury, Gemini and Apollo U.S. astronaut missions “employed 400,000 Americans and required the support of over 20,000 industrial firms and universities” (NASA, 2008).  The apex Apollo mission comprised the three-stage Saturn V rocket with escape rocket and three spacecraft – Command Module Columbia, Service Module, and two-stage Lunar Module Eagle – with associated life support, propulsion, propellant, flight control, communication, experiment and support equipment, all operating successfully in a space environment lacking gravity, heat, and atmosphere.  There was no precedent and very little science upon which to base the new moonshot.  For example, creating the on-board compact and lightweight electronic instrumentation for complex and precise guidance, navigation and control instruction processing required handmade fabrication of integrated circuits.  All hardware, software and personnel were held to the highest standards of testing.  The enormity of the Apollo project and its associated investment was highly controversial; some opponents even labeled it a boondoggle.  In a May 1961 survey, just 33% of American respondents believed that the Apollo program of sending a man to the Moon was a good investment of the estimated cost (Gallup Organization, 1961).
There are many moonshot examples that have been slow to be embraced, though few have reached the level of resource commitment of the Apollo program.  Economically self-sustaining nuclear fusion reactors would yield more energy output than the input required to sustain them.  Quantum computation would use quantum-mechanical qubit superposition and entanglement to evaluate multiple combinations of 1 and 0 states concurrently, as compared to the serial process of conventional instruction computing, yielding significant problem-solving and time-saving advantages. Artificial intelligence would endow machines with the capability to make autonomous cognitive decisions, surpassing individual human problem-solving ability and leading to forms of action rationalization independent of humans.  The Allied Operation Overlord involved about 14 months of planning culminating with Operation Neptune on 6 June 1944, a massive undertaking which committed about 156,000 Allied troops, 6,939 Allied vessels, and 11,590 Allied aircraft, and resulted in about 4,413 dead and over 10,000 casualties (Rank, 2014).
With so many worthy moonshot programs competing for investment resources, a qualifying justification prerequisite is the return on investment in terms of the number of people benefited, impact on the environment, capital savings, and improvement in quality and extension of life. Consider curing cancer as one such worthy moonshot.  According to the National Cancer Institute (NCI, 2018), cancer is among the leading causes of death worldwide; in 2012, there were 14.1 million new cases and 8.2 million cancer-related deaths worldwide, the number of new cancer cases per year is expected to rise to 23.6 million by 2030, and there will be an estimated 606,880 cancer deaths in the United States in 2019.  Many of us have relatives or friends who are suffering with – or who have died from – cancer.  The cost in both human mortality and economic outlay is immeasurable.  Against this tide of tragedy, the total NCI appropriated funds spent on different cancer sites, cancer types, diseases related to cancer, and research totaled $5.74 billion in 2019, while the proposed federal budget request for fiscal year 2020 totals $4.746 trillion (White House, 2019).  Against phenomenal odds America rose to the challenge (Kennedy, 1962):
We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one we intend to win, and the others, too.

America’s next moonshot should focus on curing cancer.

References
Fletcher, J. (20 March 1973). 1974 NASA authorization, p. 4. Retrieved from https://babel.hathitrust.org/cgi/pt?id=mdp.39015084762734;view=1up;seq=8

Gallup Organization (17-22 May, 1961). Poll question: It has been estimated that it would cost the United States $40 billion -- or an average of about $225 per person -- to send a man to the moon. Would you like to see this amount spent for this purpose, or not? Retrieved from https://ropercenter.cornell.edu/sites/default/files/2018-07/55088.pdf

Kennedy, J. (12 September 1962). John F. Kennedy moon speech - Rice Stadium. Retrieved from https://er.jsc.nasa.gov/seh/ricetalk.htm

NASA, (22 April 2008). NASA Langley Research Center’s contributions to the Apollo program. Retrieved from https://www.nasa.gov/centers/langley/news/factsheets/Apollo.html

National Cancer Institute (27 April 2018). Cancer statistics. Retrieved from https://www.cancer.gov/about-cancer/understanding/statistics

Official Data Foundation (2019). U.S. dollar inflation calculator. Retrieved from http://www.in2013dollars.com

Rank, S. (2014). D-Day statistics: Normandy invasion by the numbers. Retrieved from https://www.historyonthenet.com/d-day-statistics

The White House (11 March 2019). A budget for a better America. Retrieved from https://www.whitehouse.gov/wp-content/uploads/2019/03/budget-fy2020.pdf

U.S. House of Representatives (1974). Hearings before the Committee on Science and Astronautics, Ninety-Third Congress, First Session on H.R. 4567 (superseded by H.R. 7528). Retrieved from https://www.congress.gov/bill/93rd-congress/house-bill/4567?q=%7B%22search%22%3A%5B%2293rd+Congress+1973+4567%22%5D%7D&s=8&r=2

Sunday, May 5, 2019

Multidimensional Opportunity Analysis


Introduction
Identifying, evaluating and communicating opportunities should be a best practice for every business investment analysis.  Fundamentally, business investment decisions are initiated for two reasons – to increase the potential for gain or to reduce the possibility of loss.  Investment results fall along a range between two possible outcomes, incurring either a benefit or a loss; the break-even point, no net loss or gain, may be considered a positive result for preservation of resources.  Investment decisions, even when proactively seeking an improvement to add value, typically focus on mitigating potential negative outcomes and, deliberately or not, apply a risk mitigation approach intended to reduce the possibility of loss.  The Project Management Institute (2017) defines project risk as, “an uncertain event or condition that, if it occurs, has a positive or negative effect on one or more project objectives” (p. 307).  The positive effect may be classified as opportunity which could support value maximization – the return outweighs the cost (Dixit & Pindyck, 1995).
The Department of Defense (2017) states, “Opportunities are potential future benefits to the program’s cost, schedule, and/or performance baseline, usually achieved through proactive steps that include allocation of resources” (p. 43).  Kendrick (2015) also stated, “The primary meaning of opportunity related to a project involves the value anticipated from the project deliverable”.  In addition to project opportunity, Frederick, Novemsky, Wang, Dhar and Nowlis (2009) defined opportunity costs as, “The unrealized flow of utility from the alternatives a choice displaces” (p. 553).  Examining forgone potential opportunity, Baratta (2007) stated, “The basic premise behind this metric is that as soon as we are aware that there is an opportunity to realize a defined business benefit by making some change in our business, then every moment that passes without that change being in effect, represents a lost benefit, a missed opportunity.  Therefore, we need to measure lost opportunity just as much as adherence to an estimate, which may actually be wrong.”
Though risk identification and associated risk management plans are often mandated in order to minimize loss, an opportunity model – to include opportunity costs and passed up potential opportunities – is typically not included in decision-making apart from possibly a lean component of SWOT analysis.  Formalizing the opportunity model would not only better inform investment decisions, but may also improve economic efficiency if selected opportunities result in resources more productively employed.  For example, the opportunity to reduce redundancies among business units might yield greater value than the potential return on alternative investment decisions taken by the individual business units.  An opportunity model should illuminate such strategic possibilities for decision maker consideration.  Strategic business decisions to further organizational objectives must go beyond narrowly considering the downside based on the traditional risk analysis paradigm.  Risk is characteristically evaluated as the relationship of the scales for two variables – probability and impact, represented as conventions (x) and (y) respectively.  Applying a similar approach for opportunity analysis, with the inclusion of a third variable scale (z) for investment resources, would enable more detailed decomposition into multidimensional components to identify potential options or reveal hidden value.
Traditional Approach
Investment analyses focusing on risk tolerance often base decisions on simplistic single-point green (low), yellow (medium) and red (high) visual indicators, typically represented in the form of a five-cell-by-five-cell color-coded table derived from probability and impact scales.  Each of the 25 cells in the risk matrix heat map provides the measure of risk derived from associated qualitative translations of likelihood from “not likely” to “near certainty” and consequence ranging from “minimal impact” to “critical impact.”  As a business investment decision support tool, the color-coded representation is rudimentary for presenting opportunity and ineffective for articulating quantified risk.  Industry employs more complex models, often accompanied by statistical analysis; for example, weather forecasting calculates the likelihood for a significant number of variables to estimate impacts such as precipitation, temperature, and cloud cover (National Weather Service, n.d.).  Variables having linear relationships also allow for linear optimization programming models to minimize total risk while meeting goals (Lawrence & Pasternack, 2002).
Depending upon the type of investment – i.e., real estate, financial instruments, capital equipment, etc. – compound opportunity or risk statements should be normalized into single actionable events or conditions, and complex scenarios may be further decomposed into interrelated components.  Deconstructing opportunity and risk into constituent elements down to root cause(s) can be time consuming and may not be practical for every situation.  Subject matter experts (SME) are a good source for identifying and assigning accurate values to the opportunity decision variables, and for recommending key variables on which to focus.  The fundamental risk variables of probability and impact are important starting points for a two-dimensional approach, yet remain insufficient without correlation to investment constraints for informed allocation of resources.
Improving cost, schedule and/or performance is often equated to offsetting or decreasing negative impacts.  Risk mitigation is a reactive plan against potential loss or underperformance, while an opportunity strategy is proactive for possible gain or overperformance; for the purpose of this review, the opportunity model axes boundaries separate mutually exclusive positive and negative return space.  Fundamentally, risk is based upon the likelihood of an event occurring and the resulting impact of something bad.  Worst-case risk outcomes occur in negative space, though scenarios can be evaluated in which the results remain in positive space yet attain less than optimal goal outcomes.  In such situations, risk and opportunity overlap in the positive space.  For example, a risk might be stated as, “The probability that 1st fiscal quarter earnings will not reach the 5 percent estimate.”  Earnings of less than zero would be negative (loss), while greater than zero but less than 5 percent – though below the target – would still be positive (gain).  Net positive values could be examined for opportunity.
Risk and opportunity are inverse – risk worsens and opportunity improves as probability and impact increase.  Euclidean three-dimensional cube space (x, y, z axes) affords an infinite number (n) of two-dimensional plane values; incorporating time (t), which will not be presented here, could also provide a measure of change order.  While brainstorming often hypothesizes possibilities based upon unconstrained resources, the practical application of available investment resources for risk mitigation and opportunity exploitation, while remaining defined as the area within the three-dimensional space, limits the collection of useful points to a smaller range of numerical values narrowed by the resource scale.  Uncertainty (x), which exists for both risk and opportunity, is the probability expressed as a positive value between 0.0 (cannot occur) and 1.0 (certain occurrence); as risk is a tentative future event, 0 and 1 denote definitive results and are not included.  Impact (y) may have a negative (risk) or positive (opportunity) effect, calculated as an interval numerical value (which may be derived from an ordinal scale such as low, medium and high).  Available resources (z) further bound opportunity within the positive impact space.
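These bounds can be stated compactly.  A minimal formalization of the opportunity region just described, where z_max is a symbol introduced here for illustration (not part of the original definitions) denoting the total investment resources available:

\[ O = \{\, (x, y, z) : 0 < x < 1,\; y > 0,\; 0 < z \le z_{\max} \,\} \]

Risk occupies the mirrored region in which the impact y is negative and the resource bound on z does not apply.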
Opportunity Defined
Opportunity (O) represents the value placed upon a potential benefit, such as typical project control metrics that deliver net savings derived from improved cost or schedule results.  More difficult to specify are intangible benefits derived from faster decision-to-answer time or improved organizational performance in essential planning, organizing, directing and controlling operations.  However, reducing investment uncertainty for intangible benefits is possible as indicated by Hubbard (2014) and applied to an investment risk simulation example (Frum, 2017).  SME knowledge, supplemented with historical and industry statistics, may be a reliable source to both articulate the benefit variables such as cost/schedule/performance and also to accurately define the probability and impact variable numerical values.
Lowercase theta (θ) from the Greek alphabet can represent the opportunity model, supported by the mathematical proof for the existence of an opportunity when defining the opportunity statement.  Calculating the quantitative value for an opportunity θ – the opportunity score – permits plotting the x, y and z coordinates.  Though missed opportunity may also be quantitatively determined, only the value of potential opportunity will be studied here, as a missed opportunity incurs no actual loss to the balance sheet.  Achieving the potential opportunity may require either explicit (actual) cost, or implicit (tradeoff) equivalent cost had the resources been applied to alternate benefit options if available; implicit value would equate to: opportunity cost = (selected option cost) – (not selected next best alternate option cost).
Equivalent opportunity cost examination, in which diverse opportunities are compared based upon some common value, at least at an exploratory order of magnitude, is necessary to determine more definitively that the selected opportunity is valued above any alternate opportunities.  For example, an investor examining certificates of deposit at various financial institutions may consider not only the interest rates but also 24/7 account access, customer support, length of deposit requirements, physical versus on-line only commerce, etc. – all factors evaluated against a common and consistent quantitative scoring method – in choosing one institution over another.  The opportunity model θ assumes that for each type of opportunity, whether comprised of one or more constituent criteria, there is only a single, mutually exclusive best choice that maximizes return value; the selection cannot be divided, as in a linear programming feasible solution for constrained optimization or an investor placing a combination of funds in two or more CDs.
Opportunity Model
After a potential opportunity has been identified, the opportunity must be correctly stated – concisely and accurately defined – to proceed to numerical evaluation.  Simply stating that the organization wants to increase production while becoming more efficient is a general goal too ambiguous for measurement.  The opportunity must provide a specific, well defined and assessable value.  Adding specificity such as increasing production by 10 percent while becoming 5 percent more efficient within 12 months is a quantified outcome objective.  If the organization pursues an opportunity, then accurate, relevant, practical and computable performance measures must be applied to quantified resources for input, process, output and outcome to evaluate expected results.  The organization’s strategy is an appropriate starting point in which mission priorities, objectives and initiatives – all predetermined to be important to the organization – can inform opportunity construction, leading to selecting the most promising from possible alternatives.
Should investment in some combination of each opportunity be possible, then a linear programming maximizing/minimizing function might be applied to derive the optimal mix of resources.  As stated by Martinich (1997), “Constrained optimization models are mathematical models that find the best solution with respect to some evaluation criterion from a set of alternative solutions” (p. B2).  Comparing opportunities becomes more challenging when the units of measure (M) are not equivalent.  For example, an investment that leads to improving senior leader decision time may need to be compared against employee training that improves processing time; for the former, direct access to data may reduce decision time from days to hours by visualization of metrics, while the latter might improve customer support by faster turn-around time for product purchase requests.  The organization would benefit from each alternative, but resource constraints allow for just one investment selection.
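Where a divisible mix is allowed, the constrained-optimization idea can be sketched directly.  A minimal example using scipy’s linear programming solver; the per-unit opportunity scores and the budget and capacity limits below are hypothetical figures, not drawn from this article:

# Maximize total opportunity score subject to resource constraints.
# linprog minimizes, so the per-unit scores (likelihood x impact) are negated.
from scipy.optimize import linprog

score = [-2.25, -2.40]   # per-dollar scores for Opportunity 1 and 2
A_ub = [[1.0, 1.0],      # shared budget: dollars in O1 + O2 <= 10,000
        [1.0, 0.0]]      # capacity: O1 can absorb at most 6,000
b_ub = [10_000, 6_000]

res = linprog(score, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)             # optimal allocation, here [0, 10000]
print(-res.fun)          # maximized total opportunity value, 24000.0

Note that this relaxes the θ model’s all-or-nothing assumption stated earlier; it applies only when the selection can in fact be divided.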
Decision Variables.  Opportunity options – primary and alternate, if applicable – may be dissimilar, making comparison and choice decisions more challenging.  To calculate opportunity cost among dissimilar options, the quantity of resources required for each possible selection becomes the basis for comparing alternative choices.  The options must first be reduced to an equivalent per-unit basis, such as the miles per gallon gasoline equivalent (MPGe) used by the Environmental Protection Agency to compare electric to gas vehicle fuel economy as average distance traveled per unit of energy consumed.  For example, when selecting the best value (most cost-effective purchase) for protein among beef, fish or chicken, the protein per 100 grams of each does not provide all the necessary information for a decision.  Setting aside all other factors, consider on average that per 100 grams, beef contains approximately 36 grams of protein, fish provides 26 grams, and chicken has 18 grams – the logical common component among the variables is protein.  Determining the equivalency based upon cost per gram of protein then enables improved opportunity selection among multiple candidates, indicating beef would be the best value, as follows:
Opportunity equivalency (Oe), where ⇔ is defined as logically equivalent.
Protein equivalency: Opportunity An ⇔ Opportunity Bn ⇔ Opportunity Cn
• per 100 gm: beef 36g ⇔ fish 26g ⇔ chicken 18g
Oe, setting the national average prices per 454 grams (1 pound):
• beef = $2.49 = $0.55 per 100 gm, with 36 gm protein = $0.015 per gm protein
• fish = $3.99 = $0.88 per 100 gm, with 26 gm protein = $0.034 per gm protein
• chicken = $3.18 = $0.70 per 100 gm, with 18 gm protein = $0.039 per gm protein

Opportunity cost (Oc) = (selected option cost) – (not selected next best alternate option cost).
• Oc = | (beef at $0.015 per gm protein) – (fish at $0.034 per gm protein) |
• Oc = $0.019 per gm protein; selected O = beef
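The same equivalency arithmetic, written out as a short sketch; the prices and protein contents are the illustrative figures above:

# Opportunity equivalency (Oe): reduce dissimilar options to a common
# per-unit basis (dollars per gram of protein), then take the difference
# between the selected option and the next-best alternative as Oc.
options = {                       # price per 454 g (1 lb), protein g per 100 g
    "beef":    (2.49, 36),
    "fish":    (3.99, 26),
    "chicken": (3.18, 18),
}
cost_per_g = {name: (price / 4.54) / protein   # $ per gram of protein
              for name, (price, protein) in options.items()}

best, runner_up = sorted(cost_per_g, key=cost_per_g.get)[:2]
oc = abs(cost_per_g[best] - cost_per_g[runner_up])
print(best, round(oc, 3))          # beef 0.019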

Objective Function.  
• Opportunity cost Oc = |O – !O|, where O is the selected opportunity, !O is the not-selected next best alternative (O ≠ !O), and a quantity n of unit of measurement M is expressed as n × [M] = n[M]
• O ⇔ !O: opportunity alternatives that may or may not be homogeneous but allow for like or equivalent comparison and analysis based upon a common equivalent unit of measurement
• unit of measurement M allows for the multiple n of cost M for the same base units

Constraints.  Examples of limits or restrictions include: How much can be afforded to invest?  How much can be afforded to lose to attain the desired effect?  Is the opportunity time-bound?  How many units of each type can be processed per person per unit of time?  Are there legal specifications?  Opportunity constraints for a business with billion-dollar earnings are likely not the same as for a small owner-operator business.  Constraints must be well documented to determine if opportunity equivalency comparisons are appropriate.  Opportunity identification should also establish threshold cost-benefit performance measures such as those taken from strategic objective targets, existing performance and practice measures, resource or capacity constraints, legal or policy standards, analysis of similar organizations, industry statistics or best practice benchmarks.  The threshold should define the point beyond which the organization can do better and, if so, how much better per unit invested.  The costs avoided or dollars gained by a program must be defined.
Monte Carlo simulation is an excellent quantitative method for determining the likelihood of a potential opportunity over a range of values.  The subject-matter expert (SME) plays an essential role in determining opportunity, uncertainty, impact and constraints within their areas.  Figure 1 illustrates a hypothetical business opportunity simulation, indicating that for 10,000 simulations there is a 90 percent likelihood that the annual ROI will exceed about $46 million and a 10 percent probability that the annual ROI will exceed about $50 million, with a median (50 percent likelihood) expected annual ROI of about $48 million.
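A sketch of how such a simulation can be produced; the distribution below is a stand-in calibrated to the Figure 1 percentiles (an 80 percent interval of roughly $46M to $50M supplied by SMEs), not the actual underlying business model:

# Monte Carlo simulation of annual ROI from a calibrated SME estimate.
import numpy as np

rng = np.random.default_rng(1)
low, high = 46e6, 50e6              # SME 10th and 90th percentile estimates
mean = (low + high) / 2
sd = (high - low) / 2.5631          # an 80% interval spans 2 x 1.2816 sd

roi = rng.normal(mean, sd, 10_000)  # 10,000 simulated annual ROI outcomes
for p in (90, 50, 10):
    # exceedance: the ROI value that p percent of simulations exceed
    print(f"{p}% likelihood ROI exceeds ${np.percentile(roi, 100 - p) / 1e6:.1f}M")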
Opportunity Triplet
Representing the opportunity theta (θ) values as ordered triplets (x, y, z) consisting of a scenario that characterizes the business objective and enumerates the variable quantities will permit mathematically solving the ordered triplet in ways meaningful to real-world applications.  For example, the organization might determine the risk associated with a phishing and social engineering attack as the average impact cost of $1.6 million (M) and the likelihood of a targeted user clicking on the malicious attachment or link at 0.02736, with the risk value = ($1.6M × 0.02736) = $43,776.  Examining potential opportunities to reduce or offset the attack risk may yield increased cost savings (spend less than $43,776 for training) and training benefits (more flexible training scheduling) for performing security awareness and training in-house rather than outsourcing; for example, where,
x = possibility expressed as a probability, based upon the best empirical data.
y = impact requires an accurate, meaningful and quantitative measurement in order to answer the business impact question: How significant would the benefit or value of the opportunity be to the organization?  A basic impact scale similar to the risk likelihood criteria described by DoD (2017) in Figure 2 would assess opportunity based upon Likert scales ranging from ‘minimal impact’ to ‘critical impact’.  Likert scales are ordinal, meaning the importance can be ranked but not accurately interpreted mathematically and would require corresponding numerical (interval) levels ranging from 1 (minimal) to 5 (critical).  The organization should develop standard impact rating criteria for evaluation consistency.  The SMEs as domain knowledge authorities provide insight in assessing impact as they are typically more knowledgeable than others regarding consequences within their area.
z = investment resources in which the z axis represents the positive range of potential opportunity values based on assets available such as funds, personnel and time to pursue objectives.  As previously expressed, a common unit of measurement is required to determine opportunity value (opportunity cost = selected option cost – not selected next best alternative option cost).
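A worked version of the triplet computation, using the values that appear in Table 1 below; θ is taken, as in the table, to be the simple product of the three coordinates:

# Opportunity score theta = x * y * z, and the opportunity cost between
# a selected option and the next-best alternative.
def theta(x: float, y: float, z: float) -> float:
    """x: probability (0..1); y: impact level; z: investment resources ($)."""
    return x * y * z

o1 = theta(0.75, 3, 10_000)   # Opportunity 1 -> 22,500
o2 = theta(0.60, 4, 5_000)    # Opportunity 2 -> 12,000
print(o1, o2, o1 - o2)        # 22500.0 12000.0 10500.0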




The (x, y, z) ordered triplet of numbers in this model represents point P on the axes within the positive coordinate space (+, +, +) corresponding to the top-right-front first octant.  In Cartesian geometry with three mutually perpendicular axes, the region of positive coordinates with equal probability would take the shape of a cube as depicted in Figure 3.  After establishing the ordered triplet variable values, determining the opportunity value is a relatively straightforward point computation, as shown in Table 1: Opportunity 1 would require a higher investment but could yield a higher result, with Oc = ($22,500 – $12,000) = $10,500.  Varying the value of z represents a range of opportunity values along the z coordinate axis.  As with the example illustrated in Figure 1, more detailed analysis and simulation could be performed to arrive at population mean, standard deviation, confidence intervals and similar evaluations.
  



Table 1
Comparing Two Different Opportunities with Cost as Unit of Measure

Variable           Opportunity 1    Opportunity 2
x (likelihood)     0.75             0.60
y (impact)         3                4
z (investment)     $10,000          $5,000
θ = (x)(y)(z)      $22,500          $12,000

Note. Opportunity Cost Oc = (O1 – O2) = ($22,500 – $12,000) = $10,500
  
Opportunity Plane
Another consideration of positive opportunity space could be the property of an opportunity aggregate group (g), in which each z coordinate, should more than one exist, would be elaborated to include differentiating detail(s) for further examination and exploitation.  Triplets in the group share the same z value while either or both of the x and y values vary to produce the differentiating characteristics of each individual opportunity.  An initial triplet such as (x8, y8, z8) and an additional triplet such as (x1, y3, z8) extending from it determine a straight line, and all differentiating-characteristic triplets in which z remains constant (i.e., z = 8) will lie in the same unique (x, y) plane surface, as spatially illustrated in Figure 4.  The range of an individual opportunity option would be the difference between the worth assigned to the highest and lowest triplet values for which z = constant, as listed in Table 2.  For example, a gambler with a single $100 chip to wager must decide whether a win with one roll of the dice (O1), one card draw (O2) or one slot machine spin (O3) would yield the best desired outcome.
  




Table 2
Comparing Three Opportunities in One Group with Same z Value

Variable           Opportunity 1    Opportunity 2     Opportunity 3
x (likelihood)     0.75             0.60              0.40
y (impact)         3                4                 5
z (investment)     $100             $100              $100
θ = (x)(y)(z)      $225             $240 (highest)    $200 (lowest)
Note. Opportunity Range = (O2 – O3) = ($240 - $200) = $40
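The gambler’s three options in Table 2, computed the same way; with z constant across the group, the opportunity range is the spread between the highest- and lowest-scored triplets:

# Opportunity group with constant z = $100: score each triplet, take the range.
group = [(0.75, 3, 100), (0.60, 4, 100), (0.40, 5, 100)]  # dice, card, slot

scores = [x * y * z for x, y, z in group]
print(scores)                     # [225.0, 240.0, 200.0]
print(max(scores) - min(scores))  # opportunity range = 40.0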
  
Conclusions and Recommendations
Commercial enterprises in competitive markets continually seek opportunities to grow their businesses; both opportunity recognition and opportunity exploitation are positively associated with innovation (Kuckertz, Kollmann, Krell, & Stöckmann, 2017).  Opportunity is incorporated into the capital budgeting decision based upon economic analysis of investment projects such as cost-benefit analysis (Bierman & Smidt, 2012, p. 8).  The absence of market forces generally removes the motivation to pursue opportunity and innovation, which are often accompanied by change and disruption.  Opportunity as described by DoD (2017a) covers 5 pages, including a single opportunity management vignette, whereas risk management covers 23 pages.  DoD acquisition program guidance mentions risk 212 times and opportunity 4 times (DoD, 2017b).  The difference in space allotted to the two topics reveals the prominence placed on mitigating the downside and the lack of emphasis on exploring and quantifiably articulating the upside, effectively incentivizing risk aversion apart from offices specifically focused on innovation.
The basic high-level risk model typically depicts a 5x5 cell matrix in which the combination of likelihood and impact determine a color coded risk level for low (green), moderate (yellow), and high (red).  As a business investment decision support tool, the color-coded representation is ineffective for articulating quantified risk probability distributions for a range of possible outcomes for any meaningful choice of action.  Beyond signaling candidate areas for in-depth analysis, the same approach is also an insufficient opportunity framework, particularly to facilitate goal achievement.  Opportunity estimation must become more than a cursory mirror approach to risk management, communicated as abstract qualitative concepts represented in terms of three colors.
Multidimensional opportunity analysis offers an improved model to quantitatively explore initiatives that may yield cost savings, schedule reductions, and/or performance improvements.  Measurable opportunity evaluation should become an integral part of Defense acquisition program management and systems engineering, elaborated in a range of policy, regulatory, and statutory directives.
To begin improving cost, schedule, and performance opportunity benefit analysis, I recommend the following:
(1) Incorporate opportunity equivalency and opportunity cost in Department of Defense guides that address opportunity, particularly in the Acquisition field;
(2) Require the opportunity theta model based upon ordered triplets – in which x represents possibility expressed as a probability, y represents impact requiring an accurate, meaningful and quantitative measurement, and z represents investment resources – for all potential investment decisions exceeding $1 million.

Published in Defense Acquisition: July-August 2019 (http://www.dau.mil/library/defense-atl/DATLFiles/Jul_Aug2019/Frum.pdf)
References
Baratta, A. (2007). The value triple constraint: measuring the effectiveness of the project management paradigm. Paper presented at PMI® Global Congress 2007—North America, Atlanta, GA. Newtown Square, PA: Project Management Institute.
Bierman Jr, H., & Smidt, S. (2012). The capital budgeting decision: economic analysis of investment projects. Routledge.
Department of Defense (DoD). (2017a). Risk, issue, and opportunity management guide for defense acquisition programs. Retrieved from https://www.acq.osd.mil/se/docs/2017-rio.pdf
DoD. (2017b). Operation of the Defense acquisition system. Retrieved from http://acqnotes.com/wp-content/uploads/2014/09/DoD-Instruction-5000.02-The-Defense-Acquisition-System-10-Aug-17-Change-3.pdf
Dixit, A. K., & Pindyck, R. S. (1995). The options approach to capital investment. Real Options and Investment under Uncertainty-classical Readings and Recent Contributions. MIT Press, Cambridge, 6.
Frum, R. (2017, November-December). How to improve communication of information technology investments risks.  Defense AT&L, 44-48.
Frederick, S., Novemsky, N., Wang, J., Dhar, R., & Nowlis, S. (2009). Opportunity cost neglect. Journal of Consumer Research, 36(4), 553-561.
Hubbard, D. W. (2014). How to measure anything: Finding the value of intangibles in business. John Wiley & Sons.
Kendrick, T. (2015). Project opportunity: risk sink or risk source? Paper presented at PMI® Global Congress 2015—North America, Orlando, FL. Newtown Square, PA: Project Management Institute.
Kuckertz, A., Kollmann, T., Krell, P., & Stöckmann, C. (2017). Understanding, differentiating, and measuring opportunity recognition and opportunity exploitation. International Journal of Entrepreneurial Behavior & Research, 23(1), 78-97.
Lawrence, J. A., & Pasternack, B. A. (2002). Applied management science: modeling, spreadsheet analysis, and communication for decision making. New York: Wiley.
Martinich, J. S. (1997). Production and operations management: An applied modern approach. New York, NY: John Wiley & Sons.
National Weather Service. (n.d.). Welcome to the statistical modeling branch. Retrieved from https://www.weather.gov/mdl/StatisticalModeling_home
Project Management Institute. (2017). A guide to the project management body of knowledge (PMBOK® Guide) - sixth edition. Retrieved from https://www.pmi.org/pmbok-guide-standards/foundational/pmbok#


Wednesday, April 24, 2019

The Value of Data Visualization


     Does information visualization provide sufficient return on investment (ROI)? Let us examine the investment value—the estimated financial worth—of providing end-user data visualization. For an organization desiring to depict complex or large data sets in various pictorial or graphical formats for 10,000 users, a commercial visualization product subscription model in which each seat license may cost $5 per month equates to $600,000 per year, not including the labor fee to prepare, implement and maintain the tool.
     Consuming the information on a proliferating number of endpoint mobile devices could further increase costs to more than $1 million, a nontrivial amount in any organization’s budget. The potential outlays escalate when spanned across the federal enterprise.
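The subscription arithmetic behind those figures, as a quick check; the user count and per-seat price are the illustrative values above:

# Annual cost of seat-licensed visualization software, before labor.
users = 10_000
seat_per_month = 5               # $ per seat per month
annual = users * seat_per_month * 12
print(f"${annual:,} per year")   # $600,000 per year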
     Data visualization value, whether expressed directly from straightforward monetized return or subjectively derived from intangible benefits, needs to be assessed quantitatively to determine the economic return to permit a comparison with expected losses and gains from other organizational investments. Without an operationally relevant ROI performance metric, any project expense could be justified to counterweigh the risk of loss.
     Extracting value from data typically focuses on the larger and more expensive issues of management and use of big data, where it is assumed that information visualization is a derived byproduct. Yet when tallied as a separate line of investment, the intended scope of graphically depicted data may not provide enough justification for the production cost and potential difficulties. Consequently, as with any significant investment, the chief information officer and the chief financial officer should conduct a timely review of the data-visualization business case for a quantifiable performance measure of success or failure. For example, a good ROI likely would not involve spending more than $1 million on data visualization to save $200,000.
     Investments in data visualization must compete with other organizational priorities. Determining the ROI is a challenging exercise because it requires that the organization quantifiably measure not only the quality of the tool’s functional characteristics (whether it is accessible, accurate and well designed) but also utilization of the produced information— what can be achieved with better, data-driven management decisions? Too often investment decisions are made, and ROI is not measured, because it is considered unrealistic to expect a quantified measurement of less tangible benefits. The abstract goal of loosely defined long-term benefits then underpins the business case: greater business and customer insight, faster decision-to-answer time, or faster response to customers and markets. However, reducing uncertainty for intangible investments is possible, as indicated by Douglas Hubbard’s Rule of Five (How to Measure Anything: Finding the Value of Intangibles in Business, published by John Wiley & Sons, Inc., 2010 and 2014). This was applied in the investment risk simulation example in my article, “How to Improve Communication of Information Technology Investments Risks,” in the November–December 2017 issue of Defense AT&L magazine. Subject-matter expert (SME) knowledge, supplemented with historical and industry statistics, may be a reliable source for accurate numerical value metrics.
     Most organizations produce or consume data for leadership to monitor performance and answer such basic questions as: “Are we accomplishing our objectives . . . Are we using our resources in the most effective manner ... Are we learning ways to improve our performance?” Some outcomes are relatively straightforward, such as “certifying compliance within a numeric benchmark for system defects that either did or did not decline over time.” For example, the Internal Revenue Service investment in the Return Review Program (RRP) fraud detection system—replacing the Electronic Fraud Detection System that dated from 1994—either does or does not help prevent, detect and resolve criminal and civil noncompliance. A successful system should result in greater success with more revenue returned to the U.S. Treasury to offset the RRP cost.
     But it is more difficult to pinpoint how the data results would reduce risk or improve organizational performance in essential planning, organizing, directing and controlling operations—i.e., identifying the specific business decision problem, the root issue, and how the data visualization investment would help. The answer would then define the metric created to evaluate visualization product cost against expected business results: ROI = Investment Gain/ Cost of Investment.
     To select the best tool for the job, management must first precisely determine how visualization would support users’ efforts to distinguish between evidence-based reality and unsubstantiated intuitive understanding. The tool must present raw abstract data in a manner that is meaningful to users for improving understanding, discovery, patterns, measurement, analysis, confirmation, effectiveness, speed, efficiency, productivity and decision making, and for reducing redundancy. Classic approaches for extracting information from data include descriptive, predictive and prescriptive analytics. The most common is descriptive analysis, used as a lag metric to review what has already occurred. Predictive analysis also uses existing data as the basis for a forecast model. Prescriptive analysis builds on predictive analytics, going a step further by offering greater calculated insight into possible outcomes for selected courses of action, leading to better decision making. Data visualizations of these approaches range from classic bar and pie charts to complex illustrations.
     The approach selected must align with the organization’s senior leader expectations or else the experiment will be short lived. The organization may already possess visualization tools that can be leveraged at little or no additional cost. If the organization is just getting started, a proof of concept pilot approach may be best, initiating a seminal demonstration that can be progressively refined until an effective management tool emerges. The beginning point could be basic metrics to more accurately measure and assess success associated with the organizational goals, objectives and performance plan. Basic example performance measurement of services, products and processes includes:
• Cost Benefit = (program cost avoided or cost incurred) / (total program cost)
• Productivity = (number of units processed by an employee) / (total employee hours)
• Training Effectiveness = (number of phishing emails clicked) / (total phishing attempts)
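With hypothetical inputs, these measures reduce to simple ratios, as in this sketch:

# Example performance measures with hypothetical inputs.
cost_benefit = 250_000 / 1_000_000    # cost avoided / total program cost
productivity = 1_200 / 160            # units processed / total employee hours
training_effectiveness = 12 / 400     # phishing emails clicked / attempts
print(cost_benefit, productivity, training_effectiveness)   # 0.25 7.5 0.03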
     Performance metrics enable quantitative analysis of whether the tool investment produces sufficient monetary value, fundamentally a risk decision about business outlays. One common method for quantifying risk is: Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) x Annualized Rate of Occurrence (ARO). For example, if the average cost of a phishing and social engineering attack is $1.6 million (M) for a midsize company and the likelihood of a targeted user clicking on the malicious attachment or link is 0.02736, then the risk value = ($1.6M x 0.02736) = $43,776. After weighing the organization’s cyber defenses and history of cyber-attacks, the business-investment decision makers could better determine if investing in employee anti-phishing training and training data visualization is a reasonable risk-reduction expenditure.
     After the visualization tool has been purchased and deployed, the value of the insights revealed by the analytics must be substantiated through organizational actions—i.e., cause-and-effect linkage leading to input/process/output adjustments. As a means of generating business intelligence, the organization is then able to weigh the tool’s value, which should be equal to or greater than the production cost. Generally, a more complex visualization results in higher tool cost. The journey from feasibility determination to requirements refinement and then to operational maturity should be undertaken with the understanding that the initial investment may not be supported by the magnitude of the early results, but total improvement over time should be greater than total outlay.
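The ALE arithmetic above, written out with the cited phishing averages:

# Annualized Loss Expectancy: ALE = SLE x ARO.
sle = 1_600_000               # single loss expectancy: average attack cost ($)
aro = 0.02736                 # annualized rate of occurrence: click likelihood
print(f"${sle * aro:,.0f}")   # $43,776, a ceiling for a risk-reduction spend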
     In conclusion, managing and mining vast amounts of complex data typically results in the need to view information in ways that are measurably meaningful and actionable to the organization. Added benefits include selective sharing, on-demand viewing and more informed decisions. Information visualization tools range from low cost Microsoft Excel charts to more powerful applications capable of producing relationship and pattern analysis, forecasts, scorecards and performance dashboards from large unstructured data. Organization leaders can then shift from reacting to lag measures towards proactive actions based upon predictive data presentation.
     Data visualization has a potentially significant cost that must be balanced against the payback benefits rather than simply bundled into a data management package. Selecting the best tool for the organization should include basic cost-benefit analysis based upon a performance metric of the value of the decisions made from the information provided.


Better Communications on IT Spending Risks


     Why are million-dollar information technology (IT) investment decisions based on single-point green, yellow, and red visual indicators, which are poorly defined and ineffective abstractions of the fundamental components of risk—probability and impact? Decisions are founded on a weak understanding of the risk without considering a range of possible outcomes for any choice of action.
      IT professionals can significantly improve how they assess and communicate program risk to business investment decision makers, who must allocate funds among competing priorities. We can reform our communication of risk to business leaders so we provide a range of estimated outcome values, within a confidence interval that reflects the inherent uncertainties of large, complex decisions.
      Monte Carlo simulation prepared with standard Microsoft Excel is a low-cost, yet effective, method for quantifiably modeling risk. Displaying the simulation results graphically as a familiar management histogram chart overlaid with a risk expectancy line enables uncertainty to be precisely articulated within a confidence interval for better-informed decision making. Risk variable values can also be changed on the fly to support dynamic what-if analysis. The model presented by the author was developed from material taught by Derek E. Brink, a Certified Information Systems Security Professional, in Harvard University’s Division of Continuing Education course “How to Assess and Communicate Risk in Information Security.”
      The stakes are high. The federal IT dashboard indicates that government-wide IT spending for fiscal year (FY) 2017 totals about $81.6 billion. The site also specifies that for all major IT investments government-wide, 3.4 percent of the projects are considered to be high risk, and 23.2 percent are considered medium risk. The U.S. Government Accountability Office has issued several reports between 2011 and 2015 documenting failed major IT projects, including eight projects valued at more than $8.5 billion. Improved risk analysis and communication would return substantial value. For example, if the cost of failed programs was reduced by merely 1 percent, this would amount to more than $85 million saved on these eight projects alone.
      The greatest facilitator of informed business decisions is communicating data uncertainty as a frequency and impact distribution, overlaid with an exceedance probability (EP) curve at the desired confidence level. The concept may seem complex, but the technique has been widely applied in financial, insurance, actuarial, and catastrophe planning to estimate the probability that a certain level of loss will be exceeded over a given time.
I offer three assumptions regarding risk that show why I believe we must improve our assessment and communication of risk. These include:
• Risk is fundamentally determined by the likelihood of an undesirable event, and the impact of such an event.
• Risk in federal IT programs is mostly presented in qualitative terms of colors—red (high), yellow (medium) or green (low).
• Risk assessment and management are important activities for successful project management.

A More Detailed Look
     Risk determination depends upon the type of threat, weakness or vulnerability. However, framing risk based only on potential dangers does very little to enable value-based investment judgments. In fact, using technical jargon to present risk supports poor value judgments because there is no assessment of the odds that something bad actually will happen. As a result, decision makers often are left with only a binary choice of whether to commit resources. For example, the IT professional might describe a cyber-security risk as an unauthorized access breach that could expose employee records to compromise if stronger access management controls are not put into place. In the best-case scenario, the business leader is somewhat better informed and at worst has misleading value information on which to base decisions. Properly framing risk in terms of the probability and associated consequence magnitude allows evaluation of the level of uncertainty. Communicating the same cyber risk as a 10 percent probability that unauthorized access could result in an annual business cost of $2 million enables the organization leaders to determine how much risk they are willing to mitigate at the corresponding cost.
      Again, most risk in federal programs is presented as red, yellow or green. The color scheme is a risk representation convention described by the Department of Defense’s Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs. The approach to relative risk levels attempts to assess risk based upon Likert scales ranging from “not likely” to “near certainty” and “minimal impact” to “critical impact.” Likert scales are ordinal, meaning the data can be ranked but not accurately interpreted mathematically. In short, risk heat maps should be limited to the most basic risk prioritization. As a business investment decision support tool, the color-coded representation is ineffective for articulating quantified risk probability distributions for a range of possible outcomes for any meaningful choice of action.
Risk management seeks to define uncertainty as the probability of an event—and the business effect, positive or negative, of such an event. In terms of program and project management, risk is most often expressed for individual cost, schedule and performance variables in relationship to delivering the end product. Different disciplines such as research, engineering development, and logistics may each have their own perspective on project risk. But managing activity risk must not be confused with investment decisions that aggregate the effect of all variables to permit best-value business case investment analysis.
      The subject-matter expert (SME) plays an essential role in determining risk. SMEs typically are more knowledgeable than others regarding uncertainty measures within their areas. Using the unauthorized access breach example, the cybersecurity SME might estimate the likelihood that the organization could experience between one and three unauthorized access breaches within the next 12 months, in line with the 2016 Ponemon Institute data breach study reporting about a 26 percent likelihood of a company having one or more data breaches involving at least 10,000 records in the following 24 months. The SME knowledge, supplemented with historical and industry data, provides a reasonable measurement of the factors of risk, while incorporating the inherent uncertainty. Typical—though insufficient—risk representation would then simply apply an annualized loss expectancy (ALE) calculation such as annual loss = (likelihood of at least one breach) x (estimated number of breaches per year) x (estimated cost per breach). Given a breach cost estimated at $100,000, an ALE statement would quantify the annual potential risk as an average of $200,000. Based on this rudimentary cost analysis, risk then would be conventionally presented as red, yellow or green ordinal choices for the business leader to determine if the potential loss would be worth the financial investment needed to mitigate the risk.
Monte Carlo simulation is an excellent quantitative method for determining the likelihood of a potential loss within any of several designated intervals, over a range of values. Standard Microsoft Excel is more than adequate for creating simulation models and displaying possible scenario impact outcomes graphically as familiar charts. In the simulation model, the SMEs provide their estimates for the risk factors; specifically, providing the values for the upper and lower bounds, with a 90 percent certainty.
      For example, consider a hypothetical software development project for which the business leader wants to assess the risk of the project’s $40 million budget and submits the Business Impact Question: What is the risk that a longer development time will increase the overall project cost? Figure 1 illustrates the project simulation risk model, with four key risk variables that fundamentally determine the overall project duration. The model simulates the number of days to complete each factor. Factors 1, 2 and 3 are accomplished in parallel and must be completed before Factor 4 can begin; Factor 4 is then added to the highest of the three values. Daily cost is then applied to the resulting number of days.
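A sketch of this four-factor model outside of Excel; the factor bounds (in days) and the daily cost are hypothetical stand-ins, with each SME 90 percent interval converted to a normal distribution:

# Monte Carlo simulation of project cost driven by four duration factors.
import numpy as np

rng = np.random.default_rng(7)
N = 10_000
bounds = {"f1": (90, 150), "f2": (100, 140), "f3": (80, 160), "f4": (60, 100)}

def days(lo, hi):
    # convert a 90% confidence interval to a normal (interval spans 3.29 sd)
    return rng.normal((lo + hi) / 2, (hi - lo) / 3.29, N)

f1, f2, f3, f4 = (days(*bounds[k]) for k in ("f1", "f2", "f3", "f4"))
duration = np.maximum(np.maximum(f1, f2), f3) + f4   # 1-3 parallel, then 4
cost = duration * 220_000                            # hypothetical daily cost

for p in (90, 50, 10):
    print(f"{p}% likelihood cost exceeds ${np.percentile(cost, 100 - p) / 1e6:.1f}M")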




     The probability and impact simulation results for this hypothetical project are displayed in Figure 2, indicating that for 10,000 simulations there is a 90 percent likelihood that the annual cost will exceed about $46 million and a 10 percent probability that the annual cost will exceed about $50 million, with a median (50 percent likelihood) expected annual cost of about $48 million. The values between 90 percent and 10 percent represent an 80 percent confidence interval, but any level of risk can be determined simply by examining the exceedance probability curve.




     When communicating with business leaders, the same information could be presented as in Figure 3. Because Excel calculates 10,000 simulations of this model in about 1 second, leaders could quickly receive answers to “what if” sensitivity analysis questions that change the risk simulation variable values such as labor and material costs, purchase versus lease, number of units produced or purchased, workforce size and payment schedules. Creating an initial risk simulation model from existing Monte Carlo modeling templates took about a week, but subsequently building the model used in this example took only about 1 hour. The simulation model is clearly a significant improvement over ALE and red-yellow-green risk communication. First, simulation considers thousands of possible outcomes, not just the average outcome. Second, simulation assesses the likelihood of each outcome. Third, risk analysis can then be communicated as quantified values rather than hunches or guesses.


Conclusions and Recommendations
     Business leaders facing uncertainty for significant investments in complex and expensive IT projects require more than simple risk heat maps to inform their decisions. Accurate and meaningful communication of risk requires a quantitative measurement of business impact. Risk simulation provides an inexpensive yet effective method for reducing uncertainty, by quantifying probability and impact for a possible future event, within a specified time period, over a range of values, with a specified confidence level. Communicating risk as, “90 percent likelihood that the annual cost will exceed about $46 million with a median (50 percent likelihood) annual cost of about $48 million” is far more useful to making a better-informed business decision than simply stating that increased project cost is “Very Low, Low, Moderate, High, or Very High.”
To begin transitioning from risk matrix to risk simulation for investment circumstances I recommend the following:
• Schedule FY 2018 and FY 2019 for discussion, publishing guidance and creating training opportunities. Then, beginning in FY 2020, provide that Monte Carlo risk simulation become mandatory for all IT investment decisions exceeding $1 million.
• Establish a library of basic simulation models and tutorials to facilitate rapid development for a variety of applications.