Cyber Risk and Management Research


Systemic Risk Task: The purpose is to research current trends in the theory and application of risk management at the systems level. Review a project or system where risk management has gone wrong. Review the consequences which have occurred at the system level and comment on the cascading effects. Use a system map to depict the interlinks, interactions or interconnectedness of the affected systems. Did the organisation or enterprise have the risk appetite for the consequences? Use the learnings from the second intensive, where applicable, to examine the way risk has been managed. Critique the techniques used in the organisation against those discussed in scholarly papers. Where could things have been improved? In this assignment, you are to demonstrate your awareness of systems thinking within the area of risk management. This exercise requires wider research and reading, such as The Australian newspaper (business section), Risk Management Magazine, or Harvard Business Review. It is not a straightforward assignment: it requires wider reading and research at a standard comparable to a master's degree.

Length and Presentation: Submit a report of at most 2,500 words.

PERSPECTIVE | doi:10.1038/nature12047

Globally networked risks and how to respond
Dirk Helbing1,2

Today's strongly connected, global networks have produced highly interdependent systems that we do not understand and cannot control well. These systems are vulnerable to failure at all scales, posing serious threats to society, even when external shocks are absent. As the complexity and interaction strengths in our networked world increase, man-made systems can become unstable, creating uncontrollable situations even when decision-makers are well-skilled, have all data and technology at their disposal, and do their best. To make these systems manageable, a fundamental redesign is needed. A 'Global Systems Science' might create the required knowledge and paradigm shift in thinking.
Globalization and technological revolutions are changing our planet. Today we have a worldwide exchange of people, goods, money, information, and ideas, which has produced many new opportunities, services and benefits for humanity. At the same time, however, the underlying networks have created pathways along which dangerous and damaging events can spread rapidly and globally. This has increased systemic risks1 (see Box 1). The related societal costs are huge. When analysing today's environmental, health and financial systems or our supply chains and information and communication systems, one finds that these systems have become vulnerable on a planetary scale. They are challenged by the disruptive influences of global warming, disease outbreaks, food (distribution) shortages, financial crashes, heavy solar storms, organized (cyber-)crime, or cyberwar. Our world is already facing some of the consequences: global problems such as fiscal and economic crises, global migration, and an explosive mix of incompatible interests and cultures, which come along with social unrest, international and civil wars, and global terrorism. In this Perspective, I argue that systemic failures and extreme events are consequences of the highly interconnected systems and networked risks humans have created. When networks are interdependent2,3, this makes them even more vulnerable to abrupt failures4–6. Such interdependencies in our "hyper-connected world"1 establish "hyper-risks" (see Fig. 1). For example, today's quick spreading of emergent epidemics is largely a result of global air traffic, and may have serious impacts on our global health, social and economic systems6–9.
I also argue that initially beneficial trends such as globalization, increasing network densities, sparse use of resources, higher complexity, and an acceleration of institutional decision processes may ultimately push our anthropogenic (man-made or human-influenced) systems10 towards systemic instability: a state in which things will inevitably get out of control sooner or later. Many disasters in anthropogenic systems should not be seen as 'bad luck', but as the results of inappropriate interactions and institutional settings. Even worse, they are often the consequences of a wrong understanding due to the counter-intuitive nature of the underlying system behaviour. Hence, conventional thinking can cause fateful decisions and the repetition of previous mistakes. This calls for a paradigm shift in thinking: systemic instabilities can be understood by a change in perspective from a component-oriented to an interaction- and network-oriented view. This also implies a fundamental change in the design and management of complex dynamical systems. The FuturICT community11 (see http://www.futurict.eu), which involves thousands of scientists worldwide, is now engaged in establishing a 'Global Systems Science', in order to understand better our information society with its close co-evolution of information and communication technology (ICT) and society. This effort is allied with the "Earth system science"10 that now provides the prevailing approach to studying the physics, chemistry and biology of our planet. Global Systems Science wants to make the theory of complex systems applicable to the solution of global-scale problems. It will take a massively data-driven approach that builds on a serious collaboration between the natural, engineering, and social sciences, aiming at a grand integration of knowledge. This approach to real-life techno-socio-economic-environmental systems8 is expected to enable new response strategies to a number of twenty-first-century challenges.
BOX 1 | Risk, systemic risk and hyper-risk

According to the standard ISO 31000 (2009; http://www.iso.org/iso/catalogue_detail?csnumber=43170), risk is defined as "effect of uncertainty on objectives". It is often quantified as the probability of occurrence of an (adverse) event, times its (negative) impact (damage), but it should be kept in mind that risks might also create positive impacts, such as opportunities for some stakeholders. Compared to this, systemic risk is the risk of having not just statistically independent failures, but interdependent, so-called 'cascading' failures in a network of N interconnected system components. That is, systemic risks result from connections between risks ('networked risks'). In such cases, a localized initial failure ('perturbation') could have disastrous effects and cause, in principle, unbounded damage as N goes to infinity. For example, a large-scale power blackout can hit millions of people. In economics, a systemic risk could mean the possible collapse of a market or of the whole financial system. The potential damage here is largely determined by the size N of the networked system. Even higher risks are implied by networks of networks4,5, that is, by the coupling of different kinds of systems. In fact, new vulnerabilities result from the increasing interdependencies between our energy, food and water systems, global supply chains, communication and financial systems, ecosystems and climate10. The World Economic Forum has described this situation as a hyper-connected world1, and we therefore refer to the associated risks as 'hyper-risks'.

1ETH Zurich, Clausiusstrasse 50, 8092 Zurich, Switzerland. 2Risk Center, ETH Zurich, Swiss Federal Institute of Technology, Scheuchzerstrasse 7, 8092 Zurich, Switzerland.

2 May 2013 | Vol 497 | Nature | 51. ©2013 Macmillan Publishers Limited.
All rights reserved.

Figure 1 | Risks Interconnection Map 2011 illustrating systemic interdependencies in the hyper-connected world we are living in. Reprinted from ref. 82 with permission of the WEF. [The map groups risks into economic, technological, environmental, geopolitical and societal categories, among them fiscal crises, asset price collapse, liquidity/credit crunch, extreme energy and commodity price volatility, global imbalances and currency volatility, regulatory failures, corruption, organized crime, illicit trade, fragile states, terrorism, geopolitical conflict, weapons of mass destruction, space security, global governance failures, economic disparity, demographic challenges, migration, infectious and chronic diseases, water and food security, climate change, biodiversity loss, flooding, storms and cyclones, earthquakes and volcanic eruptions, air pollution, ocean governance, online data and information security, critical information infrastructure breakdown, threats from new technologies, and infrastructure fragility, with symbols indicating higher perceived likelihood, impact and interconnection. Three nexuses are highlighted: the macro-economic imbalances nexus, the illegal economy nexus, and the water–food–energy nexus.]

What we know

Overview
Catastrophe theory12 suggests that disasters may result from discontinuous transitions in response to gradual changes in parameters. Such systemic shifts are expected to occur at certain 'tipping points' (that is, critical parameter values) and lead to different system properties.
The theory of critical phenomena13 has shown that, at such tipping points, power-law (or other heavily skewed) distributions of event sizes are typical. They relate to cascade effects4,5,14–20, which may have any size. Hence, "extreme events"21 can be a result of the inherent system dynamics rather than of unexpected external events. The theory of self-organized criticality22 furthermore shows that certain systems (such as piles of grains prone to avalanches) may be automatically driven towards a critical tipping point. Other work has studied the error and attack tolerance of networks23 and cascade effects in networks4,5,14–20,24, where local failures of nodes or links may trigger overloads and consequential failures of other nodes or links. Moreover, abrupt systemic failures may result from interdependencies between networks4–6 or other mechanisms25,26.

Surprising behaviour due to complexity
Current anthropogenic systems show an increase of structural, dynamic, functional and algorithmic complexity. This poses challenges for their design, operation, reliability and efficiency. Here I will focus on complex dynamical systems: those that cannot be understood by the sum of their components' properties, in contrast to loosely coupled systems. The following typical features result from the nonlinear interactions in complex systems27,28. (1) Rather than having one equilibrium solution, the system might show numerous different behaviours, depending on the respective initial conditions. (2) Complex dynamical systems may seem uncontrollable. In particular, opportunities for external or top-down control are very limited29. (3) Self-organization and strong correlations dominate the system behaviour. (4) The (emergent) properties of complex dynamical systems are often surprising and counter-intuitive30.
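The self-organized criticality discussed above, where slow driving pushes a system to a critical point at which event sizes become heavily skewed, can be made concrete with a minimal sketch of the classic Bak–Tang–Wiesenfeld sandpile. This is an illustrative stand-in rather than a model from this Perspective, and the grid size and grain count are arbitrary choices:

```python
import random

def sandpile(size=20, grains=4000, seed=0):
    """Toy Bak-Tang-Wiesenfeld sandpile: grains are dropped one at a time on
    a size x size grid; any cell holding 4 or more grains topples, sending
    one grain to each of its four neighbours (grains falling off the edge
    are lost). Returns the avalanche size (number of topplings) per grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        x, y = rng.randrange(size), rng.randrange(size)
        grid[x][y] += 1
        topples = 0
        unstable = [(x, y)] if grid[x][y] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i][j] < 4:
                continue  # already relaxed via an earlier toppling
            grid[i][j] -= 4
            topples += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < size and 0 <= nj < size:
                    grid[ni][nj] += 1
                    if grid[ni][nj] >= 4:
                        unstable.append((ni, nj))
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
```

No parameter is tuned to a critical point; the pile drives itself there. After a transient, most drops typically cause no toppling at all while a minority trigger much larger avalanches, which is the heavy-tailed signature of criticality.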
Furthermore, the combination of nonlinear interactions, network effects, delayed response and randomness may cause a sensitivity to small changes, unique path dependencies, and strong correlations, all of which are hard to understand, prepare for and manage. Each of these factors is already difficult to imagine, but this applies even more to their combination. For example, fundamental changes in the system outcome, such as non-cooperative behaviour rather than cooperation among agents, can result from seemingly small changes in the nature of the components or their mode of interaction (see Fig. 2). Such small changes may be interactions that take place on particular networks rather than on regular or random networks, interactions or components that are spatially varying rather than homogeneous, or which are subject to random 'noise' rather than behaving deterministically31,32.

Cascade effects due to strong interactions
Our society is entering a new era, the era of a global information society, characterized by increasing interdependency, interconnectivity and complexity, and a life in which the real and digital world can no longer be separated (see Box 2). However, as interactions between components become 'strong', the behaviour of system components may seriously alter or impair the functionality or operation of other components. Typical properties of strongly coupled systems in the above-defined sense are: (1) Dynamical changes tend to be fast, potentially outstripping the rate at which one can learn about the characteristic system behaviour, or at which humans can react. (2) One event can trigger further events, thereby creating amplification and cascade effects4,5,14–20, which implies a large vulnerability to perturbations, variations or random failures.
Cascade effects come along with highly correlated transitions of many system components or variables from a stable to an unstable state, thereby driving the system out of equilibrium. (3) Extreme events tend to occur more often than expected for normally distributed event sizes17,21. Probabilistic cascade effects in real-life systems are often hard to identify, understand and map. Rather than deterministic one-to-one relationships between 'causes' and 'effects', there are many possible paths of events (see Fig. 3), and effects may occur with obfuscating delays.

Systemic instabilities challenge our intuition
Why are attempts to control strongly coupled, complex systems so often unsuccessful? Systemic failures may occur even if everybody involved is highly skilled, highly motivated and behaving properly. I shall illustrate this with two examples.

Crowd disasters
Crowd disasters constitute an eye-opening example of the eventual failure of control in a complex system. Even if nobody wants to harm anybody else, people may be fatally injured. A detailed analysis reveals amplifying feedback effects that cause a systemic instability33,34. The interaction strength increases with the crowd density, as people come closer together. When the density becomes too high, inadvertent contact forces are transferred from one body to another and add up. The resulting forces vary significantly in direction and size, pushing people around and creating a phenomenon called 'crowd quake'. Turbulent waves cause people to stumble, and others fall over them in an often fatal domino effect. If people do not manage to get back on their feet quickly enough, they are likely to suffocate. In many cases, the instability is created not by foolish or malicious individual actions, but by the unavoidable amplification of small fluctuations above a critical density threshold. Consequently, crowd disasters cannot simply be evaded by policing aimed at imposing 'better behaviour'.
Some kinds of crowd control might even worsen the situation34.

Financial meltdown
Almost a decade ago, the investor Warren Buffett warned that massive trade in financial derivatives would create mega-catastrophic risks for the economy. In the same context, he spoke of an investment "time bomb" and of financial derivatives as "weapons of mass destruction" (see http://news.bbc.co.uk/2/hi/2817995.stm, accessed 1 June 2012). Five years later, the financial bubble imploded and destroyed trillions of dollars of stock value. During this time, the overall volume of credit default swaps and other financial derivatives had grown to several times the world gross domestic product. But what exactly caused the collapse? In response to the question by the Queen of England of why nobody had foreseen the financial crisis, the British Academy concluded: "Everyone seemed to be doing their own job properly on its own merit. And according to standard measures of success, they were often doing it well. The failure was to see how collectively this added up to a series of interconnected imbalances… Individual risks may rightly have been viewed as small, but the risk to the system as a whole was vast." (See http://www.britac.ac.uk/templates/asset-relay.cfm?frmAssetFileID=8285, accessed 1 June 2012.) For example, while risk diversification in a banking system is aimed at minimizing risks, it can create systemic risks when the network density becomes too high20.

Figure 2 | Spreading and erosion of cooperation in a prisoner's dilemma game. The computer simulations assume the payoff parameters T = 7, R = 6, P = 2 and S = 1 and include success-driven migration32. Although cooperation would be profitable to everyone, non-cooperators can achieve a higher payoff than cooperators, which may destabilize cooperation. The graph shows the fraction of cooperative agents, averaged over 100 simulations, as a function of the connection density (the actual number of network links divided by the maximum number of links when all nodes are connected to all others). Initially, an increasing link density enhances cooperation, but once it passes a certain threshold, cooperation erodes. (See http://vimeo.com/53876434 for a related movie.) The computer simulations are based on a circular network with 100 nodes, each connected with the four nearest neighbours; n links are added randomly. 50 nodes are occupied by agents. The inset shows a 'snapshot' of the system: blue circles represent cooperation, red circles non-cooperative behaviour, and black dots empty sites. Initially, all agents are non-cooperative. Their network locations and behaviours (cooperation or defection) are updated in a random sequential way in 4 steps: (1) The agent plays two-person prisoner's dilemma games with its direct neighbours in the network. (2) After the interaction, the agent moves with probability 0.5 up to 4 steps along existing links to the empty node that gives the highest payoff in a fictitious play step, assuming that no one changes behaviour. (3) The agent imitates the behaviour of the neighbour who got the highest payoff in step 1 (if higher than the agent's own payoff). (4) The behaviour is spontaneously changed with a mutation rate of 0.1.

Drivers of systemic instabilities
Table 1 lists common drivers of systemic instabilities32, and what makes the corresponding system behaviours difficult to understand. Current global trends promote several of these drivers. Although they often have desirable effects in the beginning, they may destabilize anthropogenic systems over time.
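The four update steps described in the Figure 2 caption can be sketched in code. What follows is a simplified, uncalibrated reimplementation based on my own reading of the caption; the number of shortcut links, the sweep order, the strictly-better migration criterion and the mutation rule (a flip with probability 0.1) are assumptions, so it illustrates the mechanism rather than reproducing the published curve.

```python
import random

# Prisoner's dilemma payoffs from the Figure 2 caption: T > R > P > S.
T, R, P, S = 7, 6, 2, 1

def payoff(me, other):
    """Row player's payoff; True = cooperate, False = defect."""
    if me:
        return R if other else S
    return T if other else P

def build_network(nodes, extra_links, rng):
    """Ring with four nearest neighbours plus random shortcut links."""
    nbrs = {i: set() for i in range(nodes)}
    for i in range(nodes):
        for d in (1, 2):
            nbrs[i].add((i + d) % nodes)
            nbrs[(i + d) % nodes].add(i)
    added = 0
    while added < extra_links:
        a, b = rng.randrange(nodes), rng.randrange(nodes)
        if a != b and b not in nbrs[a]:
            nbrs[a].add(b)
            nbrs[b].add(a)
            added += 1
    return nbrs

def score(node, behaviour, occ, nbrs):
    """Total payoff of playing `behaviour` against all occupied neighbours."""
    return sum(payoff(behaviour, occ[nb]) for nb in nbrs[node] if nb in occ)

def empty_nodes_within(node, occ, nbrs, radius=4):
    """Empty nodes reachable in at most `radius` steps along links."""
    seen, frontier, found = {node}, [node], []
    for _ in range(radius):
        nxt = []
        for cur in frontier:
            for nb in nbrs[cur]:
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
                    if nb not in occ:
                        found.append(nb)
        frontier = nxt
    return found

def run(steps=100, nodes=100, agents=50, extra_links=30, seed=42):
    rng = random.Random(seed)
    nbrs = build_network(nodes, extra_links, rng)
    # all agents start as defectors (False), as in the caption
    occ = {i: False for i in rng.sample(range(nodes), agents)}
    for _ in range(steps):
        for node in rng.sample(sorted(occ), len(occ)):
            if node not in occ:
                continue  # this site was vacated earlier in the sweep
            b = occ[node]
            # (2) success-driven migration with probability 0.5
            if rng.random() < 0.5:
                targets = empty_nodes_within(node, occ, nbrs)
                if targets:
                    best = max(targets, key=lambda t: score(t, b, occ, nbrs))
                    if score(best, b, occ, nbrs) > score(node, b, occ, nbrs):
                        del occ[node]
                        occ[best] = b
                        node = best
            # (3) imitate the most successful neighbour, if better off
            onbrs = [nb for nb in nbrs[node] if nb in occ]
            if onbrs:
                rich = max(onbrs, key=lambda nb: score(nb, occ[nb], occ, nbrs))
                if score(rich, occ[rich], occ, nbrs) > score(node, occ[node], occ, nbrs):
                    occ[node] = occ[rich]
            # (4) spontaneous behaviour change (mutation)
            if rng.random() < 0.1:
                occ[node] = not occ[node]
    return sum(occ.values()) / len(occ)

coop_fraction = run()
```

Varying `extra_links` plays the role of the connection density on the x-axis of Figure 2; sweeping it (and averaging over seeds) is the natural way to probe the rise-then-erosion of cooperation the caption describes.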
Such drivers are, for example: (1) increasing system sizes, (2) reduced redundancies due to attempts to save resources (implying a loss of safety margins), (3) denser networks (creating increasing interdependencies between critical parts of the network, see Figs 2 and 4), and (4) a high pace of innovation35 (producing uncertainties or 'unknown unknowns'). Could these developments create a "global time bomb"? (See Box 3.)

Knowledge gaps

Not well behaved
The combination of complex interactions with strong couplings can lead to surprising, potentially dangerous system behaviours17,30, which are barely understood. At present, most of the scientific understanding of large networks is restricted to cases of special, sparse, or static networks. However, dynamically changing, strongly coupled, highly interconnected and densely populated complex systems are fundamentally different36. The number of possible system behaviours and proper management strategies, when regular interaction networks are replaced by irregular ones, is overwhelming18. In other words, there is no standard solution for complex systems, and 'the devil is in the detail'.

BOX 2 | Global information and communication systems

One vulnerable system deserving particular attention is our global network of information and communication technologies (ICT)11. Although these technologies will be central to the solution of global challenges, they are also part of the problem and raise fundamental ethical issues, for example, how to ensure the self-determined use of personal data. New 'cyber-risks' arise from the fact that we are now enormously dependent on reliable information and communication systems.
This includes threats to individuals (such as privacy intrusion, identity theft or manipulation by personalized information), to companies (such as cybercrime), and to societies (such as cyberwar or totalitarian control). Our global ICT system is now the biggest artefact ever created, encompassing billions of diverse components (computers, smartphones, factories, vehicles and so on). The digital and real world cannot be divided any more; they form a single interwoven system. In this new "cybersocial world", digital information drives real events. The techno-socio-economic implications of all this are barely understood11. The extreme speed of these systems, their hyper-connectivity, large complexity, and the massive data volumes produced are often seen as problems. Moreover, the components increasingly make autonomous decisions. For example, supercomputers now perform the majority of financial transactions. The 'flash crash' of 6 May 2010 illustrates the unexpected systemic behaviour that can result (http://en.wikipedia.org/wiki/2010_Flash_Crash, accessed 29 July 2012): within minutes, nearly $1 trillion in market value disappeared before the financial markets recovered again. Such computer systems can be considered to be 'artificial social systems', as they learn from information about their environment, develop expectations about the future, and decide, interact and communicate autonomously. To design these systems properly, ensure a suitable response to human needs, and avoid problems such as co-ordination failures, breakdowns of cooperation, conflict, (cyber-)crime or (cyber-)war, we need a better, fundamental understanding of socially interactive systems.

Moreover, most existing theories do not provide much practical advice on how to respond to actual global risks, crises and disasters, and empirically based risk-mitigation strategies often remain qualitative37–42.
Most scientific studies make idealized assumptions such as homogeneous components; linear, weak or deterministic interactions; optimal and independent behaviours; or other favourable features that make systems well-behaved (smooth dependencies, convex sets, and so on). Real-life systems, in contrast, are characterized by heterogeneous components, irregular interaction networks, nonlinear interactions, probabilistic behaviours, interdependent decisions, and networks of networks. These differences can change the resulting system behaviour fundamentally, dramatically and in unpredictable ways. That is, real-world systems are often not well-behaved.

Behavioural rules may change
Many existing risk models also neglect the special features of social systems, for example, the importance of a feedback of the emergent macro-level dynamics on the micro-level behaviour of the system components or on specific information input (see Box 4). Now, a single video or tweet may cause deadly social unrest on the other side of the globe. Such changes of the microdynamics may also change the failure probabilities of system components. For example, consider a case in which interdependent system components may fail or not with certain probabilities, and where local damage increases the likelihood of further damage. As a consequence, the bigger a failure cascade, the higher the probability that it might grow larger. This establishes the possibility of global catastrophic risks (see Fig. 4), which cannot reasonably be insured against. The decreasing capacity of a socio-economic system to recover as a cascade failure progresses (thereby eliminating valuable resources needed for recovery) calls for a strong effort to stop cascades right at the beginning, when the damage is still small and the problem may not even be perceived as threatening. Ignoring this important point may cause costly and avoidable damage.

Figure 3 | Illustration of probabilistic cascade effects in systems with networked risks. [Legend: possible paths; realised paths.] The orange and blue paths show that the same cause can have different effects, depending on the respective random realization. The blue and red paths show that different causes can have the same effect. The understanding of cascade effects requires knowledge of at least the following three contributing factors: the interactions in the system, the context (such as institutional or boundary conditions), and in many cases, but not necessarily so, a triggering event (that is, randomness may determine the temporal evolution of the system). While the exact timing of the triggering event is often not predictable, the post-trigger dynamics might be foreseeable to a certain extent (in a probabilistic sense). When system components behave randomly, a cascade effect might start anywhere, but the likelihood of originating at a weak part of the system is higher (for example, traffic jams mostly start at known bottlenecks, but not always).

Fundamental and man-made uncertainty
Systems involving uncertainty, where the probability of particular events (for example, the occurrence of damage of a certain size) cannot be specified, are probably the least understood. Uncertainty may be a result of limitations of calibration procedures or lack of data. However, it may also have a fundamental origin. Let us assume a system of systems, in which the output variables of one system are input variables of another one. Let us further assume that the first system is composed of well-behaved components, whose variables are normally distributed around their equilibrium state. Connecting them strongly may nevertheless cause cascade effects and power-law-distributed output variables13.
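Numerically, the consequence of such heavy-tailed outputs is that sample statistics stop converging. A minimal sketch (inverse-transform sampling of a Pareto-type tail; the exponent 0.8 and the sample sizes are arbitrary illustrative choices) shows the sample mean of a distribution whose mean does not exist:

```python
import random

def pareto_samples(alpha, n, seed=0):
    """Draw n samples with tail P(X > x) = x**(-alpha) for x >= 1,
    via inverse-transform sampling: X = U**(-1/alpha), U uniform(0,1)."""
    rng = random.Random(seed)
    return [rng.random() ** (-1.0 / alpha) for _ in range(n)]

# alpha = 0.8: the cumulative distribution falls off more slowly than 1/x,
# so not even the mean exists; sample averages never settle down.
means = []
for n in (10**3, 10**4, 10**5, 10**6):
    xs = pareto_samples(0.8, n)
    means.append(sum(xs) / n)
print(means)  # the running averages fluctuate wildly rather than converging
```

With a fixed seed the successive averages are running means over ever-longer prefixes of one sequence: a single extreme draw can dominate the sum at any sample size, which is exactly why a system fed by such a variable cannot be characterized by its historical average.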
If the exponent of the related cumulative distribution function is between −2 and −1, the standard deviation is not defined, and if it is between −1 and 0, not even the mean value exists. Hence, the input variables of the second system could have any value, and the damage in the second system depends on the actual, unpredictable values of the input variables. Then, even if one had all the data in the world, it would be impossible to predict or control the outcome. Under such conditions it is not possible to protect the system from catastrophic failure. Such problems must, and can only, be solved by a proper (re)design of the system and suitable management principles, as discussed in the following.

Some design and operation principles

Managing complexity using self-organization
When systems reach a certain size or level of complexity, algorithmic constraints often prohibit efficient top-down management by real-time optimization. However, "guided self-organisation"32,43,44 is a promising alternative way of managing complex dynamical systems, in a decentralized, bottom-up way. The underlying idea is to use, rather than fight, the system-immanent tendency of complex systems to self-organize and thereby create a stable, ordered state. For this, it is important to have the
Table 1 | Drivers and examples of systemic instabilities. [Table flattened in extraction; its columns were: driver/factor; description/phenomenon; field/modelling approach; examples. The drivers listed are: threshold effects (unexpected transitions, systemic shifts); positive feedback (dynamic instability and amplification effects); wrong timing (over-reaction, growing oscillations, loss of synchronization51); domino and cascade effects (perturbations in one network affecting another); randomness in strongly coupled systems; complex structure; complex dynamics; complex function; complex control; optimization (orientation at a state of high performance, with loss of reserves and redundancies); competition (incompatible preferences or goals); and innovation (introduction of new system components, designs or properties; structural instability80). Modelling approaches mentioned include bifurcation73 and catastrophe theory12, explosive percolation25, dragon kings26, statistical physics and the theory of critical phenomena13, (linear) stability analysis and eigenvalue theory, network analysis, agent-based models, the bundle-fibre model24, the theory of interdependent networks4, nonlinear dynamics, chaos theory77, complexity theory28, operations research, cybernetics78, heuristics, and evolutionary models and genetic algorithms68. Examples range from revolutions (for example, the Arab Spring, the breakdown of the former GDR, now East Germany), self-organized criticality22, earthquakes74, stock market variations, evolutionary jumps, floods and sunspots, through the tragedy of the commons31 (tax evasion, over-fishing, exploitation of the environment, global warming, free-riding, misuse of social benefits), phantom traffic jams75, blackouts of electrical power grids76, financial crises and epidemic spreading8, coupled electricity and communication networks, the impact of natural disasters on critical infrastructures, and crowd turbulence33, to throughput and portfolio optimization, conflict72, financial derivatives, the capacity drop75 and slower-is-faster effect75, and systemic risks created by insurance against risks79.]

right kinds of interactions, adaptive feedback mechanisms, and institutional settings. By establishing proper 'rules of the game', within which the system components can self-organize, including mechanisms ensuring rule compliance, top-down and bottom-up principles can be combined and inefficient micro-management can be avoided. To overcome suboptimal solutions and systemic instabilities, the interaction rules or institutional settings may have to be modified. Symmetrical interactions, for example, can often promote a well-balanced situation and an evolution to the optimal system state32. Traffic light control is a good example to illustrate the ongoing paradigm shift in managing complexity. Classical control is based on the principle of a 'benevolent dictator': a traffic control centre collects information from the city and tries to impose an optimal traffic light control.
But because the optimization problem is too demanding for real-time optimization, the control scheme is adjusted for the typical traffic flows on a certain day and time. However, this control is not optimal for the actual situation owing to the large variability in the arrival rates of vehicles. Significantly smaller and more predictable travel times can be reached using a flexible "self-control" of traffic flows45. This is based on a suitable real-time response to a short-term anticipation of vehicle flows, thereby coordinating neighbouring intersections. Decentralized principles of managing complexity are also used in information and communication systems46, and they are becoming a trend in energy production ("smart grids"47). Similar self-control principles could be applied to logistic and production systems, or even to administrative processes and governance.

Coping with networked risks
To cope with hyper-risks, it is necessary to develop risk competence and to prepare and exercise contingency plans for all sorts of possible failure cascades4,5,14–20. The aim is to attain a resilient ('forgiving') system design and operation48,49. An important principle to remember is to have at least one backup system that runs in parallel to the primary system and ensures a safe fallback level. Note that a backup system should be operated and designed according to different principles, in order to avoid a failure of both systems for the same reasons. Diversity may not only increase systemic resilience (that is, the ability to absorb shocks or recover from them), it can also promote systemic adaptability and innovation43. Furthermore, diversity makes it less likely that all system components fail at the same time.
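The case for diversely designed backups can be illustrated with a toy Monte Carlo calculation. This is my own illustration, not a model from the paper, and all the probabilities are arbitrary: two identically designed systems share common-mode failures (the same bug or shock takes out both), so the chance of losing primary and backup together is dominated by the shared failure mode, whereas a backup built on different principles fails together with the primary far more rarely.

```python
import random

def outage_prob(common_mode, p=0.05, trials=100_000, seed=0):
    """Estimated probability that primary AND backup are both down.

    `p` is each system's independent failure probability; `common_mode`
    is the probability of a shared-cause event that takes out both at once.
    """
    rng = random.Random(seed)
    both = 0
    for _ in range(trials):
        shock = rng.random() < common_mode        # hits both designs at once
        primary_down = shock or rng.random() < p
        backup_down = shock or rng.random() < p
        if primary_down and backup_down:
            both += 1
    return both / trials

same_design = outage_prob(common_mode=0.04)    # backup shares blind spots
diverse = outage_prob(common_mode=0.001)       # backup on different principles
```

Analytically, the joint-outage probability is roughly common_mode + p², so shrinking the shared failure mode, rather than improving either system individually, is what buys the extra order of magnitude of reliability.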
Consequently, early failures of weak system components (critical fluctuations) will create early warning signals of an impending systemic instability50. An additional principle of reducing hyper-risks is the limitation of system size, to establish upper bounds to the possible scale of disaster. Such a limitation might also be established in a dynamical way, if real-time feedback allows one to isolate affected parts of the system before others are damaged by cascade effects. If a sufficiently rapid dynamic decoupling cannot be ensured, one can build weak components (breaking points) into the system, preferably in places where damage would be comparatively small. For example, fuses in electrical circuits serve to avoid large-scale damage from local overloads. Similarly, engineers have learned to build crush zones into cars to protect humans during accidents. A further principle would be to incorporate mechanisms producing a manageable state. For example, if the system dynamics unfolds so rapidly that there is a danger of losing control, one could slow it down by introducing frictional effects (such as a financial transaction fee that kicks in when financial markets drop). Also note that dynamical processes in a system can desynchronize51 if the control variables change too quickly relative to the timescale on which the governed components can adjust. For example, stable hierarchical systems typically change slowly at the top and much more quickly on the lower levels. If the influence of the top on the bottom levels becomes

2 MAY 2013 | VOL 497 | NATURE | 55 ©2013 Macmillan Publishers Limited. All rights reserved

BOX 3 | Have humans created a 'global time bomb'?

[Figure 4 residue: axis values for total damage (%) against connection density (%).]

Figure 4 | Cascade spreading is increasingly hard to recover from as failure progresses.
The simulation model mimics spatial epidemic spreading with air traffic and healing costs in a two-dimensional 50 × 50 grid with periodic boundary conditions and random shortcut links. The colourful inset depicts an early snapshot of the simulation with N = 2,500 nodes. Red nodes are infected, green nodes are healthy. Shortcut links are shown in blue. The connectivity-dependent graph shows the mean value and standard deviation of the fraction i(t)/N of infected nodes over 50 simulation runs. Most nodes have four direct neighbours, but a few of them possess an additional directed random connection to a distant node. The spontaneous infection rate is s = 0.001 per time step; the infection rate by an infected neighbouring node is P = 0.08. Newly infected nodes may infect others or may recover from the next time step onwards. Recovery occurs with a rate q = 0.4, if there is enough budget b > c to bear the healing costs c = 80. The budget needed for recovery is created by the number of healthy nodes h(t). Hence, if r(t) nodes are recovering at time t, the budget changes according to b(t + 1) = b(t) + h(t) − c·r(t). As soon as the budget is used up, the infection spreads explosively. (See also the movie at http://vimeo.com/53872893.)

For a long time, crowd disasters and financial crashes seemed to be puzzling, unrelated, 'God-given' phenomena one simply had to live with. However, it is possible to grasp the mechanisms that cause complex systems to get out of control. Amplification effects can result and promote failure cascades, when the interactions of system components become stronger than the frictional effects or when the damaging impact of impaired system components on other components occurs faster than the recovery to their normal state. For certain kinds of interaction networks, the similarity of related cascade effects with those of chain reactions in nuclear fission is disturbing (see Box 3 Figure). It is known that such processes are difficult to control.
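The cascade model described in the Figure 4 caption can be sketched in a few lines of Python (an illustrative re-implementation, not the author's code; parameter names follow the caption, infection by multiple in-neighbours is combined independently, and the budget is updated once per time step as a simplification):

```python
import random

def simulate_cascade(n=50, steps=100, s=0.001, p=0.08, q=0.4,
                     cost=80, n_shortcuts=100, seed=0):
    """Toy version of the Box 3 cascade model: epidemic spreading on an
    n x n grid (periodic boundaries) with directed long-range shortcuts
    and a shared recovery budget fed by healthy nodes."""
    rng = random.Random(seed)
    N = n * n
    infected = [False] * N

    def grid_neighbours(i):
        r, c = divmod(i, n)
        return (((r - 1) % n) * n + c, ((r + 1) % n) * n + c,
                r * n + (c - 1) % n, r * n + (c + 1) % n)

    # directed shortcuts, stored as incoming links per target node
    incoming = [[] for _ in range(N)]
    for _ in range(n_shortcuts):
        incoming[rng.randrange(N)].append(rng.randrange(N))

    budget = 0.0
    fractions = []                       # i(t)/N over time
    for _ in range(steps):
        healthy = N - sum(infected)
        new_state = infected[:]
        recovering = 0
        for i in range(N):
            if infected[i]:
                # recovery at rate q, but only while the budget can
                # bear the healing cost c
                if budget > cost and rng.random() < q:
                    new_state[i] = False
                    recovering += 1
            else:
                if rng.random() < s:     # spontaneous infection
                    new_state[i] = True
                else:
                    k = sum(infected[j] for j in grid_neighbours(i))
                    k += sum(infected[j] for j in incoming[i])
                    # each infected in-neighbour transmits with prob. p
                    if k and rng.random() < 1 - (1 - p) ** k:
                        new_state[i] = True
        # b(t+1) = b(t) + h(t) - c * r(t)
        budget += healthy - cost * recovering
        infected = new_state
        fractions.append(sum(infected) / N)
    return fractions
```

Sweeping `n_shortcuts` (the connection density) reproduces the qualitative behaviour of Figure 4: once the recovery budget is exhausted, the infected fraction grows explosively.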
Catastrophic damage is a realistic scenario. Given the similarity of the cascading mechanisms, is it possible that our worldwide anthropogenic system will get out of control sooner or later? In other words, have humans unintentionally created something like a 'global time bomb'? If so, what kinds of global catastrophic scenarios might humans in complex societies81 face? A collapse of the global information and communication systems or of the world economy? Global pandemics6–9? Unsustainable growth, demographic or environmental change? A global food or energy crisis? The large-scale spreading of toxic substances? A cultural clash83? Another global-scale conflict84,85? Or, more likely, a combination of several of these contagious phenomena (the 'perfect storm'1)? When analysing such global risks, one should bear in mind that the speed of destructive cascade effects might be slow, and the process may not look like an explosion. Nevertheless, the process can be hard to stop. For example, the dynamics underlying crowd disasters is slow, but deadly.

too strong, this may impair the functionality and self-organization of the hierarchical structure32. Last but not least, reducing connectivity may serve to decrease the coupling strength in the system. This implies a change from a dense to a sparser network, which can reduce contagious spreading effects. In fact, sparse networks seem to be characteristic of ecological systems52. As logical as the above safety principles may sound, these precautions have often been neglected in the design and operation of strongly coupled, complex systems such as the world financial system20,53,54.

What is ahead

Despite all our knowledge, much work is still ahead of us. For example, the current financial crisis shows that much of our theoretical knowledge has not yet found its way into real-world policies, as it should.
Economic crises

Two main pillars of mainstream economics are the equilibrium paradigm and the representative agent approach. According to the equilibrium paradigm, economies are viewed as systems that tend to evolve towards an equilibrium state. Bubbles and crashes should not happen and, hence, would not require any precautions54. Sudden changes would be caused exclusively by external shocks. However, it does not seem to be widely recognized that interactions between system elements can cause amplifying cascade effects even if all components relax to their equilibrium state55,56. Representative agent models, which assume that companies act in the way a representative (average) individual would optimally decide, are more general and allow one to describe dynamical processes. However, such models cannot capture processes well if random events, the diversity of system components, the history of the system or correlations between variables matter a lot. It can even happen that representative agent models make predictions opposite to those of agent-based computer simulations assuming the very same interaction rules32 (see Fig. 2).

Box 3 Figure | Illustration of the principle of a 'time bomb'. A single, local perturbation of a node may cause large-scale damage through a cascade effect, similar to chain reactions in nuclear fission. [Legend labels: possible paths; realised paths.]

Paradigm shift ahead

Both equilibrium and representative agent models are fundamentally incompatible with probabilistic cascade effects—they are different classes of models. Cascade effects cause a system to leave its previous (equilibrium) state, and there is also no representative dynamics, because different possible paths of events may look very different (see Fig. 3). Considering furthermore that the spread of innovations and products also involves cascade effects57,58, it seems that cascade effects are even the rule rather than the exception in today's economy. This calls for a new economic thinking.
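The divergence between representative-agent and individual-level descriptions can be illustrated with a generic toy model (a hypothetical example of mean-field versus stochastic dynamics, not the game-theoretic case of ref. 32): in a stochastic SIS epidemic on a finite, well-mixed population, the mean-field equation predicts a stable endemic state whenever beta/gamma > 1, so extinction 'should not happen', yet the individual-level simulation regularly goes extinct by chance.

```python
import random

def sis_extinction_fraction(n_pop=50, i0=5, beta=0.6, gamma=0.5,
                            steps=400, runs=100, seed=0):
    """Fraction of stochastic SIS runs that go extinct, although the
    mean-field (representative-agent-style) model predicts a stable
    endemic fraction 1 - gamma/beta > 0 and never extinction."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        infected = i0
        for _ in range(steps):
            if infected == 0:
                break
            # each susceptible becomes infected w.p. beta * I / N
            p_inf = beta * infected / n_pop
            new_inf = sum(rng.random() < p_inf
                          for _ in range(n_pop - infected))
            # each infected individual recovers w.p. gamma
            recovered = sum(rng.random() < gamma
                            for _ in range(infected))
            infected += new_inf - recovered
        if infected == 0:
            extinct += 1
    return extinct / runs
```

With beta/gamma = 1.2 the mean-field endemic fraction is about 17%, yet a substantial share of the stochastic runs dies out: the same interaction rules, aggregated differently, yield qualitatively opposite predictions.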
BOX 4 | Social factors and social capital

Many twenty-first-century challenges have a social component and cannot be solved by technology alone86. Socially interactive systems, be it social or economic systems, artificial societies, or the hybrid system made up of our virtual and real worlds, are characterized by a number of special features, which imply additional risks: The components (for example, individuals) take autonomous decisions based on (uncertain) future expectations. They produce and respond to complex and often ambiguous information. They have cognitive complexity. They have individual learning histories and therefore different, subjective views of reality. Individual preferences and intentions are diverse, and imply conflicts of interest. The behaviour may depend on the context in a sensitive way. For example, the way people behave and interact may change in response to the emergent social dynamics on the macro scale. This also implies the ability to innovate, which may create surprising outcomes and 'unknown unknowns' through new kinds of interactions. Furthermore, social network interactions can create social capital43,87 such as trust, solidarity, reliability, happiness, social values, norms and culture. To assess systemic risks fully, a better understanding of social capital is crucial. Social capital is important for economic value generation, social well-being, and societal resilience, but it may be damaged or exploited, like our environment. Therefore, humans need to learn how to quantify and protect social capital36. A warning example is the loss of trillions of dollars in the stock markets during the financial crisis, which was largely caused by a loss of trust. It is important to stress that risk insurances today do not consider damage to social capital.
However, it is known that large-scale disasters have a disproportionate public impact, which is related to the fact that they destroy social capital. By neglecting social capital in risk assessment, we are taking higher risks than we would rationally do.

BOX 5

State-of-the-art risk analysis88 still seems to have a number of shortcomings. (1) Estimates for the probability distribution and parameters describing rare events, including the variability of such parameters over time, are often poor. (2) The likelihood of coincidences of multiple unfortunate, rare events is often underestimated (but there is a huge number of possible coincidences). (3) Classical fault tree and event tree analyses37 (see also http://en.wikipedia.org/wiki/Fault_tree_analysis and http://en.wikipedia.org/wiki/Event_tree, both accessed 18 November 2012) do not sufficiently consider feedback loops. (4) The combination of probabilistic failure analysis with complex dynamics is still uncommon, even though it is important to understand amplification effects and systemic instabilities. (5) The relevance of human factors, such as negligence, irresponsible or irrational behaviour, greed, fear, revenge, perception bias, or human error is often underestimated30,41. (6) Social factors, including the value of social capital, are typically not considered. (7) Common assumptions underlying established ways of thinking are not questioned enough, and attempts to identify uncertainties or 'unknown unknowns' are often insufficient. Some of the worst disasters have happened because of a failure to imagine that they were possible42, and thus to guard against them. (8) Economic, political and personal incentives are not sufficiently analysed as drivers of risks. Many risks can be revealed by looking for stakeholders who could potentially profit from risk-taking, negligence or crises.
Risk-seeking strategies that attempt to create new opportunities via systemic change are expected mainly under conditions of uncertainty, because these tend to be characterized by controversial debates and, therefore, under-regulation. To reach better risk assessment and risk reduction we need transparency, accountability, responsibility and awareness of individual and institutional decision-makers11,36. Modern governance sometimes dilutes responsibility so much that nobody can be held responsible anymore, and catastrophic risks may be a consequence. The financial crisis seems to be a good example. Part of the problem appears to be that credit default swaps and other financial derivatives are modern financial insurance instruments, which transfer risks from the individuals or institutions causing them to others, thereby encouraging excessive risk taking. It might therefore be necessary to establish a principle of collective responsibility, by which individuals or institutions share responsibility for incurred damage in proportion to their previous (and subsequent) gains.

Many currently applied theories are based on the assumption that statistically independent, optimal decisions are made. Under such idealized conditions one can show that financial markets are efficient, that herding effects will not occur, and that unregulated, self-regarding behaviour can maximize system performance, benefiting everyone. Some of these paradigms are centuries old yet still applied by policy-makers. However, such concepts must be questioned in a world where economic decisions are strongly coupled and cascade effects are frequent54,59.

Global Systems Science

For a long time, humans have considered systemic failures to originate from 'outside the system', because it has been difficult to understand how they could come about otherwise. However, many disasters in anthropogenic systems result from a wrong way of thinking and, consequently, from inappropriate organization and systems design.
For example, we often apply theories for well-behaved systems to systems that are not well behaved. Given that many twenty-first-century problems involve socioeconomic challenges, we need to develop a science of economic systems that is consistent with our knowledge of complex systems. A massive interdisciplinary research effort is indispensable to accelerate science and innovation so that our understanding and capabilities can keep up with the pace at which our world is changing ('innovation acceleration'11). In the following, I use the term Global Systems Science to emphasize that integrating knowledge from the natural, engineering and social sciences and applying it to real-life systems is a major challenge that goes beyond any currently existing discipline. There are still many unsolved problems regarding the interplay between structure, dynamics and functional properties of complex systems. A good overview of global interdependencies between different kinds of networks is lacking as well. The establishment of a Global Systems Science should fill these knowledge gaps, particularly regarding the role of human and social factors.

Beyond current risk analysis

Progress must be made in computational social science60, for example by performing agent-based computer simulations32,61–63 of learning agents with cognitive abilities and evolving properties. We also require the close integration of theoretical and computational with empirical and experimental efforts, including interactive multi-player serious games64,65, laboratory and web experiments, and the mining of large-scale activity data11. We furthermore lack good methods of calculating networked risks. Modern financial derivatives package many risks together. If the correlations between the components' risks are stable in time, copula methodology66 offers a reasonable modelling framework. However, the correlations strongly depend on the state of the global financial system67.
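To make the copula idea above concrete, here is a minimal sketch (illustrative only; the default probability and correlation are invented parameters, not values from the paper) of how a Gaussian copula couples two default risks:

```python
import random
from statistics import NormalDist

def joint_default_prob(rho, p_default=0.05, trials=100_000, seed=0):
    """Monte-Carlo estimate of the probability that two assets default
    together when their latent risk factors are coupled by a Gaussian
    copula with correlation rho."""
    rng = random.Random(seed)
    Phi = NormalDist().cdf
    both = 0
    for _ in range(trials):
        z1 = rng.gauss(0.0, 1.0)
        # correlated second factor (Cholesky construction)
        z2 = rho * z1 + (1.0 - rho * rho) ** 0.5 * rng.gauss(0.0, 1.0)
        # asset i defaults if its uniform marginal Phi(z_i) < p_default
        if Phi(z1) < p_default and Phi(z2) < p_default:
            both += 1
    return both / trials
```

With independent risks (rho = 0) the joint default probability is roughly p_default squared; as rho grows, defaults cluster. This is exactly why correlations that shift with the state of the financial system undermine models calibrated under the assumption of stable correlations.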
Therefore, we still need to learn how to realistically calculate the interdependence and propagation of risks in a network, how to absorb them, and how to calibrate the models (see Box 5). This requires the integration of probability calculus, network theory and complexity science with large-scale data mining. Making progress towards a better understanding of complex systems and systemic risks also depends crucially on the collection of 'big data' (massive amounts of data) and the development of powerful machine learning techniques that allow one to develop and validate realistic explanatory models of interdependent systems. The increasing availability of detailed activity data and of cheap, ubiquitous sensing technologies will enable previously unimaginable breakthroughs. Finally, given that it can be dangerous to introduce new kinds of components, interactions or interdependencies into our global systems, a science of integrative systems design is needed. It will have to elaborate suitable interaction rules and system architectures that ensure not only that system components work well, but also that systemic interactions and outcomes are favourable. A particular challenge is to design value-sensitive information systems and financial exchange systems that promote awareness and responsible action11. How could we create open information platforms that minimize misuse? How could we avoid privacy intrusion and the manipulation of individuals? How could we enable greater participation of citizens in social, economic and political affairs? Finding tailored design and operation principles for complex, strongly coupled systems is challenging. However, inspiration can be drawn from ecological52, immunological68, and social systems32.
Understanding the principles that make socially interactive systems work well (or not) will facilitate the invention of a whole range of socio-inspired design and operation principles11. This includes reputation, trust, social norms, culture, social capital and collective intelligence, all of which could help to counter cybercrime and to design a trustable future Internet.

New exploration instruments

To promote Global Systems Science with its strong focus on interactions and global interdependencies, the FuturICT initiative proposes to build new, open exploration instruments ('socioscopes'), analogous to the telescopes developed earlier to explore new continents and the universe. One such instrument, called the 'Planetary Nervous System'11, would process data reflecting the state and dynamics of our global techno-socio-economic-environmental system. Internet data combined with data collected by sensor networks could be used to measure the state of our world in real time69. Such measurements should reflect not only physical and environmental conditions, but also quantify the 'social footprint'11, that is, the impact of human decisions and actions on our socio-economic system. For example, it would be desirable to develop better indices of social well-being than the gross domestic product per capita, ones that consider environmental factors, health and human and social capital (see Box 4 and http://www.stiglitz-sen-fitoussi.fr and http://www.worldchanging.com/archives/010627.html). The Planetary Nervous System would also increase collective awareness of possible problems and opportunities, and thereby help us to avoid mistakes. The data generated by the Planetary Nervous System could be used to feed a 'Living Earth Simulator'11, which would simulate simplified, but sufficiently realistic models of relevant aspects of our world.
Similar to weather forecasts, an increasingly accurate picture of our world and its possible evolutions would be obtained over time as we learn to model anthropogenic systems and human responses to information. Such 'policy wind tunnels' would help to analyse what-if scenarios, and to identify strategic options and their possible implications. This would provide a new tool with which political decision-makers, business leaders, and citizens could gain a better, multi-perspective picture of difficult matters. Finally, a 'Global Participatory Platform'11 would make these new instruments accessible to everybody and create an open 'information ecosystem', which would include an interactive platform for crowdsourcing and cooperative applications. The activity data generated there would also allow one to determine statistical laws of human decision-making and collective action64. Furthermore, it would be conceivable to create interactive virtual worlds65 in order to explore possible futures (such as alternative designs of urban areas, financial architectures and decision procedures).

Discussion

I have described how system components, even if their behaviour is harmless and predictable when separated, can create unpredictable and uncontrollable systemic risks when tightly coupled together. Hence, an improper design or management of our global anthropogenic system creates possibilities of catastrophic failures. Today, many necessary safety precautions to protect ourselves from human-made disasters are not taken owing to insufficient theoretical understanding and, consequently, wrong policy decisions. It is dangerous to believe that crises and disasters in anthropogenic systems are 'natural', or accidents resulting from external disruptions. Another misconception is that our complex systems could be well controlled or that our socio-economic system would automatically fix itself. Such ways of thinking impose huge risks on society.
However, owing to the systemic nature of man-made disasters, it is hard to blame anybody for the damage. Therefore, classical self-adjustment and feedback mechanisms will not ensure responsible action to avert possible disasters. It also seems that present law cannot handle situations well when the problem does not lie in the behaviour of individuals or companies, but in the interdependencies between them. The increasing availability of 'big data' has raised the expectation that we could make the world more predictable and controllable. Indeed, real-time management may overcome instabilities caused by delayed feedback or lack of information. However, there are important limitations: too much data can make it difficult to separate reliable from ambiguous or incorrect information, leading to misinformed decision-making. Hence too much information may create a more opaque rather than a more transparent picture. If a country had all the computer power in the world and all the data, would this allow a government to make the best decisions for everybody? Not necessarily. The principle of a caring state (or benevolent dictator) would not work, because the world is too complex to be optimized top-down in real time. Decentralized coordination with affected (neighbouring) system components can achieve better results, adapted to local needs45. This means that a participatory approach, making use of local resources, can be more successful. Such an approach is also more resilient to perturbations. For today's anthropogenic system, predictions seem possible only over short time periods and in a probabilistic sense. Having all the data in the world would not allow one to forecast the future. Nevertheless, one can determine under what conditions systems are prone to cascades or not. Moreover, weak system components can be used to produce early warning signals. If safety precautions are lacking, however, spontaneous cascades might be unstoppable and become catastrophic.
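The early-warning idea mentioned above (and discussed in ref. 50, cited earlier) can be illustrated with a hypothetical toy model, not taken from the paper: as a system approaches an instability, its recovery from small shocks slows down, so the lag-1 autocorrelation of its fluctuations rises and can serve as a warning indicator.

```python
import random

def ar1_series(a, steps=2000, seed=0):
    """Linearized fluctuations x(t+1) = a*x(t) + noise around a stable
    state; a close to 1 mimics a system near a tipping point, where
    recovery from perturbations becomes slow (critical slowing down)."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(steps):
        x = a * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def lag1_autocorrelation(xs):
    """Sample lag-1 autocorrelation, a standard early-warning indicator."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[t] - mean) * (xs[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den
```

Comparing `lag1_autocorrelation(ar1_series(0.95))` with `lag1_autocorrelation(ar1_series(0.3))` shows the indicator rising as the system approaches instability; monitoring such indicators in deliberately weak components is one way to operationalize the early-warning principle.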
In other words, predictability and controllability are a matter of proper systems design and operation. It will be a twenty-first-century challenge to learn how to turn this into practical solutions and how to use the positive sides of cascade effects. For example, cascades can produce a large-scale coordination of traffic lights45 and vehicle flows70, or promote the spreading of information and innovations57,58, of happiness71, social norms72, and cooperation31,32,59. Taming cascade effects could even help to mobilize the collective effort needed to address the challenges of the century ahead.

Received 31 August 2012; accepted 26 February 2013.

1. World Economic Forum. Global Risks 2012 and 2013 (WEF, 2012 and 2013); http://www.weforum.org/issues/global-risks. 2. Rinaldi, S. M., Peerenboom, J. P. & Kelly, T. K. Critical infrastructure interdependencies. IEEE Control Syst. 21, 11–25 (2001). 3. Rosato, V. et al. Modelling interdependent infrastructures using interacting dynamical models. Int. J. Critical Infrastruct. 4, 63–79 (2008). 4. Buldyrev, S. V., Parshani, R., Paul, G., Stanley, H. E. & Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 464, 1025–1028 (2010). 5. Gao, J., Buldyrev, S. V., Havlin, S. & Stanley, H. E. Robustness of networks of networks. Phys. Rev. Lett. 107, 195701 (2011). 6. Vespignani, A. The fragility of interdependency. Nature 464, 984–985 (2010). 7. Brockmann, D., Hufnagel, L. & Geisel, T. The scaling laws of human travel. Nature 439, 462–465 (2006). 8. Vespignani, A. Predicting the behavior of techno-social systems. Science 325, 425–428 (2009). 9. Epstein, J. M. Modelling to contain pandemics. Nature 460, 687 (2009). 10. Crutzen, P. & Stoermer, E. The anthropocene. Global Change Newsl. 41, 17–18 (2000). 11. Helbing, D. & Carbone, A.
(eds) Participatory science and computing for our complex world. Eur. Phys. J. Spec. Top. 214, (special issue) 1–666 (2012). 12. Zeeman, E. C. (ed.) Catastrophe Theory (Addison-Wesley, 1977). 13. Stanley, H. E. Introduction to Phase Transitions and Critical Phenomena (Oxford Univ. Press, 1987). 14. Watts, D. J. A simple model of global cascades on random networks. Proc. Natl Acad. Sci. USA 99, 5766–5771 (2002). 15. Motter, A. E. Cascade control and defense in complex networks. Phys. Rev. Lett. 93, 098701 (2004). 16. Simonsen, I., Buzna, L., Peters, K., Bornholdt, S. & Helbing, D. Transient dynamics increasing network vulnerability to cascading failures. Phys. Rev. Lett. 100, 218701 (2008). 17. Little, R. G. Controlling cascading failure: understanding the vulnerabilities of interconnected infrastructures. J. Urban Technol. 9, 109–123 (2002). This is an excellent analysis of the role of interconnectivity in catastrophic failures. 18. Buzna, L., Peters, K., Ammoser, H., Kühnert, C. & Helbing, D. Efficient response to cascading disaster spreading. Phys. Rev. E 75, 056107 (2007). 19. Lorenz, J., Battiston, S. & Schweitzer, F. Systemic risk in a unifying framework for cascading processes on networks. Eur. Phys. J. B 71, 441–460 (2009). This paper gives a good overview of different classes of cascade effects with a unifying theoretical framework. 20. Battiston, S., Delli Gatti, D., Gallegati, M., Greenwald, B. & Stiglitz, J. E. Default cascades: when does risk diversification increase stability? J. Financ. Stab. 8, 138–149 (2012). 21. Albeverio, S., Jentsch, V. & Kantz, H. (eds) Extreme Events in Nature and Society (Springer, 2010). 22. Bak, P., Tang, C. & Wiesenfeld, K. Self-organized criticality: an explanation of the 1/f noise. Phys. Rev. Lett. 59, 381–384 (1987). 23. Albert, R., Jeong, H. & Barabasi, A. L. Error and attack tolerance of complex networks. Nature 406, 378–382 (2000). 24. Kun, F., Carmona, H. A., Andrade, J. S. Jr & Herrmann, H. J. 
Universality behind Basquin’s law of fatigue. Phys. Rev. Lett. 100, 094301 (2008). 25. Achlioptas, D., D’Souza, R. M. & Spencer, J. Explosive percolation in random networks. Science 323, 1453–1455 (2009). 26. Sornette, D. & Ouillon, G. Dragon-kings: mechanisms, statistical methods and empirical evidence. Eur. Phys. J. Spec. Top. 205, 1–26 (2012). 27. Nicolis, G. Introduction to Nonlinear Science (Cambridge Univ. Press, 1995). 28. Strogatz, S. H. Nonlinear Dynamics and Chaos (Perseus, 1994). 29. Liu, Y. Y., Slotine, J. J. & Barabasi, A. L. Controllability of complex networks. Nature 473, 167–173 (2011). 30. Dörner, D. The Logic of Failure (Metropolitan, 1996). This book is a good demonstration that we tend to make wrong decisions when trying to manage complex systems. 31. Nowak, M. A. Evolutionary Dynamics (Belknap, 2006). 32. Helbing, D. Social Self-Organization (Springer, 2012). This book offers an integrative approach to agent-based modelling of emergent social phenomena, systemic risks in social and economic systems, and how to manage complexity. 33. Johansson, A., Helbing, D., Al-Abideen, H. Z. & Al-Bosta, S. From crowd dynamics to crowd safety: a video-based analysis. Adv. Complex Syst. 11, 497–527 (2008). 34. Helbing, D. & Mukerji, P. Crowd disasters as systemic failures: analysis of the Love Parade disaster. Eur. Phys. J. Data Sci. 1, 7 (2012). 35. Bettencourt, L. M. A. et al. Growth, innovation, scaling and the pace of life in cities. Proc. Natl Acad. Sci. USA 104, 7301–7306 (2007). 36. Ball, P. Why Society is a Complex Matter (Springer, 2012). 37. Aven, T. & Vinnem, J. E. (eds) Risk, Reliability and Societal Safety Vols 1–3 (Taylor and Francis, 2007). This compendium is a comprehensive source of information about risk, reliability, safety and resilience. 38. Rodriguez, H., Quarantelli, E. L. & Dynes, R. R. (eds) Handbook of Disaster Research (Springer, 2007). 39. Cox, L. A. Jr. Risk Analysis of Complex and Uncertain Systems (Springer, 2009). 40. Perrow, C. 
Normal Accidents. Living with High-Risk Technologies (Princeton Univ. Press, 1999). This eye-opening book shows how catastrophes result from couplings and complexity. 41. Peters, G. A. & Peters, B. J. Human Error. Causes and Control (Taylor and Francis, 2006). This book is a good summary of why, how and when people make mistakes. 42. Clarke, L. Worst Cases (Univ. Chicago, 2006). 43. Axelrod, R. & Cohen, M. D. Harnessing Complexity (Basic Books, 2000). This book offers a good introduction to complex social systems and bottom-up management. 44. Tumer, K. & Wolpert, D. H. Collectives and the Design of Complex Systems (Springer, 2004). 45. Lämmer, S. & Helbing, D. Self-control of traffic lights and vehicle flows in urban road networks. J. Stat. Mech. P04019 (2008). 46. Perkins, C. E. & Royer, E. M. Ad-hoc on-demand distance vector routing. In Second IEEE Workshop on Mobile Computing Systems and Applications 90–100 (WMCSA Proceedings, 1999). 47. Amin, M. M. & Wollenberg, B. F. Toward a smart grid: power delivery for the 21st century. IEEE Power Energy Mag. 3, 34–41 (2005). 48. Schneider, C. M., Moreira, A. A., Andrade, J. S. Jr, Havlin, S. & Herrmann, H. J. Mitigation of malicious attacks on networks. Proc. Natl Acad. Sci. USA 108, 3838–3841 (2011). 49. Comfort, L. K., Boin, A. & Demchak, C. C. (eds) Designing Resilience. Preparing for Extreme Events (Univ. Pittsburgh, 2010). 50. Scheffer, M. et al. Early-warning signals for critical transitions. Nature 461, 53–59 (2009). 51. Pikovsky, A., Rosenblum, M. & Kurths, J. Synchronization (Cambridge Univ. Press, 2003). 52. Haldane, A. G. & May, R. M. Systemic risk in banking ecosystems. Nature 469, 351–355 (2011). 53. Battiston, S., Puliga, M., Kaushik, R., Tasca, P. & Caldarelli, G. DebtRank: too connected to fail? Financial networks, the FED and systemic risks. Sci. Rep. 2, 541 (2012). 54. Stiglitz, J. E. Freefall: America, Free Markets, and the Sinking of the World Economy (Norton & Company, 2010). 55. Sterman, J.
Business Dynamics: Systems Thinking and Modeling for a Complex World (McGraw-Hill/Irwin, 2000). 56. Helbing, D. & Lämmer, S. in Networks of Interacting Machines: Production Organization in Complex Industrial Systems and Biological Cells (eds Armbruster, D., Mikhailov, A. S. & Kaneko, K.) 33–66 (World Scientific, 2005). 57. Young, H. P. Innovation diffusion in heterogeneous populations: contagion, social influence, and social learning. Am. Econ. Rev. 99, 1899–1924 (2009). 58. Montanari, A. & Saberi, A. The spread of innovations in social networks. Proc. Natl Acad. Sci. USA 107, 20196–20201 (2010). 59. Grund, T., Waloszek, C. & Helbing, D. How natural selection can create both self- and other-regarding preferences, and networked minds. Sci. Rep. 3, 1480, http://dx.doi.org/10.1038/srep01480 (2013). 60. Lazer, D. et al. Computational social science. Science 323, 721–723 (2009). 61. Epstein, J. M. & Axtell, R. L. Growing Artificial Societies: Social Science from the Bottom Up (Brookings Institution, 1996). This is a groundbreaking book on agent-based modelling. 62. Gilbert, N. & Bankes, S. Platforms and methods for agent-based modeling. Proc. Natl Acad. Sci. USA 99 (S3), 7197–7198 (2002). 63. Farmer, J. D. & Foley, D. The economy needs agent-based modeling. Nature 460, 685–686 (2009). 64. Szell, M., Sinatra, R., Petri, G., Thurner, S. & Latora, V. Understanding mobility in a social petri dish. Sci. Rep. 2, 457 (2012). 65. de Freitas, S. Game for change. Nature 470, 330–331 (2011). 66. McNeil, A. J., Frey, R. & Embrechts, P. Quantitative Risk Management (Princeton Univ. Press, 2005). 67. Preis, T., Kenett, D. Y., Stanley, H. E., Helbing, D. & Ben-Jacob, E. Quantifying the behaviour of stock correlations under market stress. Sci. Rep. 2, 752 (2012). 68. Floreano, D. & Mattiussi, C. Bio-Inspired Artificial Intelligence (MIT Press, 2008). 69. Pentland, A. Society's nervous system: building effective government, energy, and public health systems.
IEEE Computer 45, 31–38 (2012). 70. Kesting, A., Treiber, M., Schönhof, M. & Helbing, D. Adaptive cruise control design for active congestion avoidance. Transp. Res. C 16, 668–683 (2008). 71. Fowler, J. H. & Christakis, N. A. Dynamic spread of happiness in a large social network. Br. Med. J. 337, a2338 (2008). 72. Helbing, D. & Johansson, A. Cooperation, norms, and revolutions: a unified gametheoretical approach. PLoS ONE 5, e12530 (2010). 73. Seydel, R. U. Practical Bifurcation and Stability Analysis (Springer, 2009). 74. Bak, P., Christensen, K., Danon, L. & Scanlon, T. Unified scaling law for earthquakes. Phys. Rev. Lett. 88, 178501 (2002). 75. Helbing, D. Traffic and related self-driven many-particle systems. Rev. Mod. Phys. 73, 1067–1141 (2001). 76. Lozano, S., Buzna, L. & Diaz-Guilera, A. Role of network topology in the synchronization of power systems. Eur. Phys. J. B 85, 231–238 (2012). 77. Schuster, H. G. & Just, W. Deterministic Chaos (Wiley-VCH, 2005). 78. Wiener, N. Cybernetics (MIT Press, 1965). 79. Beale, N. et al. Individual versus systemic risk and the regulator’s dilemma. Proc. Natl Acad. Sci. USA 108, 12647–12652 (2011). 80. Allen, P. M. Evolution, population dynamics, and stability. Proc. Natl Acad. Sci. USA 73, 665–668 (1976). 81. Tainter, J. The Collapse of Complex Societies (Cambridge Univ. Press, 1988). 82. The World Economic Forum, Global Risks 2011 6th edn (WEF, 2011); http:// reports.weforum.org/wp-content/blogs.dir/1/mp/uploads/pages/files/globalrisks-2011.pdf. 83. Huntington, S. P. The clash of civilisations? Foreign Aff. 72, 22–49 (1993). 84. Cederman, L. E. Endogenizing geopolitical boundaries with agent-based modeling. Proc. Natl Acad. Sci. USA 99 (suppl. 3), 7296–7303 (2002). 85. Johnson, N. et al. Pattern in escalations in insurgent and terrorist activity. Science 333, 81–84 (2011). 86. Beck, U. Risk Society (Sage, 1992). 87. Lin, N. Social Capital (Routeledge, 2010). 88. Kröger, W. & Zio, E. Vulnerable Systems (Springer, 2011). 
Acknowledgements This work has been supported partially by the FET Flagship Pilot Project FuturICT (grant number 284709) and the ETH project “Systemic Risks—Systemic Solutions” (CHIRP II project ETH 48 12-1). I thank L. Böttcher, T. Grund, M. Kaninia, S. Rustler and C. Waloszek for producing the cascade-spreading movies and figures. I also thank the FuturICT community for many inspiring discussions.

Author Information Reprints and permissions information is available at www.nature.com/reprints. The author declares no competing financial interests. Readers are welcome to comment on the online version of the paper. Correspondence and requests for materials should be addressed to D.H. ([email protected]).

2 May 2013 | Vol. 497 | Nature | 59

COVER STORY

STRATEGIC RISK MANAGEMENT AT THE LEGO GROUP

By Mark L. Frigo, CMA, CPA, and Hans Læssøe

How can organizations manage strategic risks in a volatile and fast-paced business environment? Many have started focusing their enterprise risk management (ERM) programs on the critical strategic risks that can make or break a company. This effort is being driven by requests from boards and other stakeholders and by the realization that a systematic approach is needed, that it’s highly valuable to include strategic risk management in ERM, and that risk management should be integrated within the fabric of an organization. Some companies are at the forefront of this evolving movement.

February 2012 | Strategic Finance | 27

Figure 1: Four Elements of Risk Management at the LEGO Group. (1) Enterprise Risk Management; (2) Monte Carlo Simulations; (3) Active Risk & Opportunity Planning (AROP); (4) Preparing for Uncertainty.

In this article we describe strategic risk management at the LEGO Group, which is based on an initiative started in late 2006 and led by Hans Læssøe, senior director of strategic risk management at LEGO System A/S.
It’s also part of the continuing work of the Strategic Risk Management Lab at DePaul University, which is identifying and developing leading practices in integrating risk management with strategy development and strategy execution.

The LEGO Group Strategy

To understand strategic risk management at the LEGO Group, you need to understand the company’s strategy. This is consistent with the first step in developing strategic risk management in an organization: to understand the business strategy and the related risks, as described in the Strategic Risk Assessment process (see Mark L. Frigo and Richard J. Anderson, “Strategic Risk Assessment,” Strategic Finance, December 2009).

The LEGO Group’s mission is “Inspire and develop the builders of tomorrow.” Its vision is “Inventing the future of play.” To help accomplish these, the company uses a growth strategy and an innovation strategy.

Growth Strategy: The LEGO Group has chosen a strategy that’s based on a number of growth drivers. One is to increase market share in the United States. Many Americans may think they buy a lot of LEGO products, but they buy only about a third of what Germans buy, for example, so there are potential growth opportunities in the U.S. market. The LEGO Group also wants to increase market share in Eastern Europe, where the toy market is growing very rapidly. In addition, it wants to invest in emerging markets, but cautiously: the toy industry isn’t the first to move into new, emerging markets, so the LEGO Group will invest at appropriate levels and be ready for when those markets do move. It will also expand direct-to-consumer activities (sales through LEGO-owned retail stores), online sales, and online activities (such as online games for children).
Innovation Strategy: On the product side, the LEGO Group focuses on creating innovative new products from concepts developed under the title “Obviously LEGO, never seen before.” The company plans to come up with such concepts every two to three years. The latest example is the LEGO Games System: family board games (a new way of playing with LEGO bricks) with a LEGO attitude of changeability (obviously LEGO). The company also intends to expand LEGO Education, its division that works with schools and kindergartens. And it will develop its digital business as the difference between the physical world and the digital world becomes more and more blurred and less and less relevant for children.

Now let’s look at the development of LEGO strategic risk management.

LEGO Strategic Risk Management

The LEGO Group developed risk management in four steps, as shown in Figure 1:

Step 1. Enterprise Risk Management was traditional ERM in which financial, operational, hazard, and other risks were later supplemented by explicit handling of strategic risks.

Step 2. Monte Carlo Simulations were added to understand the financial performance volatility (which proved to be significant) and the drivers behind it, and to integrate risk management into the budgeting and reporting processes.

Those two steps were seen mostly as “damage control.” To get ahead of the decision process and have risk awareness impact future decisions as well, LEGO risk management added:

Step 3. Active Risk and Opportunity Planning (AROP), where business projects go through a systematic risk and opportunity process as part of preparing the business case, before final decisions about the projects have been made.

Step 4. Preparing for Uncertainty, where management tries to ensure that long-term strategies are relevant for and resilient to future changes that may very well differ from those planned for.
Scenarios help them envision a set of different yet plausible futures to test the strategy for resilience and relevance. These last two steps were designed to move “upstream”—that is, to get involved earlier in strategy development and in the strategic planning and implementation process.

Figure 2: The LEGO ERM Umbrella: Adding Strategic Risk. Under the ERM umbrella sit operational, employee safety, legal, financial, hazard, and IT security risks, with strategic risk added in 2006.

Strategic Risk Management Lab Commentary: This four-step approach is a good illustration of how organizations can develop their risk management capabilities and processes in incremental steps. It represents an example of how to evolve beyond traditional ERM and integrate risk management into the strategic decision making of an organization. This approach positions risk management as a value-creating element of the strategic decision-making process and the strategy-execution process. In our research on high-performance companies, we’ve found that companies like the LEGO Group achieve sustainable high performance and create stakeholder value by consistently executing the strategic activities in the Return Driven Strategy framework (for example, the focus on innovating its offerings toward changing customer needs) while co-creating value through its engagement platforms (the online community, including its My LEGO Network, which engages more than 400 million people and helps its product development process). Its strategic risk management processes incorporate distinct elements of co-creation by engaging its employees (internal stakeholders) throughout the strategic decision-making, planning, and execution processes, as well as engaging external stakeholders (suppliers, partners, customers). The LEGO Group’s approach is a good example of how an organization can engage stakeholders in co-creating strategic risk-return management (see Mark L.
Frigo and Venkat Ramaswamy, “Co-Creating Strategic Risk-Return Management,” Strategic Finance, May 2009, and Venkat Ramaswamy and Francis Gouillart, The Power of Co-Creation, 2010).

Step 1: Enterprise Risk Management

The evolution of ERM toward strategic risk management is represented in Figure 2. Strategic risk was missing from the ERM portfolio until 2006. To fix this, based on his then 25 years of LEGO experience and a request from the CFO, Hans Læssøe started looking at strategic risk management. “I was a corporate strategic controller who had never heard the term until then,” he says.

The company had embedded risk management in its processes. Operational risk—minor disruptions—was handled by planning and production. Employee health and safety was ISO 18001 certified. Hazards were managed through explicit insurance programs in close collaboration with the company’s partners (insurance companies and brokers). IT security risk was a defined functional area. Financial risk covered currencies and energy hedging. And legal was actively pursuing trademark violations as well as document and contract management. But strategic risks weren’t handled explicitly or systematically, so the CFO charged Hans with ensuring they would be from then on. This became a full-time position in 2007, and Hans added one employee in 2009 and another in 2011.

Strategic Risk Management Lab Commentary: The 2006 situation is common. Even though strategic risks need to be integrated with risk management, many organizations don’t explicitly assess and manage strategic risks within strategic decision-making processes and strategy execution. But the LEGO Group’s approach shows how strategic risk management can be a key to increasing the value of ERM within an organization. It also shows how executive leadership from the CFO played an important role in the evolution of ERM as a valuable management process.
Finally, Hans came from the business side and had the attributes necessary to lead the initiative: broad knowledge of the business and its core strategies, strong relationships with directors and executive management, strong communication and facilitation skills, knowledge of the organization’s risks, and broad acceptance and credibility across the organization. (For more, see Mark L. Frigo and Richard J. Anderson, “Embracing ERM: Practical Approaches for Getting Started,” published by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) at www.coso.org/guidance.htm, 2011, p. 4.)

About the LEGO Group

Headquartered in Billund, Denmark, the family-owned LEGO Group has approximately 10,000 employees worldwide and is the third-largest toy manufacturer in the world in terms of sales. Its portfolio, which focuses on LEGO bricks, includes 25 product lines sold in more than 130 countries. The name of the company is an abbreviation of the two Danish words “leg godt,” which mean “play well.”

The LEGO Group began in 1932 in Denmark, when Ole Kirk Kristiansen founded a small factory for making wooden toys. Fifteen years later, he discovered that plastic was the ideal material for toy production and bought the first injection molding machine in Denmark. In 1949, the brick adventure started. Over the years, the LEGO Group perfected the brick, which is still the basis of the entire game and building system. Though there have been small adjustments in shape, color, and design from time to time, today’s LEGO bricks still fit bricks from 1958. The 2,400 different LEGO brick shapes are produced in plants in Denmark, the Czech Republic, Hungary, and Mexico with the greatest of precision and subjected to constant controls. And there are more than 900 million different ways of combining six eight-stud bricks of the same color.
Also, the risk-owner concept at LEGO provides a good example of the importance of understanding who owns the risks as well as defining the role of risk management in the organization. The idea of “risk owners” was important to ensure action and accountability. Hans’s charge was to develop strategic risk management and make sure the LEGO Group had processes and capabilities in place to do this. But as senior director of strategic risk management, Hans doesn’t own the risk. He can’t own the risk because this essentially would mean he would own the strategy, and each line of business owns the pertinent strategic risks. Hans trains, leads, and supports line management to apply a systematic process to deal with risk. This is just like budgeting functions: They don’t earn the money or spend the money, but they support management to deliver on the budget or compare performance against the budget.

Step 2: Monte Carlo Simulation

In 2008, Hans introduced Monte Carlo simulation to the process. A mathematician by education (M.Sc. in engineering), he started defining how Monte Carlo simulation could be used in risk management. Now it’s being used for three areas:

Budget Simulation. The business controllers are asked for their input about volatility, which is combined with analyses based on past performance of budget accuracy. Management says this helps them understand the financial volatility, so it’s now part of the financial and budget reporting. In fact, the first analyses directed top management’s attention to a sales volatility that was known but that proved to be much more significant than everyone intuitively believed.

Credit Risk Portfolio. The LEGO Group uses a similar approach to look at its credit risk portfolio so it can have a more professional “conversation” with a credit risk insurance partner.

Consolidation of Risk Exposure. You could multiply the probability and impact of each risk and add the whole thing up.
But this is seen as an almost “systemic” error in risk consolidation because it would give an average loss over “a million years.” Risk management isn’t about averages (if it were, no one would take out an insurance policy on anything). With a Monte Carlo simulation, the LEGO Group can “calculate” the 5% worst-case loss compared to budget and use that to define risk appetite and report risk exposure vis-à-vis this risk appetite, as shown in Figure 3.

Figure 3: Monte Carlo Simulations and Risk Appetite at the LEGO Group. The chart compares gross “earnings at risk” (EaR) with net EaR, showing the effect of mitigation; the worst 5% of simulations define the exposure.

Risk Appetite: A privately held company, the LEGO Group can’t look at stock values, so it looks at the amount of earnings the company is likely to lose compared to budget if the worst-case combined scenarios happen. Not all risks will materialize in any one year, because some of them are mutually exclusive, but a huge number may happen in any one year, as we have seen during the global financial crisis. Hans computes a “net earnings at risk,” and corporate management and later the board of directors use that net earnings at risk to define their risk appetite. They have said that the 5% worst-case loss may not exceed a certain percentage of the budgeted earnings (the percentage is not 100). That guides management toward understanding and “sizing” the risk exposure. This process has helped the LEGO Group take more risks and be more aggressive than it otherwise would have dared to be and grow faster than it otherwise could have done.

Strategic Risk Management Lab Commentary: Risk appetite is a difficult area for organizations to address. The approach used at the LEGO Group provides a good example of deriving risk appetite in an actionable and systematic way. It also shows an approach that fosters intelligent risk taking and that avoids being too risk averse while maintaining discipline on the amount of risk undertaken.
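The gap between the naive probability-times-impact consolidation and the simulated 5% worst case is easy to demonstrate. The sketch below is illustrative only: the five-risk register, the earnings figure, and the 40% appetite threshold are all invented (the article deliberately does not disclose LEGO's percentage), so this is a minimal stand-in for the technique, not LEGO's actual model:

```python
import random

# Hypothetical risk register: (probability of occurring in a year, earnings impact)
risks = [(0.05, 120), (0.10, 80), (0.20, 40), (0.02, 300), (0.15, 25)]
budgeted_earnings = 500  # invented figure

# Naive "systemic" consolidation: an average loss over "a million years".
expected_loss = sum(p * impact for p, impact in risks)

# Monte Carlo consolidation: in each simulated year, every risk either
# materializes (with its probability) or it does not.
rng = random.Random(7)
n_sims = 20_000
losses = sorted(
    sum(impact for p, impact in risks if rng.random() < p)
    for _ in range(n_sims)
)
earnings_at_risk = losses[int(0.95 * n_sims)]  # loss exceeded in only 5% of years

# Risk-appetite check in the spirit of Figure 3: the 5% worst-case loss
# may not exceed a set share of budgeted earnings (40% is invented here).
within_appetite = earnings_at_risk <= 0.40 * budgeted_earnings
print(expected_loss, earnings_at_risk, within_appetite)
```

The naive sum lands near the long-run average, while the simulated tail figure comes out several times larger; it is the tail figure, not the average, that a board's risk-appetite threshold is compared against.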
What we’ve discussed so far is more or less “damage control” because it’s about managing risks already taken by approving strategies and initiating business projects. Hans decided he wanted to move beyond damage control and be more proactive so he could create real value as a risk manager. He came up with a process he calls Active Risk and Opportunity Planning (AROP) for business projects.

Step 3: AROP: Risk Assessment of Business Projects

When the LEGO organization implements business projects of a defined minimum size or level of complexity, it’s mandatory that the business case include an explicit definition and method of handling both risks and opportunities. Hans says that the LEGO Group has created a supporting tool (a spreadsheet) with which to do this, and it differs from the former approach to project risk management in several areas:

Identification, “where we call upon more stakeholders, look at opportunities as well as risks, and look at risks both to the project and from the project (i.e., potential project impact on the entire business system).”

Assessment, “where we define explicit scales and agree what ‘high’ means to avoid different people agreeing on an impact being high without having a shared understanding of the exposure.”

Handling, “where we systematically assign risk owners to ensure action and accountability and include the use of early-warning indicators where relevant.”

Re-assessment, “where we define the net-risk exposure to ensure that we have an exposure we know we can accept.”

Follow-up, “where we keep the risk portfolio of the project updated for gate and milestone sessions.”

Reporting, “which is done automatically and fully standardized based on the data.”

Common Language and Common Framework: The most important point is that the people who address and work with risks get a systematic approach, so they can use the same approach from Project A to Project B. The one element that project managers really like is having the data in a database. They don’t receive just a spreadsheet model. Data is entered into the spreadsheet as a database, and all the required reporting on risk management is collected from that data, so project managers don’t have to develop a report—they can just cut and paste from one of the three reporting sheets that are embedded in the tool. All the reports are standardized. That’s good for the project managers, but it’s also good for the people on the steering committees because they now receive a standardized report on risks. The layouts of probability/impact risk maps don’t change from project to project, and nobody introduces a different severity measure midstream. Everyone has the same kind of formula, the same way of doing it.

Strategic Risk Management Lab Commentary: The AROP process is a great example of integrating risk assessment, in terms of upside and downside risks, in the strategic decision-making process. This balanced approach to strategic risk management allows organizations to create more stakeholder value while intelligently managing risk.
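The AROP principles of explicit scales, named risk owners, net re-assessment after mitigation, and fully standardized reporting generated from one data set can be sketched as a small data model. Everything concrete below (field names, the 1-5 scales, the sample entries) is invented for illustration; the LEGO tool itself is a spreadsheet-backed database, not code:

```python
from dataclasses import dataclass

# Explicit scales so "high" means the same thing on every project.
IMPACT_SCALE = {1: "negligible", 2: "minor", 3: "moderate", 4: "major", 5: "critical"}

@dataclass
class RiskEntry:
    description: str
    owner: str                 # a named risk owner ensures action and accountability
    probability: int           # 1-5, per the agreed scale
    impact: int                # 1-5, per the agreed scale
    mitigation: str = ""
    net_probability: int = 0   # re-assessed exposure after mitigation
    net_impact: int = 0
    is_opportunity: bool = False  # AROP tracks upsides as well as downsides

    @property
    def gross_score(self):
        return self.probability * self.impact

    @property
    def net_score(self):
        return self.net_probability * self.net_impact

def standard_report(register):
    """Every project gets the same report layout, generated from the data."""
    lines = []
    for r in sorted(register, key=lambda e: e.net_score, reverse=True):
        kind = "OPP " if r.is_opportunity else "RISK"
        lines.append(f"{kind} net={r.net_score:2d} gross={r.gross_score:2d} "
                     f"owner={r.owner}: {r.description}")
    return "\n".join(lines)

register = [
    RiskEntry("Key supplier fails during ramp-up", "Supply Chain Dir.", 3, 4,
              "Qualify second supplier", net_probability=2, net_impact=3),
    RiskEntry("Product opens a new retail channel", "Sales Dir.", 2, 4,
              net_probability=2, net_impact=4, is_opportunity=True),
]
print(standard_report(register))
```

Because every report is generated from the same structured data, steering committees see the same layout on every project, which is the "common language" point made above.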
Step 4: Preparing for Uncertainty: Defining and Testing Strategies

To get further ahead in the decision process, the LEGO Group has added a systematic approach to defining and testing strategies. As Hans notes, “We are going one step further upstream in the decision process with what we call ‘Prepare for Uncertainty.’ This is a strategy process, and we’re looking at the trends of the world. The industry is moving; the world is moving quite rapidly. I just saw a presentation that indicated that the changes the world will see between 2010 and 2020 will be somewhere between 10 and 80 times the changes the world saw in the 20th Century, compressed into a decade.”

He offers the following story to illustrate the forces of change the company is facing: “My seven-year-old granddaughter came to me and asked, ‘Granddad, why do you have a wire on your phone?’ She didn’t understand that. She’s never seen a wire on a phone before. We need to address that level of change and do it proactively.”

Four Strategic Scenarios: A group of insightful staff people (Hans and a few from the Consumer Insight function) defined a set of four strategic scenarios based on the well-documented megatrends defined by the World Economic Forum in 2008 for the Davos meetings (see Figure 4). Hans commented:

◆ “We presented and discussed these with senior management in 2009, prior to their definition of 2015 strategies, to support that they would look at the potential world of 2015 when defining strategies and not ‘just’ extrapolate present-day conditions.

◆ “Having done that, we then prepared to revisit each key strategy vis-à-vis all four scenarios to identify issues (i.e., risks and opportunities) for that particular strategy if the world looks like this particular scenario. This list of issues is then addressed via a PAPA model whereby a strategic response is defined and embedded in the strategy.

◆ “This way, we believe that we have reasonably ensured our strategies will be relevant if/when the world changes in other ways than we originally planned for.

“Once we have decided on the strategy and defined what we’re going to do, we test the strategy for resilience. We very simply take that particular strategy and, together with the strategy owner, discuss: If this scenario happens, what will happen to the strategy? Some of these issues will be highly probable, and some of them will be less probable. Some of them will happen very fast; some others will happen very slowly. This is where the PAPA model comes in.”

Figure 4: Four Strategic Scenarios Based on Megatrends—Illustrative Example Based on Corporate Social Responsibility

1. More of the Same. Description: some growth in consumer spending, driven by RDE markets; technologies emerge, but their impact on the toy industry is limited/fragmented; e-tailing is growing, and traditional retailers are pressured, but there are no major changes. Identified issues: legal compliance is sufficient; some efforts are needed to remain part of the “good guys team,” e.g., “cradle to cradle” documentation; transparency is enhanced. Handling/resilience: adherence to, and systematic third-party auditing on, the “Global Compact”; documented adherence to the disclosed “Planet Promise.”

2. Brave New World. Description: significant growth, driven by Asian markets; educational overhaul into peer-learning, where learning content is mandatory; distributed and collaborative product development by “prosumers.” Identified issues: NGOs and consumer groups multiply and get stronger and global, fast; complying with legal requirements is not good enough, as added benefit is needed; globalization is almost exploding. Handling/resilience: liaise with key NGOs; scan blogs and the Internet for expectations of “good performance” on governance; proactively use social media to communicate with constituents and stakeholders.

3. Cut-Throat Competition. Description: networking is the norm in a highly diversified society; customization and flexibility are essential; halted expansion of the global middle classes; legislative overdrive and aggressive pricing/marketing with very short lifecycles. Identified issues: no major issue among consumers, who are more occupied with other things; legislative overdrive on compliance, to some extent driven by protectionism. Handling/resilience: increased monitoring of key legislative processes (especially in the U.S.); close(r) liaison with partners to reduce the likelihood of “unbearable” legislation; focus on extremely close compliance with defined legislation.

4. Murphy’s Surprise. Description: networking permeates everything; trade protectionism and lack of resources hamper growth and globalization; IPs dominate, and new crazes permeate the globe in hours; powerful retailers drive market polarization into private-label and branded products. Identified issues: full-steam legislative overdrive and restrictions driven by “other motives” of protectionism; globalization is effectively halted; high demands for openness, but no severe demands for “high performance.” Handling/resilience: be extremely focused on compliance with legal demands; monitoring of and lobbying on legislation, also at the minor country/state level; local representation everywhere relevant; enable openness on governance practices and results; outperform the competition.

The PAPA Model

When looking at the scenarios, the LEGO Group uses what it calls a Park, Adapt, Prepare, Act (PAPA) model, as shown in Figure 5. Hans explains:

Figure 5: The LEGO Group’s PAPA Model. Speed of change (slow to fast) is plotted against likelihood (low to high); the quadrants Park (slow, low), Adapt (slow, high), Prepare (fast, low), and Act (fast, high) determine the overall strategic response.

Park: The slow things that have a low probability of happening, we park. We do not forget about them.

Adapt: The slow things that we know will happen or are highly likely to happen—we adapt to those trends. In our case, this is a lot around demographics. We know children’s play is changing, we know demographics are changing, we know the buying power between the different realms or the different parts of the world is changing. We also know it does not happen fast. So we adjust, systematically monitoring what direction it’s moving in and following that trend.

Prepare: The things that have a low probability of happening but, if they do, materialize fast—we need to be prepared for those. In fact, this is where we identify most of the risks that we need to put into our ERM risk database, make sure that we have contingency plans for them, and apply early warnings and whatever mitigation we can put in place to make sure that we can cover these should they materialize, even though they are not expected to.

Act: Finally, we have the high-probability and fast-moving things that we need to act on now in order to make sure the strategy will be relevant. In our case, anything that has to do with the concept of connectivity—i.e., mobile phones, Internet, that world—if we can see it, move on it. We know that is changing so fast, and it’s changing the way kids play. It’s changing their concepts and their view of the world.

This way, we have a kind of prioritization model of what we do, because we shouldn’t, of course, be betting on every horse in the race. That’s not profitable, and it isn’t even doable.

Strategic Risk Management Lab Commentary: One of the challenges of risk management is to find ways to prioritize risks that make business sense. The PAPA model provides a good example of a framework that can prioritize risks and set the stage for the appropriate actions. Our research on high-performance companies (see Mark L.
Frigo and Joel Litman, DRIVEN: Business Strategy, Human Actions and the Creation of Wealth, 2008) found that companies that demonstrate sustainable high performance exhibit a “vigilance to forces of change” that allows them to manage the threats and opportunities in the uncertainties and changes better than other companies. The approach used at LEGO is a great example of embedding this vigilance to forces of change in its strategy development and strategy execution processes.

Strategic Risk Management Return on Investment

A great deal has happened in the LEGO Group’s approach to risk management, based on strong support from top management, the time needed to develop processes and methodologies, and a strong focus. They have demonstrated value from the efforts they’ve made. They also have explicitly embedded risk management in most of the key planning processes used to “run” the company:

◆ The Strategic and Financial Management Process—Monte Carlo and scenarios
◆ The LEGO Development Process—AROP in projects
◆ The Customer Business Planning Process—AROP in collaboration
◆ The Sales & Operations Planning Process—tactical scenarios
◆ The Performance Management Process—bonus based on results, not efforts

“All of this has worked,” Hans says. “Based on actual data, we have had a 20% average growth in the period between 2006 and 2010 in a market that grows between 2% and 3% a year. Beyond that, our profitability has developed quite significantly as well. We’ve grown from a 17% return on sales to a 31% return on sales in 2010. And it goes beyond that. If you go back a couple more years, in 2004 we were in dire straits and had a negative return on sales of 15%. We changed a number of strategies.

“Risk management is not the driver of these changes. I’m not even sure it’s a big part.
But it’s one part. It’s a part that has allowed us to take bigger risks and make bigger investments than we otherwise would have seen. The Monte Carlo simulation has shown us what the uncertainty is. The risk appetite has shown us how much risk we can afford to take, and are prepared to take, between the board of directors and the corporate management team. This has meant that we have been prepared to make bigger supply chain investments than we otherwise would have done and have been able to achieve a bigger growth than we ever imagined we could have.”

Strategic Risk Management Lab Commentary: The development of strategic risk management at the LEGO Group provides a great example of how organizations can develop their ERM programs to incorporate strategic risk and make strategic risk management a discipline and core competency within the organization. One of the key elements was “integration.” During discussions with LEGO management, when Hans was asked about the ongoing development of risk management at the LEGO Group, he replied that it was “naturally integrated.” It is this integration of risk management in strategy and strategy execution, and the integration of strategy in risk management, that can elevate the value of ERM in an organization.

One Last Note

We want to emphasize that risk management is not about risk aversion. If, or rather when, you want/need to take bigger chances than your competitors—and get away with it (succeed)—you need to be better prepared. The fastest race cars in the world have the best brakes and the best steering to enable them to be driven faster, not slower. Risk management should enable organizations to take the risks necessary to grow and create value. To quote racing legend Mario Andretti: “If everything’s under control, you’re going too slow.” SF

Mark L. Frigo, Ph.D., CMA, CPA, is director of the Center for Strategy, Execution and Valuation; the Strategic Risk Management Lab; and the CFO Leadership Initiative in the Kellstadt G…


