It’s 1 AM. Do You Know Where Your Risks Are?

As we move further into areas such as artificial intelligence (AI) and autonomous vehicles, we move deeper into the danger zone for risk. That is no reason to avoid these areas, but we must look for ways to reduce the risks, or at least recognize that risks may exist of which we are unaware. These risks often arise from unrecognized interdependencies between components of a system.

There are a number of risk models to consider, but the following categories cover what most of them include:

I. Strategic Risks
II. Operational Risks
III. Compliance Risks
IV. Financial Risks
V. Reputational Risks
VI. Other Risks

I will discuss only the first two.

Strategic Risks

These vary significantly between organizations and industries but share two common elements: each organization can face a “Strategic Inflection Point,” and often the “future has already happened.” Andrew Grove’s book Only the Paranoid Survive provides insight into both concepts. He identifies six factors that can bring what he calls a 10X force to bear on your business: existing competitors, potential competitors, customers, suppliers, complementors, and the possibility that what your business does can be done in a different way.

A Strategic Inflection Point occurs when the old strategic picture dissolves and gives way to the new, allowing the business to ascend to new heights; if you fail to navigate your way through an inflection point, the business passes through a peak and then declines. Often you begin to sense that something is different. There is a growing gap between what you think you are doing and what is actually happening within the bowels of the organization. Eventually a new framework evolves and a new set of actions emerges, often driven by a new set of senior managers.

Most strategic inflection points do not come in with a bang; they become clear only when you look at the results in retrospect. They are accompanied by both signals of change and noise. The question becomes how to differentiate the signals of change from the noise that often accompanies it.

Some simple questions can help:

  • Is your key competitor about to change?
  • Does the company that mattered most in past years seem less important now?
  • Do people you have relied upon seem to be losing it? Does it seem that individuals long considered competent have suddenly become decoupled from what really matters?

If you answered yes to any of these questions, you should consider that you are approaching a strategic inflection point.

The most important tool for identifying a particular development as a strategic inflection point is broad, intensive, and inclusive debate. Include people from different levels of the organization; the more diverse, the better. Involve people outside the organization, such as customers and partners, who have different expertise and interests. There may be individuals within the organization, “Helpful Cassandras,” who have recognized the change and sounded an early warning.

Contemporary management doctrine suggests that you should approach any debate with data in hand. Far too often, though, people substitute opinions for facts and emotions for analysis. Also remember that data is about the past, while strategic inflection points are about the future.

As with any debate involving management, it is necessary to ensure that individuals can speak their minds without fear of punishment. This has to be demonstrated again and again so that everyone sees that open dialogue is not punished. You also have to avoid “Boss Speak”: opinions should be obtained from the most junior individuals first and then progress up the chain of command.

Getting through a strategic inflection point involves confusion, uncertainty and disorder. You must avoid the inertia of success. Current skills and strengths may not be relevant in the future. Don’t cling to the past.

Operational Risks

Have you ever heard of “Normal Accidents”? Chris Clearfield and András Tilcsik, in Meltdown: Why Our Systems Fail and What We Can Do About It, built on the concepts developed by Charles Perrow in Normal Accidents. Perrow identified the relationship between the complexity of an infrastructure and the coupling of its components: the more complex and tightly coupled a system is, the riskier it is. Using this model, he provided examples of how seemingly unrelated parts of a system had previously unknown interactions that resulted in catastrophe, and of how increasing the number of safety systems may actually increase the risk of catastrophic failure. Such failures include Three Mile Island, petrochemical plants, aircraft and airways, marine accidents, earthbound systems (dams, earthquakes, mines, and lakes), and exotics (space, weapons, and DNA). By mapping systems onto the complexity/coupling grid you can begin to see where unanticipated risks may occur.

Most of us are familiar with linear systems, such as assembly lines, where errors that cause a system failure are quickly evident because we can see clearly into the process. Complex systems are often opaque: we can’t go look at what is happening, and we must rely on secondary indicators to understand what is going on.

Systems can also be tightly or loosely coupled. In tightly coupled systems, one part of the system is highly dependent on other parts, and these relationships are often unknown until a problem occurs. Loosely coupled systems provide flexibility, so that if one part of the system fails the other parts can continue to operate (at least for a while). The sketch below shows the distinction in miniature.
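Here is a minimal Python sketch of the difference, using a hypothetical two-stage packing line; the stage names, the simulated printer jam, and the queue are invented purely for illustration. In the tightly coupled version a failure in one stage stops the line immediately, while in the loosely coupled version a buffer between the stages lets work continue for a while.

    from collections import deque

    # Tightly coupled: the packing stage calls the labeling stage directly,
    # so a jammed label printer stops packing immediately.
    def label(item):
        raise RuntimeError("label printer jammed")   # simulated failure

    def pack_tightly(item):
        label(item)   # direct dependency: no slack between the stages

    # Loosely coupled: a queue sits between the stages, so packing can
    # continue while the labeler is down (at least for a while).
    label_queue = deque()

    def pack_loosely(item):
        label_queue.append(item)   # hand the item off and keep working

    def run_labeler():
        while label_queue:
            item = label_queue.popleft()
            try:
                label(item)
            except RuntimeError:
                label_queue.appendleft(item)   # keep the item; retry later
                break

    try:
        pack_tightly("box-0")
    except RuntimeError as exc:
        print(f"tightly coupled line stopped: {exc}")

    for box in ["box-1", "box-2", "box-3"]:
        pack_loosely(box)   # packing continues despite the jam
    run_labeler()
    print(f"{len(label_queue)} items buffered, waiting for the labeler")

The queue buys time rather than eliminating the failure: if the labeler stays down, the buffer eventually fills, which is why loose coupling only keeps a system running “at least for a while.”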

Systems in the danger zone are everywhere. The typical examples are physical systems, but not all of them are: meltdowns have also occurred in financial systems, marketing systems, and even the British Post Office, and that is before counting any hacking-related incidents. To determine which of your systems are the riskiest, you can map them onto a complexity/coupling grid, as sketched below. Knowing which of your systems are in the danger zone, and which may move there, will help you know where to focus your efforts.
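As a rough illustration of such a mapping, here is a minimal Python sketch. The systems, the 0-10 scores, and the threshold are invented for the example; in practice the ratings would come out of the kind of broad, inclusive debate described earlier.

    # Minimal sketch of a Perrow-style complexity/coupling grid.
    # The systems and 0-10 scores are illustrative guesses, not measurements.
    systems = {
        "assembly line":      {"complexity": 2, "coupling": 8},
        "marketing campaign": {"complexity": 5, "coupling": 2},
        "billing batch jobs": {"complexity": 7, "coupling": 4},
        "trading platform":   {"complexity": 9, "coupling": 9},
    }

    THRESHOLD = 5   # above this on both axes = danger zone

    def quadrant(complexity, coupling):
        if complexity > THRESHOLD and coupling > THRESHOLD:
            return "DANGER ZONE: complex and tightly coupled"
        if complexity > THRESHOLD:
            return "complex but loosely coupled"
        if coupling > THRESHOLD:
            return "linear but tightly coupled"
        return "linear and loosely coupled"

    # Sort by combined score so the riskiest systems print first.
    for name, s in sorted(systems.items(),
                          key=lambda kv: -(kv[1]["complexity"] + kv[1]["coupling"])):
        print(f"{name:20s} {quadrant(s['complexity'], s['coupling'])}")

Sorting by combined score puts the systems closest to the danger zone at the top of the list, which is where the mitigation effort described below should start.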

Moving out of the Danger Zone

There are options available to help move a system out of, or away from, the most critical parts of the danger zone:

  • If possible, reduce the complexity or coupling of the system.
  • Recognize that safety systems may actually increase the complexity of a system and make it riskier. Excessive warnings, both visual and audible, will over time be ignored, or can cause confusion and prevent the necessary actions from being taken. The same goes for playbooks that offer conflicting solutions and are too complex to be useful during an emergency.
  • Don’t ignore warning signs. Don’t assume a complex system is working fine and discard evidence that conflicts with that assumption. Recognize that our intuition often fails us when dealing with systems in the danger zone, and that adding structure to our decision process will improve decisions. We construct an expected world because we can’t handle the complexity of the present one; we then process the information that fits that expected world and find reasons to exclude information that might contradict it. Unexpected or unlikely interactions are ignored.
  • Harness the power of hindsight by using “premortems.” We can identify potential problems by imagining what caused a project or process to fail. Once we have identified these problems, we can define steps for spotting them early and plan mitigating actions should they occur.
  • Rely more on physical presence. Although face-to-face meetings are less efficient, they are more effective at conveying shades of meaning, gauging responses, and adjusting words to avoid misunderstandings and reach agreements.

Organizations must take the time to find those systems that are in the danger zone, and those approaching it, and apply the concepts presented in the three books referenced here to prevent meltdowns.