2. Decision making and risk management

An overview of what can lead people to make mistakes and how to mitigate errors.

2.1 Information processing

So, why do humans make mistakes? To answer this question, we need to understand how we process information. Figure 1 shows a simplified version of what happens when we receive information from any one of our senses.

Figure 1: how people process information received through their senses into long-term memory

Diagram showing how information received by the senses passes through the short-term memory to the long-term memory

The steps in processing information

  1. We collect information from the world around us through the 5 senses: sight, touch, taste, smell and hearing.
  2. The information is passed through a filter – the sensory memory. This gets rid of unimportant information (for example, background noise) and passes on useful information, allowing us to focus on it.
  3. As we start making sense of the information we are receiving (perception), it’s passed on to our conscious or short-term memory. This can hold between 5 and 9 pieces of information at a time for 15 to 30 seconds. Our “perception” can be influenced by any “bias” we have (see section 2.3 for more information).
  4. The information is then stored in our long-term memory or is forgotten. The long-term memory has a theoretically unlimited capacity and timespan. When we need to remember something, we retrieve it from our long-term memory and move it to the short-term memory.

The process of storing information is open to errors at any point because our brains have evolved to take shortcuts to conserve energy. Factors such as time pressures can affect this process. It can also be influenced by biases we have. In sections 2.2 and 2.3 we will explore some of these shortcuts and how they might negatively affect decision making.

To learn more about how we process and store information, look at Behaving Safely: A Practical Guide for Risky Work by Dik Gregory and Paul Shanahan.

2.2 System 1 and system 2 thinking

In section 2.1 you learnt about unconscious and conscious processing. Here we will look at these two systems of thinking in more detail, and at how they affect our decision making. Nobel Prize-winning psychologist Daniel Kahneman, who is widely considered an expert in decision making, proposed that the human brain operates mainly in two systems.

System 1 thinking

  • Fast, unconscious, and automatic thinking which we cannot control.
  • Allows us to assess situations quickly.
  • Makes up most of our thinking.
  • Examples: an experienced driver using the brakes in response to perceived danger; detecting emotions in someone’s voice.
  • Automatically generates suggestions, feelings, and intuitions for system 2.

System 2 thinking

  • Slow, conscious, and effortful thinking that we can control.
  • We use this to investigate or probe for information and to make complex decisions.
  • Makes up a small percentage of our thinking.
  • Examples include learning to drive or changing the way you speak to someone.

Both systems perform an important function, but issues arise when we rely too heavily on system 1 instead of system 2. For example, have you ever driven somewhere and realised you have no idea how you arrived there? How can you tell for sure that you did not make any mistakes on the way, like going through a red light? The same applies at work. We can complete tasks on autopilot when we should be paying attention. When we rely too much on system 1, we can fail to notice when things might be going wrong.

If you notice that system 1 has taken over, it’s important to slow down, go back a few steps in your task and check what you’re doing thoroughly. Checklists and other aide-memoires can be an effective way of keeping track of your tasks and engaging system 2 in tasks that are habitual or boring.

To learn more about system 1 and system 2 thinking, have a look at Thinking, Fast and Slow by Daniel Kahneman.

2.3 Cognitive bias

Our perception, and consequently the information passed on to our memory, is affected by what we choose to direct our attention to. This is where cognitive bias comes in and influences system 1 thinking. Our brains have evolved to take shortcuts based on our previous experiences, to make decision making more efficient and to make sure we’re not overloaded with information.

Cognitive bias focuses our attention very quickly on information that seems important or urgent, which can lead to errors in perception and judgement.

A cognitive bias is a systematic error in thinking that occurs when we’re processing information. It affects our perception and judgement of a situation. Biases are shortcuts that are made unconsciously based on our past experiences.

There are many different types of cognitive bias.

Confirmation bias

This is when you’re only looking for or paying attention to information that supports what you already believe. An example would be thinking that a particular colleague is not good at their job, and then not paying attention to evidence that shows that they are. A related bias, anchoring bias, describes relying too much on the first piece of information received when making decisions.

Familiarity bias

This is the belief that some events are more frequent or likely because they are more familiar in your memory. For example, believing that more incidents occur on aircraft than on vessels because aircraft incidents are more often in the news.

Framing effect

This is when a decision is made based on how the information is presented rather than the content of the message. For instance, “I’m not happy with some of your performance” sounds worse than “I’m happy with your performance most of the time”.

Hindsight bias

This is the tendency to see past events as more predictable than they seemed at the time. For example, wondering how your team made a mistake because the correct path seems so obvious now.

Automation bias

This is the tendency to rely too heavily on technology when making decisions, even when contradictory information is available. For example, relying too much on the electronic chart display and information system (ECDIS) during navigation and ignoring other sources of information.

Risk normalisation

This is the tendency to get used to a risk, believing that it’s normal and does not need to be managed. For example, when a task is repeated regularly without any severe consequences, you may come to believe that there is no risk.

You need to be aware of cognitive biases because they can have a significant impact on your perception and judgement in any situation.

Tips for overcoming bias in decision making

  • Constantly seek new pieces of information from diverse sources – be open to input from others.
  • Seek to understand why a mistake happened instead of just blaming the person who made it.
  • Use a systematic process in decision making (such as a checklist).
  • Take time to think through a decision – pressure will only make you more prone to cognitive bias.

2.4 Errors, mistakes and violations

Now that you know how issues can arise while processing information, it’s important to understand the different types of human failure and how they can lead to an incident.

Human failure is a term used to describe ways in which humans in a system deviate from the intended course of action.

As a leader, it’s important to recognise why you and others make errors and mistakes and use this knowledge to prevent this from happening again. A leader also plays an important role in setting people up for success.

While this section will help you understand human failure, in the systems view people are not seen as sources of error so much as creators of safety. This view recognises that there will always be gaps in any system, because designers and rule makers cannot think of all situations and contingencies. This means that human operators must be given some degree of freedom to cope with the unexpected. In turn, this increases the need for the human operator to identify and manage the risks that arise.

Figure 2: the types of human failure

Flow chart showing the types of human failure and the steps that lead to them.

Figure 2 shows there are many types of human failure. These can be classified as either an “unintended error”, which can be an error of action or thinking, or “deliberate non-compliance”, where someone chooses not to comply with guidance.

Unintended errors or incorrect actions

Slips – when you intend to do the right thing but get it wrong, for example moving a switch down instead of up.

Lapses – when you forget to do something or lose your place in the middle of a task, for example missing a step in a safety-critical task.

There are many causes for these errors, including:

  • it being a familiar task requiring little conscious thought (system 1 thinking)
  • two similar tasks being confused
  • tasks being too complicated
  • the main task being completed but details being missed
  • distractions

You can reduce these incidents by:

  • making staff aware that slips and lapses do occur
  • using checklists and flowcharts to ensure tasks have been completed
  • ensuring that procedures are clear and logical
  • ensuring complicated tasks have checks
  • reducing distractions

Mistakes that are failures in decision making

These can be “rule-based mistakes”, which occur when learnt procedures or rules are applied to a task incorrectly. For example, you may follow the procedure correctly but misjudge the speed of the vessel while berthing because you are on an unfamiliar vessel. There are also “knowledge-based mistakes”, which are made because of a lack of knowledge of the task, for example relying on an out-of-date chart to plan an unfamiliar route.

There are many causes for these errors, including:

  • trying to accomplish too many tasks at once
  • a complicated task
  • time pressures and poor work environments
  • social factors, for example peer pressure or staffing issues
  • individual stressors, for example health
  • problems with equipment
  • organisational factors such as a lack of training and monitoring

You can reduce these incidents by:

  • increasing awareness of high-risk situations
  • providing procedures for predictable non-routine, high-risk tasks
  • supervising inexperienced workers
  • providing aids and diagrams for procedures

Violations

A violation is a deliberate action or decision to deviate from the rules, regulations or procedures. These can be “routine violations” – shortcuts to rules or procedures that become a normal way of working. For example, everyone onboard routinely working without personal protective equipment (PPE) because of time pressure.

There can also be “situational violations” – when exceptional pressures lead to deviations from rules. An example would be having to take shortcuts or rush through tasks to do everything you need to do in the time you have been given.

“Exceptional violations” are when unusual or emergency situations lead to a deviation from the rules, usually to solve the problem. For example, the Master working during a rest period as they’re needed on the bridge to avoid a collision.

Causes of violations can include:

  • being under pressure, including time and manning
  • peer pressure
  • not understanding the reasons for the rules
  • perception that the rules are too strict, or that compliance with the rules is not expected
  • lack of supervision or perception that there are no repercussions for violations
  • wanting to take the easy option – this may be particularly prevalent if workers are stressed
  • lack of correct equipment

You can reduce these incidents by:

  • setting a good example and encouraging compliance – if the procedure does not make sense, consider changing it
  • involving staff in changes to rules, to increase buy-in
  • explaining the reasons behind rules and procedures and why they’re relevant
  • ensuring the work environment is suitable for carrying out procedures
  • providing appropriate supervision
  • ensuring you provide the necessary resources to carry out tasks
  • including possibilities of violations when completing risk assessments
  • creating a culture where safety is prioritised

Violations are deliberate acts that deviate from the rules or usual procedures; however, they are often well intentioned.

A violation intended to cause harm to others or the organisation is called sabotage. However, there are many reasons why errors occur and, in most cases, they’re unintended.

It’s easy to blame someone when something goes wrong, but to stop the same thing happening again, it’s important to look at the reasons why the mistake happened. We have to explore the weaknesses in the whole system and look at the organisational causes of the error, not just the individual ones.

To learn more about errors, mistakes, and violations, take a look at the Health and Safety Executive’s toolkit on Managing human failures.

2.5 Swiss cheese model

The Swiss cheese model, put forward by James Reason, shows that there are many layers of defence that lie between a hazard and an incident.

In this model, the layers of cheese represent different aspects of the work environment that contain some weaknesses (holes). For example, the layers of cheese might represent:

  • organisational culture
  • policies and procedures
  • equipment
  • the individual carrying out a task

Figure 3: successive layers of defence in the Swiss cheese model

Diagram showing how, in the Swiss cheese model, weaknesses in defences can line up to let a hazard through.

Some holes are caused by active failures (unsafe acts by the people directly involved in a task) and others by latent conditions (weaknesses that lie dormant in the system, such as poor design or procedures).

Most of the time, when a hazard is present, the protection provided by the other layers stops it from passing through the whole system. For example, a poor organisational culture may allow hazards to pass through, but strong policies and procedures may alert individuals to the hazard so that they can mitigate it.

Sometimes, there are too many (or even just enough) weaknesses (holes) in the system, or the few weaknesses in every layer line up, letting a hazard pass through.
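
If it helps to picture why layered defences work, here is a minimal sketch in Python. The failure probabilities are invented for illustration, and it assumes the layers fail independently, which real layers often do not.

    # Illustrative only: invented probabilities, and layers assumed independent.
    # Each value is the chance that a layer has a "hole" when the hazard arrives.
    layers = {
        "organisational culture": 0.20,
        "policies and procedures": 0.10,
        "equipment": 0.05,
        "individual carrying out the task": 0.15,
    }

    # A hazard becomes an incident only if it finds a hole in every layer.
    p_incident = 1.0
    for p_hole in layers.values():
        p_incident *= p_hole

    print(f"Chance of the hazard passing every layer: {p_incident:.4%}")  # 0.0150%

Even with individually weak layers, the chance of every hole lining up is small. The danger comes when one underlying weakness, such as a poor safety culture, puts holes in several layers at once, so the layers no longer fail independently.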

So, when an incident occurs, remember to go further than just looking at the individual. Most of the time, they’re just the last layer of defence in an already weak system. Think about the environment around them that could have led to, or protected from, a hazard becoming an incident.

To learn more, have a look at the Swiss Cheese Model of Safety Incidents by James Reason.

2.6 Mitigating risk

You cannot fully eliminate risk. It’s a consequence of the uncertainty in the situations we face and the world around us. Considering what you have learnt about risk perception so far, it’s important to think about how you can mitigate these risks in the workplace. Also, think about the tools you can provide to make your colleagues more aware of risk in themselves and others.

Risk assessments are a useful way of identifying and mitigating risks in the workplace. There are 6 steps to conducting an effective risk assessment:

  1. Identify hazards – anything that has the potential to cause harm is a hazard.
  2. Identify risks – a risk is the likelihood that a hazard will cause harm.
  3. Assess the level of risk – decide who might be harmed, how serious the harm might be and how likely the event is to happen (a simple way of scoring this is sketched after this list).
  4. Mitigate the risk – see if you can remove the hazard altogether, or put controls in place to minimise the impact or the likelihood of something going wrong.
  5. Record and communicate – it may be necessary to record your process in a formal risk assessment, or at least to make sure others are aware.
  6. Review – make sure you review the risk assessment at regular intervals to ensure that the assessment is still relevant.
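
As a sketch of step 3, risk is often scored by multiplying likelihood by severity on a simple matrix. The scales and thresholds below are hypothetical, not a prescribed scheme.

    # Hypothetical 5x5 risk matrix: score = likelihood x severity,
    # with both scales running from 1 (lowest) to 5 (highest).
    def risk_level(likelihood: int, severity: int) -> str:
        score = likelihood * severity
        if score >= 15:
            return "high: mitigate before starting the task"
        if score >= 6:
            return "medium: put controls in place and monitor"
        return "low: proceed with normal precautions"

    print(risk_level(likelihood=4, severity=5))  # high: mitigate before starting the task
    print(risk_level(likelihood=2, severity=2))  # low: proceed with normal precautions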

Risk assessments can be formal or informal. Formal risk assessments are documented and often completed as part of workplace procedures. Informal risk assessments happen in our daily lives as we mentally weigh up the risks of our actions; they can be affected by individual factors. You can reduce risk normalisation by making sure the same person is not always completing the risk assessment.

Risk normalisation means becoming used to seeing high-risk situations without realising the possible consequences of risky behaviour. One way to avoid complacency is to arrange training in the human perception of risk for yourself and your seafarer colleagues. If you’re introducing new equipment or kit, consider how it will affect awareness of risk and manage this appropriately.

The hierarchy of risk control (see Figure 4) is a framework used to minimise exposure to hazards. Controlling risks and reducing hazards is a safety-critical responsibility, and while it is a particular focus for those whose roles shape the delivery and culture of shipboard operations, everyone needs to apply this framework.

Figure 4: the hierarchy of risk control

A diagram showing the hierarchy of risk control from 1. Eliminate through to 6. PPE in a pyramid.

1. Eliminate

Remove the cause of the danger.

2. Substitute

Replace the hazardous work practice or machine with an alternative.

3. Isolate

Separate the hazard from the people at risk of injury.

4. Engineer controls

Make physical changes, such as adding safeguards by redesigning the machine.

5. Administer controls

For instance, install signs or rotate jobs.

6. PPE

Provide relevant personal protective equipment such as gloves or earplugs.
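
One way to read the hierarchy is as an ordered preference: always apply the most effective control that is feasible for the hazard. The helper below is a hypothetical illustration of that rule, not part of any standard.

    # The hierarchy as an ordered list, most effective control first.
    HIERARCHY = ["eliminate", "substitute", "isolate",
                 "engineer controls", "administer controls", "PPE"]

    def best_control(feasible):
        """Return the highest control in the hierarchy that is feasible for this hazard."""
        for control in HIERARCHY:
            if control in feasible:
                return control
        raise ValueError("no feasible control identified - reassess the task")

    # Example: the hazard cannot be eliminated or substituted, but it can be isolated.
    print(best_control({"isolate", "administer controls", "PPE"}))  # isolate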

2.7 Decision-making models

When a situation occurs, you may not have time to consider all the factors that affect decision making before planning a course of action. Decision-making models such as TDODAR, which stands for “time, diagnose, options, decide, act or assign, review” (Figure 5), provide a step-by-step process that you can use to make decisions even in high-stress situations.

TDODAR is commonly used in the aviation industry, but there are many different decision-making models that serve different purposes. It’s important to find one that you can remember and that works for you.

Figure 5: the TDODAR decision-making model

Diagram showing the time, diagnose, options, decide, act or assign, review (TDODAR) decision-making model wheel.

1. Time

Work out how much time you have to make a decision and keep this in mind as you go through the next steps.

2. Diagnose

Diagnose the problem: figure out what is happening by collecting information from a range of sources including relevant experts.

3. Options

Using all the information you have collected so far, list the possible courses of action open to you.

4. Decide

Using all the information you have collected so far, make a decision. What option would have the best outcome? Would it solve the problem?

5. Act or assign

Decide who should act on the decision and assign roles if necessary.

6. Review

Think about how effective your actions and decisions were in solving the problem. If the problem has occurred again, consider whether your solution was really effective.
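
If it’s useful to keep TDODAR to hand, the steps can be written down as a simple prompt list. The sketch below is an illustrative memory aid only, not an official tool.

    # TDODAR as a prompt list - an illustrative memory aid only.
    TDODAR = [
        ("Time", "How long do I have to make this decision?"),
        ("Diagnose", "What is happening? What do the available sources and experts say?"),
        ("Options", "What courses of action are open to me?"),
        ("Decide", "Which option has the best outcome? Does it solve the problem?"),
        ("Act or assign", "Who should act on the decision, and in what role?"),
        ("Review", "Did it work? Has the problem occurred again?"),
    ]

    for step, prompt in TDODAR:
        print(f"{step}: {prompt}")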