Session introduction: Bias, error, tunnel effect, etc.
Session of Wednesday 25 June 2025 (Cognitive biases)
DOI: 10.26299//2025.24.01
Abstract
When an unexpected event occurs because of an organizational or equipment defect, human error, or negligence, safety margins and protocols act as the defenses: the anomaly or error is corrected and has no consequences.
But if the defenses are not sufficient, the anomaly can progress to a critical incident. This situation, if not corrected in time, can lead to an accident, i.e. irreversible damage.
Recovering from a critical incident that the defenses did not prevent requires devising a dedicated response to an unprecedented situation.
Error, defined as performance that deviates from the achievable ideal, is intrinsic to every system ever developed.
Human performance is limited. We make mistakes in everything we do, to the point where cognitive errors cause more accidents than technical failures.
Our cognition, our memory and our capacity to handle several tasks at once are not infinite; they become saturated beyond a certain level of demand. Our performance also loses efficiency under stress, when operations accelerate, or under external constraints (schedules, hierarchical pressure, etc.).
Accidents are often multifactorial, but the human factor is involved in more than half of all accidents. This accusation must nevertheless be qualified by two major points:
- Error is part of the normal functioning of complex systems and of the human brain.
- The human operator is not the only one at fault: he is caught up in a scenario that has led him into a potentially uncontrollable situation.
People are also the key factor when the usual defenses are overwhelmed by events or when the breakdown is unpredictable.
The operator stands at the end of a chain of circumstances and factors that created the conditions for the accident:
- J. Reason's "Swiss cheese" model explains that a chain of unfortunate circumstances and defects in the overall system can lead to an accident if the defense mechanisms themselves fail. If the trajectory of an event lines up with the holes in the system, only the defenses and safety margins can block its progression toward the accident; if these too are lacking, the accident becomes inevitable (a brief illustrative sketch follows this list).
- The mathematics of "chaos" has highlighted three essential lessons for risk management:
• The occurrence of an unexpected event in a complex system, such as an accident in operation, is the result of so many possible combinations that its prediction poses colossal problems.
• The frequency of events is inversely proportional to their severity: minor events are common, but disasters are rare.
• Complex systems have a certain degree of instability. Causality is not linear, but depends at each moment on multiple interdependent sequences. A change in a single small element can lead to disaster depending on the successive stages and subsequent circumstances (butterfly effect).
- These findings mean that serious events are inherently unpredictable and can happen at any time: each case can become unexpectedly complicated.
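As a purely illustrative sketch of the Swiss cheese idea (not part of the session material), one can treat each defense layer as an independent barrier with a small, assumed probability of having a "hole"; an accident occurs only when the holes of every layer line up. The layer count and probabilities below are arbitrary assumptions.

```python
import random

def simulate_accidents(n_errors=1_000_000,
                       hole_probs=(0.1, 0.05, 0.02, 0.01),
                       seed=42):
    """Monte Carlo sketch of Reason's Swiss cheese model.

    Each latent error meets the defense layers in turn; a layer stops it
    unless that layer happens to have a "hole" (probability hole_probs[i]).
    An accident occurs only when the holes of every layer line up.
    """
    rng = random.Random(seed)
    return sum(
        all(rng.random() < p for p in hole_probs)
        for _ in range(n_errors)
    )

# Expected accident rate = product of hole probabilities ≈ 1 in 1,000,000 here.
print(simulate_accidents(), "accidents per 1,000,000 errors")
```

Weakening or removing a single layer (dropping one entry from hole_probs) multiplies the accident rate accordingly, which is the model's central message: no individual barrier is reliable, but stacked imperfect barriers are.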
One of the keys to safety is to anticipate that an accident is always possible.
This is the essence of Murphy's Law: if something in a system can go wrong, sooner or later it will.
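A hedged numerical illustration (the per-case risk used here is an assumed figure, not one from the session): the probability of at least one failure in n independent cases, each with per-case failure probability p, is 1 - (1 - p)^n, which approaches certainty as n grows.

```python
def prob_at_least_one_failure(p: float, n: int) -> float:
    """Probability of at least one failure in n independent cases."""
    return 1.0 - (1.0 - p) ** n

# Assumed per-case risk of 1 in 10,000
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} cases -> {prob_at_least_one_failure(1e-4, n):.3f}")
# ~0.010, 0.095, 0.632 and 1.000 (0.99995...) respectively
```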
We have two modes for managing our response to an incident, depending on the circumstances:
• Low level of integration (Type I): automatisms, mental schemas, intuitions. This mode of operation cannot produce a new solution to an unknown problem.
• High level of integration (Type II): conscious analytical reflection. This search for a solution is slow and sequential, but it makes it possible to invent an original solution adapted to a new situation. If this search fails, the result is a cognitive error.
The tunnel effect (also called fixation or anchoring) is a major cognitive block that locks an individual into a single diagnosis or activity. The operator becomes obsessed with the option chosen to resolve the problem quickly and does not question his course of action despite conflicting data. Focused on managing the crisis, he loses the overall view of the situation.
There are different techniques to fight against cognitive errors, starting with raising awareness of this problem among practitioners, who tend to overestimate their abilities in this area.
Conversely, trying to eradicate error, or passing moral judgment on whoever makes a mistake, is not the right answer. What is needed is, on the one hand, to learn to manage errors and develop an attitude of active monitoring, and on the other hand, to build effective defense systems that correct any deviation immediately. For an error is not a fault: a fault involves the deliberate violation of an established rule, or negligence in the performance of a task.
There are several countermeasures:
- Failure algorithms allow simple response schemes to be memorized.
- Checklists are more reliable than individual memory.
- When a perilous situation is foreseeable, a strategy is established beforehand.
- At all times, it is good to have an action plan in mind in case of an unexpected problem, just as a pilot knows the possible diversion airports in case of a breakdown or incident on board.
- Knowledge of the cognitive errors that arise in acute situations makes it possible to limit their effects and helps maintain a degree of critical thinking.
- Performance in crisis situations is a function of experience: the simulator is an extremely effective way to train and to acquire the reflexes that make it possible to follow the appropriate procedures in stressful situations.
- A team is a functional unit whose performance is always greater than the sum of its members' individual performances.