With the recent tragic crash in Ethiopia, Captain Pierre Wannaz takes a step back and shares some thoughts about Air Data Systems.
#FlightSafety #Investigation #FlightDataAnimation #SafetyII #data #AirData
Tragedies that point to problems with Air Data Systems…
2018: a tragic year for flight safety. Two airliners were lost due to erroneous indications of their Air Data Systems.
- First, an AN-148 near Moscow was lost after pitot icing (apparently a checklist omission: the pilots failed to switch on the pitot/probe heating system).
- Second, a 737 near Jakarta encountered problems probably related to erroneous AoA (Angle of Attack) values.
Now comes the recent tragic crash in Ethiopia, in circumstances that initially seem very similar to the Lion Air accident in Jakarta… Shortly after take-off, the crew performed a level-off, followed by a sudden dive towards the ground. According to The Aviation Herald, two listeners on frequency independently reported that the crew declared an emergency shortly after a normal departure, while in the initial climb, saying they had unreliable airspeed indications and difficulties controlling the aircraft. This loss is still a mystery… let us hope the details become available quickly, so that we can understand what happened and take the necessary measures to avoid such losses!
Man against the machine…
On all aircraft types I have flown, the unreliable airspeed checklist appears simple to apply…
Switch off the automation, fly a pitch and set engine power according to the QRH (Quick Reference Handbook).
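As an illustration only, that memory-item logic can be sketched as a simple lookup. Every number and flight-phase name below is a hypothetical placeholder, not taken from any real QRH:

```python
# Hypothetical sketch of the unreliable-airspeed memory items:
# disconnect the automation, then fly a fixed pitch and set a fixed power
# for the current phase of flight.
# All values below are illustrative placeholders, NOT real QRH figures.

PITCH_POWER_TABLE = {
    # phase: (pitch attitude in degrees, thrust setting in % N1)
    "takeoff_climb": (12.5, 95.0),
    "clean_climb":   (10.0, 90.0),
    "cruise":        (3.0,  80.0),
    "descent":       (0.0,  55.0),
}

def unreliable_airspeed_memory_items(phase: str) -> dict:
    """Return the actions of this simplified procedure for a flight phase."""
    pitch, thrust = PITCH_POWER_TABLE[phase]
    return {
        "autopilot": "OFF",
        "autothrust": "OFF",
        "flight_directors": "OFF",
        "pitch_deg": pitch,
        "thrust_pct_n1": thrust,
    }

actions = unreliable_airspeed_memory_items("clean_climb")
print(actions)
```

The point of the sketch is how short the nominal procedure is: three switches off, one pitch, one power setting.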
So why are trained crews sometimes unable to apply these apparently simple checklist items?
- First, most modern airliners still rely on an old-fashioned system invented in 1732 by Mr. Pitot and on AoA vanes dating from the beginning of controlled flight. Both are highly subject to mechanical and environmental influences that can lead to wrong indications.
- Second, because the reality of an air data system fault is much more complex. A modern aircraft is a network of interconnected computers. If key data like airspeed or angle of attack are erroneous, many computers in this network will be affected and start to act in ways the pilots are not used to!
Under the simple computing principle: garbage in -> computer -> garbage out
Erroneous speed information can simultaneously affect the flight control laws, flight control travel limitations, the flight management system and numerous indications, and can trigger audio warnings, visual warnings and numerous checklists on the ECAM (Electronic Centralised Aircraft Monitor).
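This fan-out of a single bad input can be sketched schematically. The component names and thresholds below are invented for illustration and do not model any real avionics:

```python
# Schematic sketch of "garbage in -> computer -> garbage out": one erroneous
# airspeed value propagates to every system that consumes it.
# Component names and thresholds are illustrative, not a real avionics model.

def flight_control_law(ias_kt: float) -> str:
    # A hypothetical law that limits control authority at high indicated speed.
    return "limited_authority" if ias_kt > 350 else "normal_authority"

def overspeed_warning(ias_kt: float) -> bool:
    return ias_kt > 350  # hypothetical maximum operating speed

def fms_time_estimate_min(ias_kt: float) -> float:
    # Hypothetical FMS prediction: minutes to cover 100 NM at that speed.
    return 100 / ias_kt * 60

true_airspeed = 250.0    # what the aircraft is actually flying
sensed_airspeed = 420.0  # erroneous value from a blocked or iced probe

for consumer in (flight_control_law, overspeed_warning, fms_time_estimate_min):
    print(consumer.__name__, "->", consumer(sensed_airspeed))
# Every consumer is now wrong at the same time: a single failed sensor
# presents itself to the crew as several simultaneous "failures".
```

The design point is that none of the downstream systems can tell a valid input from an invalid one; each one dutifully computes garbage.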
The “Unreliable Airspeed/AoA” diagnosis has to be made by the pilots while they are already fighting with degraded flight control authority, or against computers triggering protection manoeuvres that are erroneous for the aircraft’s real flight status… man is fighting against the machine!
In such a complex situation, usually, no instrument points to the root cause of these numerous sudden failures. It is a difficult and challenging situation.
On top of that, the situation can be extremely confusing, confronting pilots with the simultaneous occurrence of a “stall warning” and an “overspeed warning”, as one depends on the AoA and the other on the indicated airspeed.
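A toy example can show how two independent sensor paths produce contradictory warnings at the same moment. The thresholds are invented for illustration, not real certification values:

```python
# Toy illustration of contradictory warnings from two independent sensors:
# the stall warning is driven by AoA, the overspeed warning by indicated
# airspeed. Both thresholds are invented, not real certification values.

STALL_AOA_DEG = 12.0  # hypothetical stall-warning AoA threshold
VMO_KT = 350.0        # hypothetical maximum operating speed

def warnings(aoa_deg: float, ias_kt: float) -> list:
    active = []
    if aoa_deg > STALL_AOA_DEG:
        active.append("STALL")
    if ias_kt > VMO_KT:
        active.append("OVERSPEED")
    return active

# An AoA vane stuck at a high value, plus an over-reading pitot probe:
print(warnings(aoa_deg=18.0, ias_kt=380.0))  # both warnings fire at once
```

Physically, a real aircraft cannot be stalled and overspeeding at the same time; the contradiction itself is the clue that at least one sensor is lying, but nothing on the flight deck says which one.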
In a case I studied in detail, I never saw a simulator able to reproduce the full complexity of such an event. As an SFE/TRI (A), in the simulator exercises I sometimes give my trainees, I basically see two types of reaction:
- The ones who, in doubt, dare to switch off as much of the automation as possible in order to fly a pitch and a power… after such a move, the aircraft is suddenly easy to fly and the exercise becomes almost boring in the simulator.
- The ones who want to keep as much automation as possible – note that most operating procedures call for an “optimum use of automation”, often erroneously interpreted as “maximum use of automation” – and who try to identify information that might still be reliable.
These crews usually have a very rough ride and face an extremely challenging situation, especially when some protection, such as high-speed or angle-of-attack protection (correct or erroneous), starts to interfere with their flying… they are fighting against the computers.
The industry, at least in Europe, has recognized the need to improve simulator training, implementing a 3-year cycle covering everything from approach to stall up to upset recovery.
Time for the human in the system, now, for real!
Very well, but why not make maximum use of the “human in the system” as a safety protection?
Let’s implement a real approach to Safety II, as today we have the tools to support it!
Many crews worldwide were confronted with partial or total loss of air data on the A330/340 before the loss of AF447. Around 40 flights reported the loss of their speed indication in flight and survived. In most of their reports, they mentioned the confusion, the initial surprise and the startle effect (see my previous article about this subject here), until they were able to identify the root cause of their problems.
Today, real-time flight animation exists! So why are videos of such cases not available to the pilots’ community?
Case studies of complex failures (air data problems, double hydraulics problems…) could easily be made available within airlines, in a case-study forum for example. Or why not imagine that today’s electronic quick reference handbooks could contain a link enabling pilots to see a real-time animation of all the failing components, combined with the dynamics of such a situation?
Such animation tools have existed for a long time in flight safety departments… so why didn’t these departments recognize the potential danger of these problems?
- Because our approach is still Safety I: if an accident or serious underperformance is not measured, it is difficult to recognize the inherent potential danger.
- With Safety II, imagine if just one of the 80 A330/340 pilots who experienced a speed loss and such a complex situation had pressed the “Event” button. Using CEFA AMS, minutes after landing, they would have had an animation available on their tablet showing the dynamics and the complexity of the situation they were facing. Such an animation would be very helpful for a complete understanding of what happened.
On top of that, their subjective perception of the danger and difficulties encountered could be of extreme help to other crew members, helping them to be better prepared.
With such an animation tool, airlines, manufacturers and flight safety departments would be in a position to share valuable real-life data from incidents that had a positive outcome: a big step ahead, allowing them to act proactively instead of reactively!
What do you think? Let’s discuss this huge potential and improve flight safety together!