
Right Attitudes

Ideas for Impact

Aviation

Many Hard Leadership Lessons in the Boeing 737 MAX Debacle

August 24, 2021 By Nagesh Belludi

The U.S. House committee’s report on Boeing’s 737 MAX disaster makes interesting reading on contemporary leadership, particularly the pressures of rapid product development.

The rush to market and a culture of contributory negligence and concealment conspired to ensure that a not-yet-airworthy plane carried passengers into service, resulting in two fatal accidents and a long grounding.

Boeing’s design and development of the 737 MAX was marred by technical design failures, a lack of transparency with both regulators and customers, and efforts to downplay or disregard concerns about the operation of the aircraft.

Of particular importance are the “management failures,” “inherent conflicts of interest,” and “grossly insufficient oversight” at both Boeing and its regulator, the Federal Aviation Administration (FAA). Boeing failed to counterbalance the design limitations and the cost- and schedule-pressures with attention to customer safety. Leadership was fixated on fending off the runaway success of the Airbus A320neo program.

The company relied on too many unexamined technical assumptions and never gave itself the space and time to question them. Boeing’s “culture of concealment” and “unwillingness to share technical details” are the report’s most damning indictments. Employees spoke up but went unheard; indeed, their voices were suppressed.

Wondering what to read next?

  1. The Boeing 737 MAX’s Achilles Heel
  2. Availability Heuristic: Our Preference for the Familiar
  3. How Stress Impairs Your Problem-Solving Capabilities: Case Study of TransAsia Flight 235
  4. Be Smart by Not Being Stupid
  5. How to Guard Against Anything You May Inadvertently Overlook

Filed Under: Leadership Tagged With: Aviation, Biases, Change Management, Decision-Making, Problem Solving, Risk, Thinking Tools

Lessons from David Dao Incident: Watch Out for the Availability Bias!

August 23, 2021 By Nagesh Belludi

In the weeks and months after United Airlines’ David Dao incident and the ensuing customer service debacle, news of all kinds of disruptive airline incidents, coldblooded managers, and inconsiderate airline staff showed up everywhere.

The United incident raised everyone’s awareness of airline incidents. Predictably, the media started drawing attention to all sorts of airline incidents—fights on airplanes, confusion at airports, seats taken from small children, insects in inflight meals, snakes on the plane—affecting every airline, large and small. However, such unpleasant incidents are rare; thousands of flights every day experience nothing of the sort.

Parenthetically, the underlying problem that led to the David Dao incident wasn’t unique to United; it could have happened at any airline. All airlines had similar policies regarding involuntary denied boarding and prioritizing crew repositioning. Every other airline, I’m sure, felt lucky the incident didn’t happen on its watch.

In the aftermath of the incident, many people vowed to boycott United. Little by little, that negative consumer sentiment faded away as the backlash—and media coverage—over the incident diminished.

Availability bias occurs when we base judgments on whatever examples come to mind easily rather than on complete information.

The David Dao incident’s media coverage is an archetypal case of the Availability Bias (or Availability Heuristic) at work. Humans are inclined to judge how likely something is by how easily comparable, recent examples come to mind. Moreover, examples that carry a fierce emotional weight tend to come to mind quickly.

The availability heuristic warps our perception of real risks. If we’re assessing whether something is likely to happen and a similar event has occurred recently, we’re much more liable to expect it to happen again.

What we remember is shaped by many things, including our beliefs, emotions, and the intensity and frequency of exposure, particularly in mass media. When rare events occur, as was the case with the David Dao incident, they become especially memorable. If you’re in a car accident involving a Chevy, you’re likely to rate the odds of getting into another accident in a Chevy much higher than base rates would suggest.

If you are aware of the availability bias and begin to look for it, you will be surprised how often it shows up in all kinds of situations. As with many other biases, we can’t remove this natural tendency. Still, by staying aware of it, we can let our rational minds account for the bias and make better decisions.

Idea for Impact: Don’t be disproportionately swayed by what you remember. Don’t underestimate or overestimate a risk, and don’t focus on the wrong risks. Don’t overreact to recent events.

Wondering what to read next?

  1. Why Your Judgment Sucks
  2. The Unthinking Habits of Your Mind // Book Summary of David McRaney’s ‘You Are Not So Smart’
  3. What if Something Can’t Be Measured
  4. The Upsides of Slowing Down
  5. Be Smart by Not Being Stupid

Filed Under: Mental Models, Sharpening Your Skills Tagged With: Aviation, Biases, Change Management, Critical Thinking, Decision-Making, Psychology, Thought Process

Why Is (Was!) Airline Boarding a Mess?

June 11, 2020 By Nagesh Belludi

Prescript: I drafted and pre-scheduled this article late last year … who would have imagined that life, and the airline industry specifically, could be utterly derailed by a lethal virus?

Boarding an airplane is one of the most inefficient aspects of flying.

There’s no money to be made when a plane is sitting on the ground. Little wonder, then, that airlines have attempted for decades to improve the boarding process—usually with little to no success.

Airlines and airports have engaged industrial engineers, logistics experts, and university researchers to study how to get passengers into their planes in a timely fashion. They’ve experimented with back-to-front, window-to-aisle, every-other-row, and many seating combinations thereof. The improvements have turned out to be marginal at best.

A Little Too Theoretical to Work Well

No airline seems to have cracked the code for efficient boarding, for the same old reasons—most of the sequencing models and boarding trials are a little too theoretical for reality and reductive about human behavior.

All the boarding methods implicitly assume that passengers are orderly and don’t create frustrating bottlenecks. But, when it comes down to it, passengers simply don’t conform to the airline’s preferred boarding order. They don’t show up at the gate on time and organize themselves precisely in the airline’s prescribed sequence. Once onboard, they don’t place their carryon bags into bins promptly and clear the aisle swiftly.

To make matters worse, airlines need to treat some passengers preferentially—the highest-paying customers, loyal frequent flyers, military personnel, people with special needs, and families with young kids must board before general boarding. Then there are complications arising from making passengers pay for carryon bags. Passengers with bare-bones tickets are not only given middle seats but also made to board last and then scramble for overhead bin space for their bags.

All these complexities add a significant burden on gate agents and flight attendants, who, while making every effort for an on-time departure, must ensure that passengers board when they’re supposed to, carry only paid-for carryon bags, and use overhead bin space near their seats.

Basic Human Nature is the Inhibiting Factor

Given the not-so-orderly-and-decorous tendencies of humans, no boarding method has statistically proved to be consistently and reliably better than the others. As a result, airlines fall back on a variety of general boarding schemes, usually some combination of back-to-front and window-to-aisle arrangements.
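To see why the sequencing models disappoint, here is a minimal, hypothetical toy simulation of a single-aisle cabin (my own sketch, not any airline’s or researcher’s actual model): passengers advance one aisle position per time step and block the aisle for a random bag-stowing delay at their row. The row count, delays, and boarding orders are all invented assumptions.

  import random

  ROWS, SEATS_PER_ROW = 30, 6

  def boarding_time(order, seed=0):
      """Time steps until everyone in `order` (a list of target rows, one per
      passenger, in boarding sequence) is seated. Seat shuffling within a row
      is ignored; only aisle blockage while stowing bags is modeled."""
      rng = random.Random(seed)
      stow_left = [rng.randint(1, 4) for _ in order]  # random stowing delay per passenger
      queue = list(range(len(order)))                 # passenger ids waiting at the door
      pos = {}                                        # passenger id -> aisle position (1..ROWS)
      seated, t = 0, 0
      while seated < len(order):
          t += 1
          occupied = set(pos.values())
          for pid in sorted(pos, key=pos.get, reverse=True):  # frontmost passengers move first
              p = pos[pid]
              if p == order[pid]:                     # at own row: stow bags, blocking the aisle
                  stow_left[pid] -= 1
                  if stow_left[pid] == 0:
                      del pos[pid]
                      occupied.discard(p)
                      seated += 1
              elif p + 1 not in occupied:             # step forward if the next aisle cell is free
                  pos[pid] = p + 1
                  occupied.discard(p)
                  occupied.add(p + 1)
          if queue and 1 not in occupied:             # next passenger enters at the front door
              pos[queue.pop(0)] = 1
      return t

  back_to_front = [r for r in range(ROWS, 0, -1) for _ in range(SEATS_PER_ROW)]
  random_order = back_to_front[:]
  random.Random(7).shuffle(random_order)
  print("back-to-front:", boarding_time(back_to_front))
  print("random order :", boarding_time(random_order))

The point isn’t the specific numbers; the outcome swings with the assumed stowing delays and passenger compliance, which is exactly why such tidy models translate poorly to real gates.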

In my experience, the “free-for-all” seating that Southwest Airlines operates appears the fastest. Southwest’s passengers don’t get assigned seat numbers, so they have the freedom to sit anywhere they want. They line up for boarding in the order they check in and reach the gate. Once onboard, they move quickly to find the best available seats and keep out of each other’s way. Southwest is also helped by the fact that passengers tend to have fewer and smaller carryon bags because it doesn’t charge for checked luggage.

Wondering what to read next?

  1. The Loss Aversion Mental Model: A Case Study on Why People Think Spirit is a Horrible Airline
  2. Steering the Course: Leadership’s Flight with the Instrument Scan Mental Model
  3. How to See Opportunities Your Competition Doesn’t
  4. Intentions, Not Resolutions
  5. Do You Really Need More Willpower?

Filed Under: Mental Models Tagged With: Aviation, Customer Service, Discipline

Five Where Only One is Needed: How Airbus Avoids Single Points of Failure

April 6, 2020 By Nagesh Belludi

In my case study of the Boeing 737 MAX aircraft’s anti-stall mechanism, I examined how relying on data from only one Angle-of-Attack (AoA) sensor caused two accidents and the aircraft’s consequent grounding.

A single point of failure is a system component whose failure renders the entire system unavailable, dysfunctional, or unreliable. In other words, if many things rely on one component within your system, and that component breaks, you are counting down to a catastrophe.

Case Study: How Airbus Builds Multiple Redundancies to Minimize Single Points of Failure

As the Boeing 737 MAX disaster has emphasized, single points of failure in products, services, and processes may spell disaster for organizations that have not adequately identified and mitigated these critical risks. Reducing single points of failure requires a thorough knowledge of the vital systems and processes that an organization relies on to be successful.

Since the dawn of flying, reliance on one sensor has been anathema.

The Airbus A380 aircraft, for example, features 100,000 different wires—that’s 470 km of cables weighing some 5,700 kg. Airbus’s wiring includes double or triple redundancy to mitigate the risk of single points of failure caused by defective wiring (e.g., corrosion, chafing of insulation, or loose contacts) or cut wires (e.g., from debris penetrating the aircraft structure, as in an engine burst).

The Airbus fly-by-wire flight control system has quadruple redundancy, i.e., it has five flight control computers where only one is needed to fly the aircraft. Consequently, an Airbus aircraft can afford to lose four of these computers and still be flyable. Of the five flight control computers, three are primary computers and two are secondary (backup) computers. The primary and the secondary flight control computers use different processors, are designed and supplied by different vendors, feature different chips from different manufacturers, and run different software developed by different teams using different programming languages. All this redundancy reduces the probability of common hardware and software errors that could lead to system failure.

Redundancy is Expensive but Indispensable

The multiple redundant flight control computers continuously keep track of each other’s output. If one computer produces deviant results for some reason, the flight control system as a whole excludes the results from that aberrant computer in determining the appropriate actions for the flight controls.
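As a rough illustration of this cross-checking idea (a minimal sketch, not Airbus’s actual voting logic; the tolerance and the numbers are made up), redundant channels can be compared against their consensus and any deviant output excluded:

  from statistics import median

  def vote(commands, tolerance=0.5):
      """Given one pitch command per redundant computer (odd count assumed),
      drop outputs that deviate from the median by more than `tolerance`,
      then average the agreeing channels."""
      consensus = median(commands)
      agreeing = [c for c in commands if abs(c - consensus) <= tolerance]
      excluded = [c for c in commands if abs(c - consensus) > tolerance]
      return sum(agreeing) / len(agreeing), excluded

  # Five computers, one of which (the 9.7) has gone aberrant:
  command, rejected = vote([2.1, 2.0, 9.7, 2.2, 2.1])
  print(command)    # ~2.1: the deviant channel's output is ignored
  print(rejected)   # [9.7]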

By replicating critical sensors, computers, and actuators, Airbus provides for a “graceful degradation” state, where essential facilities remain available, allowing the pilot to fly and land the plane. If an Airbus loses all engine power, a ram air turbine can power the aircraft’s most critical systems, allowing the pilot to glide and land the plane (as happened with Air Transat Flight 236).

Idea for Impact: Build redundancy to prevent system failure from the breakdown of a single component

When you devise a highly reliable system, identify potential single points of failure and investigate how these risks and failure modes can be mitigated.

For every component of a product or a service you work on, identify single points of failure by asking, “If this component fails, does the rest of the system still work, and, more importantly, does it still do the function it is supposed to do?”

Add redundancy to the system so that failure of any component does not mean failure of the entire system.

If you can’t build redundancy into a system due to some physical or operational complexity, establish frequent inspections and maintenance to keep the system reliable.
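As a minimal sketch of that question in code (the component map below is hypothetical, invented purely for illustration), you can scan each critical function for how many independent providers it has; anything served by a single component is a single point of failure:

  # Hypothetical map of critical functions to the redundant components that provide them.
  system = {
      "pitch control":    ["FCC-1", "FCC-2", "FCC-3", "SEC-1", "SEC-2"],
      "angle-of-attack":  ["AoA sensor (left)"],                          # no backup -> SPOF
      "electrical power": ["engine generators", "APU", "ram air turbine"],
  }

  def single_points_of_failure(system):
      """Return the functions that depend on exactly one component."""
      return [function for function, providers in system.items() if len(providers) <= 1]

  print(single_points_of_failure(system))   # ['angle-of-attack']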

Postscript: In people-management, make sure that no one person has sole custody of some critical institutional knowledge, creativity, reputation, or experience that makes him or her indispensable to the organization’s business continuity and its future performance. If that person should leave, the organization suffers the loss of that valued standing and expertise. See my article about this notion of key-person dependency risk, the threat posed by an organization’s or a team’s over-reliance on one or a few individuals.

Wondering what to read next?

  1. How Stress Impairs Your Problem-Solving Capabilities: Case Study of TransAsia Flight 235
  2. Steering the Course: Leadership’s Flight with the Instrument Scan Mental Model
  3. What Airline Disasters Teach About Cognitive Impairment and Decision-Making Under Stress
  4. Be Smart by Not Being Stupid
  5. How to Solve a Problem By Standing It on Its Head

Filed Under: Business Stories, Sharpening Your Skills Tagged With: Aviation, Critical Thinking, Decision-Making, Innovation, Mental Models, Problem Solving, Risk, Thought Process

This is Not Responsible Leadership: Boeing’s CEO Blames Predecessor

March 12, 2020 By Nagesh Belludi

In January, Boeing’s former Chairman, David Calhoun, became CEO after the board fired Dennis Muilenburg. Less than two months later, in a New York Times interview last week, Calhoun blamed Muilenburg for the misfortunes plaguing Boeing:

  • Asked why he wouldn’t give up his salary (he gets a $7 million bonus if he can get the 737 MAX back into the sky) in light of the 737 MAX-related woes, Calhoun declared, “… ’cause I’m not sure I would have done it [taken the job without a salary].”
  • On Boeing’s systemic culture problem (a steady trickle of revelations has exposed software problems and corners being cut in the engineering and certification processes), Calhoun characterized the contents of the leaked emails as unacceptable but also downplayed the issue: “… I see a couple of people who wrote horrible emails.”
  • Calhoun has been on Boeing’s board since 2009. While the MAX crisis snowballed and Boeing’s crisis management went from bad to worse, Calhoun took over as the board’s chairman. In that capacity, he fully endorsed Muilenburg, saying, “from the vantage point of our board, he has done everything right,” “he didn’t create this problem,” and he “shouldn’t resign.” Now, in last week’s interview, Calhoun had a different take: “Boards are invested in their CEOs until they’re not. We had a backup plan. I am the backup plan.”
  • Acknowledging that Muilenburg boosted production rates before the supply chain was ready, Calhoun declared, “I’ll never be able to judge what motivated Dennis, whether it was a stock price that was going to continue to go up and up, or whether it was just beating the other guy to the next rate increase. If anybody ran over the rainbow for the pot of gold on stock, it would have been him.”

Calhoun and the rest of Boeing’s board of directors were part of the picture right from the outset. The roots of Boeing’s current crisis lie in decisions made by the company’s leadership over the past decade and fully sanctioned by the board. The board is wholly accountable for everything that happens under its authority.

Idea for Impact: Blame is an Accountability Killer

This is not responsible leadership. A true leader doesn’t pass the blame for failure but graciously accepts responsibility for the problems he inherited. Even though Boeing’s lapses may not be traceable directly to him in his capacity as a member of the company’s board, Calhoun should have acknowledged his—and the rest of the board’s—failure to keep an eye on Boeing’s leadership team over the last decade.

Leading with integrity means taking personal responsibility. It’s tempting for people to take flight and avoid the personal consequences of what happened, to reject personal responsibility, and to pass the blame on to other people.

Calhoun could have acknowledged that the board’s actions had a role in the situation. By facing up to these criticisms and admitting that Boeing and its board could have done things better, Calhoun could have encouraged others at Boeing to do the same, especially considering that he must overhaul the company culture from the top down.

Wondering what to read next?

  1. The Cost of Leadership Incivility
  2. Five Signs of Excessive Confidence
  3. Power Inspires Hypocrisy
  4. Books in Brief: ‘Flying Blind’ and the Crisis at Boeing
  5. Shrewd Leaders Sometimes Take Liberties with the Truth to Reach Righteous Goals

Filed Under: Effective Communication, Leadership Tagged With: Attitudes, Aviation, Governance, Humility, Integrity, Leadership, Leadership Lessons, Respect

The Boeing 737 MAX’s Achilles Heel

January 7, 2020 By Nagesh Belludi

Two thousand nineteen was one of the most turbulent years in Boeing’s history. Its 737 MACS (pardon the pun) troubles went from bad to worse to staggering when aviation regulators around the world grounded the aircraft and a steady trickle of disclosures increasingly exposed software problems and corners being cut.

The flaw in this aircraft, an anti-stall mechanism that relied on data from a single sensor, offers a particularly instructive case study of the notion of a single point of failure.

One Fault Could Cause an Entire System to Stop Operating

A single point of failure of a system is an element whose failure can result in the failure of the entire system. (A system may have multiple single points of failure.)

Single points of failure are eliminated by adding redundancy—by duplicating the critical components or simply backing them up, so that the failure of any such element does not trigger a failure of the entire system.

Boeing Mischaracterized Its Anti-Stall System as Less-than-Catastrophic in Its Safety Analysis

The two 737 MAX crashes (with Lion Air and Ethiopian Airlines) originated from a late change that Boeing made in a trim system called the Maneuvering Characteristics Augmentation System (MCAS).

Without the pilot’s input, the MCAS could automatically nudge the aircraft’s nose downwards if it detected that the aircraft was pointing up at a dangerous angle, for instance, at high thrust during take-off.

Reliance on One Sensor is Anathema in Aviation

The MCAS was previously “approved” by the Federal Aviation Administration (FAA). Nevertheless, Boeing made some design changes after the FAA approval without checking with the FAA again. The late changes were made to improve MCAS’s response during low-speed aerodynamic stalls.

The MCAS system relied on data from just one Angle-of-Attack (AoA) sensor. With no backup, if this single sensor were to malfunction, erroneous input from that sensor would trigger a corrective nosedive just after take-off. This catastrophe is precisely what happened during the two aircraft crashes.

The AoA sensor thus became a single point of failure. Despite the existence of two angle-of-attack sensors on the nose of the aircraft, the MCAS used data from only one of them and did not require agreement between the two sensors before inferring that the aircraft was stalling. Further, Lion Air did not pay to equip its aircraft with a warning light that could have alerted the crew to a disagreement between the AoA sensors.

Boeing Missed Safety Risks in the Design of the MAX’s Flight-Control System

Reliance on one sensor’s data is an egregious violation of a long-standing engineering principle about eliminating single points of failure. Some aircraft use three duplicate systems for flight control: if two systems agree and the third does not, the flight control software ignores the odd one out.

As if the dependence on one sensor were not enough, Boeing, blinded by time- and price-pressure to stay competitive with its European rival Airbus, intentionally chose to do away with any reference to MCAS in the pilot manuals to spare its airline customers additional pilot training. Indeed, Boeing did not even disclose the existence of the MCAS on the aircraft.

Boeing allows pilots to switch the trim system off to override the automated anti-stall system, but the pilots of the ill-fated Lion Air and Ethiopian Airlines flights failed to do so.

Idea for Impact: Redundancy is the Sine Qua Non of Reliable Systems

In preparation for airworthiness recertification for the 737 MAX, Boeing has corrected the MCAS blunder by having its trim software compare inputs from two AoA sensors, alerting the pilots if the sensors’ readings disagree, and limiting MCAS’s authority.
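A schematic sketch of that kind of cross-check (not Boeing’s actual flight software; the disagreement threshold and authority limit below are invented for illustration) might look like this:

  DISAGREE_THRESHOLD_DEG = 5.5     # illustrative value, not the certified figure
  MAX_NOSE_DOWN_COMMAND = 2.5      # cap on automatic trim authority (arbitrary units)

  def mcas_style_command(aoa_left, aoa_right, requested_nose_down):
      """Return (command, alert). No automatic command is issued if the two
      angle-of-attack sensors disagree beyond the threshold."""
      if abs(aoa_left - aoa_right) > DISAGREE_THRESHOLD_DEG:
          return 0.0, "AOA DISAGREE: automatic trim inhibited, alert the crew"
      command = min(requested_nose_down, MAX_NOSE_DOWN_COMMAND)
      return command, None

  print(mcas_style_command(14.0, 13.2, 4.0))   # sensors agree: command capped at 2.5
  print(mcas_style_command(22.0, 5.0, 4.0))    # sensors disagree: no automatic command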

One key takeaway from the MCAS disaster is this: when you devise a highly reliable system, identify all single points of failure and investigate how these risks and failure modes can be mitigated. Examine whether every component of a product or a service you work on is a single point of failure by asking, “If this component fails, does the rest of the system still work, and, more importantly, does it still do the function it is supposed to do?”

Wondering what to read next?

  1. Availability Heuristic: Our Preference for the Familiar
  2. Many Hard Leadership Lessons in the Boeing 737 MAX Debacle
  3. Be Smart by Not Being Stupid
  4. How to Guard Against Anything You May Inadvertently Overlook
  5. What Airline Disasters Teach About Cognitive Impairment and Decision-Making Under Stress

Filed Under: Mental Models, Sharpening Your Skills Tagged With: Aviation, Biases, Decision-Making, Problem Solving, Risk, Thinking Tools

Two Leadership Lessons from Oscar Munoz, United Airlines CEO

December 12, 2019 By Nagesh Belludi

United Airlines announced last week that CEO Oscar Munoz and President Scott Kirby would transition to new roles as executive chairman and CEO respectively in May 2020.

Munoz was very good for the airline. He deserves kudos for getting United back on track, for improving the company’s culture, employee morale, brand image, and customer experience, and for hiring Kirby.

  • Munoz, who came to United from the railroad company CSX, had hitherto gained considerable experience while serving for 15 years on United’s (via its predecessor Continental Airlines’) board. But, when he became CEO in 2015, he stated that he hadn’t realized how bad things had gotten at United. That admission reflects poorly on his board tenure—board members are expected to be clued up about the day-to-day specifics of the company and to have visibility into the pulse of the company’s culture beyond what its senior management reports. Alas, board members not only owe their cushy jobs to the CEOs and the top leadership but also build long, cozy relationships with them.
  • Munoz will be remembered chiefly for the David Dao incident and the ensuing customer service debacle. The video of Dao being dragged out of his seat screaming was seen around the world. While the dragging was not Munoz’s fault (the underlying problem wasn’t unique to United), the company’s horrendous response to the incident was. However, Munoz is worthy of praise for using the event as a learning exercise and an impetus for wholesale change in United’s operations and employee culture. In the aftermath of the incident, many customers vowed to boycott United flights, but that sentiment passed as the backlash over the incident waned. Even so, the David Dao incident need not have happened for United’s operational and cultural changes to materialize.

Now then, Scott Kirby is a hardnosed, “Wall Street-first, customer loyalty-last” kinda leader. Even though Kirby has made United an operationally reliable airline, his manic focus on cost-cutting has made him less popular with United’s staff and its frequent fliers. Let’s hope he’ll keep the momentum and preserve the good that Munoz has wrought.

Wondering what to read next?

  1. Books in Brief: ‘Flying Blind’ and the Crisis at Boeing
  2. Tylenol Made a Hero of Johnson & Johnson: A Timeless Crisis Management Case Study
  3. Book Summary of Nicholas Carlson’s ‘Marissa Mayer and the Fight to Save Yahoo!’
  4. Book Summary of Donald Keough’s ‘Ten Commandments for Business Failure’
  5. This is Not Responsible Leadership: Boeing’s CEO Blames Predecessor

Filed Under: Effective Communication, Leadership, The Great Innovators Tagged With: Aviation, Change Management, Ethics, Governance, Leadership Lessons, Learning, Problem Solving, Transitions, Winning on the Job

Lessons from the World’s Worst Aviation Disaster // Book Summary of ‘The Collision on Tenerife’

November 5, 2019 By Nagesh Belludi

Jon Ziomek’s nonfiction history book Collision on Tenerife (2018) is the result of years of analysis of the world’s worst aviation disaster, which occurred on Tenerife Island in the Canary Islands of Spain.

Distinct Small Errors Can Become Linked and Amplified into a Big Tragedy

On 27-March-1977, two fully loaded Boeing 747 passenger jets operated by Pan American World Airways (Pan Am) and KLM Royal Dutch Airlines collided on the runway, killing 583 passengers and crew on the two airplanes. Only 61 survived—all from the Pan Am jet, including its pilot.

These two flights, and a few others, were diverted to Tenerife after a bomb went off at the Gran Canaria Airport in Las Palmas, their original destination. Tenerife was not a major airport—it had a single runway, and taxi and parking space were limited. After the Las Palmas airport reopened, flights were cleared for takeoff from Tenerife, but fog rolled in over Tenerife, reducing visibility to less than 300 feet. Several airplanes that had been diverted to Tenerife had blocked the taxiway and the parking ramp. Therefore, the KLM and Pan Am jets taxied down the single runway in preparation for takeoff, the Pan Am behind the KLM.

At one end of the runway, the KLM jet turned 180 degrees into position for takeoff. In the meantime, the Pan Am jet was still taxiing on the runway, having missed its taxiway turnoff in the fog. The KLM pilot jumped the gun and started his take-off roll before he got clearance from air traffic control.

When the pilots of the two jets caught sight of each other’s airplanes through the fog, it was too late for the Pan Am jet to clear off the runway onto the grass and for the KLM jet to abort the takeoff. The KLM pilot lifted his airplane off the runway prematurely but could not avoid barreling into the Pan Am’s fuselage at 240 km/h. Both jets exploded into flames.

The accident was blamed on miscommunication—a breakdown of coordinated action, vague language from the control tower, the KLM pilot’s impatience to take off without clearance, and the distorted cross-talk of the KLM and Pan Am pilots and the controllers on a common radio channel.

Breakdown of Coordination Under Stress

Sweeping changes were made to international airline regulations following the accident: cockpit procedures were changed, standard phrases were introduced, and English was emphasized as a common working language.

In Collision on Tenerife, Jon Ziomek, a journalism professor at Northwestern University, gives a well-written, detailed account of all the mistakes leading up to the crash and its aftermath.

The surviving passengers’ first- and second-hand accounts recall the horror of those passengers on the right side of the Pan Am jet who saw the lights of the speeding KLM 747, just as the Pan Am pilot was hastily turning his airplane onto the grass to avoid the collision.

Ziomek describes how passengers escaped. Some had to make the difficult choice of leaving loved ones or friends and strangers behind.

Dorothy Kelly … then spotted Captain Grubbs lying near the fuselage. Badly burned and shaken by his jump from the plane, he could not move. “What have I done to these people?” he yelled, pounding the ground in anguish. Kelly grabbed him under his shoulders and urged “Crawl, Captain, crawl!”

Recommendation: Read Jon Ziomek’s Collision on Tenerife

Some of the bewildering details make for difficult reading—especially the psychological effects (post-traumatic stress disorder) on the surviving passengers. But Jon Ziomek’s Collision on Tenerife is important reading, providing a comprehensive picture of the extensive coordination required in aviation, the importance of safety and protocols, and how some humans can freeze in shock while others spring into action.

The key takeaway is the recognition of how small errors and problems (an “error chain”) can quickly become linked and amplified into disastrous outcomes.

Wondering what to read next?

  1. “Fly the Aircraft First”
  2. Under Pressure, The Narrowing Cognitive Map: Lessons from the Tragedy of Singapore Airlines Flight 6
  3. What Airline Disasters Teach About Cognitive Impairment and Decision-Making Under Stress
  4. How Stress Impairs Your Problem-Solving Capabilities: Case Study of TransAsia Flight 235
  5. Lessons from the Princeton Seminary Experiment: People in a Rush are Less Likely to Help Others (and Themselves)

Filed Under: Business Stories, Effective Communication, Sharpening Your Skills Tagged With: Anxiety, Assertiveness, Aviation, Biases, Books for Impact, Conflict, Decision-Making, Mindfulness, Problem Solving, Stress, Thinking Tools, Worry

How Stress Impairs Your Problem-Solving Capabilities: Case Study of TransAsia Flight 235

October 1, 2019 By Nagesh Belludi

As I’ve examined previously, airline disasters are particularly instructive on the subjects of cognitive impairment and decision-making under stress.

Consider the case of TransAsia Airways Flight 235 that crashed in 2015 soon after takeoff from an airport in Taipei, Taiwan. Accident investigations revealed that the pilots of the ATR 72-600 turboprop erroneously switched off the plane’s working engine after the other lost power. Here’s a rundown of what happened:

  1. About one minute after takeoff, at 1,300 feet, engine #2 had an uncommanded autofeather failure. This is a routine engine failure—the aircraft is designed to be flown on one engine.
  2. The Pilot Flying misdiagnosed the problem and assumed that the still-functional engine #1 had failed. He retarded power on engine #1, and it promptly shut down.
  3. With power lost on both the engines, the pilots did not react to the stall warnings in a timely and effective manner. The Pilot Flying acknowledged his error, “wow, pulled back the wrong side throttle.”
  4. The aircraft continued its descent. The pilots rushed to restart engine #1, but the remaining altitude was not adequate to recover the aircraft.
  5. In a state of panic, the Pilot Flying clasped the flight controls and steered the aircraft perilously to avoid apartment blocks and commercial buildings before clipping a bridge and crashing into a river.

A High Level of Stress Can Diminish Your Problem-solving Capabilities

Thrown into disarray after a routine engine failure, the pilots of TransAsia flight 235 did not perform their airline’s abnormal and emergency procedures to identify the failure and implement the required corrective actions. Their ineffective coordination, communication, and error management compromised the safety of the flight.

The combination of sudden threat and extreme time pressure to avert a danger fosters a state of panic, in which decision-makers are inclined to commit themselves impulsively to courses of action that they will soon come to regret.

Idea for Impact: To combat cognitive impairment under stress, use checklists and standard operating procedures, as well as increased training on situational awareness, crisis communication, and emergency management.

Wondering what to read next?

  1. What Airline Disasters Teach About Cognitive Impairment and Decision-Making Under Stress
  2. Lessons from the World’s Worst Aviation Disaster // Book Summary of ‘The Collision on Tenerife’
  3. Under Pressure, The Narrowing Cognitive Map: Lessons from the Tragedy of Singapore Airlines Flight 6
  4. “Fly the Aircraft First”
  5. Lessons from the Princeton Seminary Experiment: People in a Rush are Less Likely to Help Others (and Themselves)

Filed Under: Business Stories, Leadership, Sharpening Your Skills Tagged With: Anxiety, Aviation, Biases, Decision-Making, Emotions, Mental Models, Mindfulness, Problem Solving, Risk, Stress, Thought Process, Worry

Make Friends Now with the People You’ll Need Later

June 10, 2019 By Nagesh Belludi

Addison Schonland of the commercial aerospace consulting firm AirInsight describes how the 737 MAX hullabaloo has exposed shortfalls in Boeing’s crisis communications and public relations:

The MAX crisis demonstrated to everyone in aerospace media how poorly Boeing was prepared for the recent crashes. More importantly, Boeing was unprepared for the onslaught of information that started to flow freely after the crashes. … In the absence of communications from Boeing, subject matter experts, whether highly qualified or not, become media stars overnight. An information vacuum cannot exist in today’s 24-hour news cycle and the Internet. The demand for information is great, and somebody will fill the vacuum.

The fact that Boeing had to clam up about the crashes for legal reasons is well understood. But the lack of transparency about design decisions, how the company made trade-off choices when creating the MAX, and issues related to the certification process left Boeing exposed.

Rival Airbus has traditionally reached out and established relationships with the aerospace media:

Airbus spends a lot of money once per year inviting the media to an event it calls “Innovation Days”. A week ago, at the most recent event, there were 130 media members from almost every country. Airbus briefed the media on both their products and plans …. Airbus provided access to the key leaders so attendees could speak with them and ask questions, with unrestricted Q&As with C-Suite executives who stayed for a substantial period of time.

Airbus clearly has an ROI. From the perspective of an attendee, and having attended several, is that the media comes away from the event informed. But more importantly, attendees feel they understand what Airbus is doing.

Airbus, through these events, communicates with the trade and news media. This communication provides attendees with, de minimis, a sympathetic view. If Airbus had suffered the two crashes, we believe the press would not have attacked the company the same way it has Boeing.

Schonland highlights how such a web of relationships becomes indispensable during a crisis, whether the crisis is self-inflicted or caused by external events:

By not being more open Boeing has helped create a gap between itself and much of the media. … Boeing has lost any control of the [737 MAX disaster] story. Whatever Boeing does provide now is seen as biased and self-serving—there is little goodwill from the media. When [Boeing CEO] Dennis Muilenburg goes on television for the rare interview, he does not come across as well as he might. Why is that? Because everything he says is now filtered through a non-sympathetic, hyper-critical lens.

Boeing needs to invest in the small army of trade and press media that cover the industry—not just a handful of selectees. This small army provides crucial perspective en masse during a crisis and fills the vacuum with educated views and perspective.

Businesses that fail to develop such goodwill or simply lose their way with regard to public relations become vulnerable to condemnation and backlash. This can result in a wide-ranging loss of credibility, as has transpired with Boeing and its leadership.

Idea for Impact: Invest in formal and informal relationships with key external constituents who can help your business—and personal—interests. The Guanxi tradition in Chinese culture has it just about right in placing a huge emphasis on building social capital through relationships. From Wikipedia:

At its most basic, guanxi describes a personal connection between two people in which one is able to prevail upon another to perform a favor or service, or be prevailed upon, that is, one’s standing with another. … Guanxi can also be used to describe a network of contacts, which an individual can call upon when something needs to be done, and through which he or she can exert influence on behalf of another.

Wondering what to read next?

  1. No Boss Likes a Surprise—Good or Bad
  2. Any Crisis Calls for Constant, Candid Communication
  3. Could Limiting Social Media Reduce Your Anxiety About Work?
  4. Leadership is Being Visible at Times of Crises
  5. How to Prevent a Communications Breakdown During Crisis

Filed Under: Effective Communication, Leadership Tagged With: Aviation, Conflict, Getting Along, Leadership, Leadership Lessons, Mindfulness, Networking, Relationships, Skills for Success, Stress, Winning on the Job
