Delta Outage Spotlights Technology Risks

Delta’s computer outage on Jan. 29 was over by midnight, but its effects have extended into the week, not only resulting in 170 cancellations on Sunday, but grounding more than 100 flights on Monday and causing many delays. Adding to the frustration was the fact that the company’s mobile apps were also not working.

This latest incident follows another computer outage for Delta in mid-August, when flights were canceled for two days, leaving thousands of passengers stranded.

Such outages can be costly. A Southwest Airlines outage in July caused more than 2,000 flights to be canceled and cost about $54 million. The August Delta outage, which involved a fire, resulted in the cancellation of 2,300 flights over three days and cost the airline $150 million in lost revenue, according to USA Today.

Jim Corridore, an analyst at CFRA Research, told USA Today on Monday that Delta’s computer outage puts a “spotlight on risks of airline technology infrastructure, much of which is old and patched with differing systems.” He said that airlines build new programming over old software, especially after a merger, when computer languages may differ. Programmers’ assumptions about how software will work are sometimes wrong.

Large companies such as Delta could reduce the frequency of such outages with more testing of their systems, but thorough testing is an expensive proposition.

According to USA Today:

Gil Hecht, CEO of Continuity Software, which tests computer systems for large banks and insurers, compared the construction of complex computer systems to a layer cake, with web servers, database software, storage and possibly interaction with other systems such as government computers that check whether passengers are allowed to fly.

“Testing should be done by every single layer and every single business service that participates in the critical infrastructure, and some of them are simply not under the airline’s control,” Hecht said.

He compared one way of testing to running a car into a tree to see whether the airbags work, which isn’t possible while keeping a computer system working. Instead, testing for a large financial institution or airline must confirm that each layer is configured to work well with all the others, he said.

“In order to do that, critical infrastructure operators must do much more testing, whether it’s manual by humans or by technology or by any means possible,” Hecht said. “Yes, it costs money. Quite a lot. But if more money and more effort will be driven into testing, we will have far less down time and data-loss events.”
