Delta Limping Back to Normalcy

After two days of cancellations caused by a system-wide outage that left thousands of customers stranded, Delta announced today that it will return to normal operations by mid-to-late afternoon. It added a caveat, however, that “a chance of scattered thunderstorms expected in the eastern U.S. may have the potential to slow the recovery.”

Delta said that by late morning on Wednesday it had canceled 255 flights while 1,500 departed. About 800 flights were canceled on Tuesday and there were around 1,000 cancellations on Monday. It also extended its travel waiver and continued to provide hotel vouchers, of which more than 2,300 were issued Tuesday night in Atlanta alone.

“The technology systems that allow airport customer service agents to process check-ins, conduct boarding and dispatch aircraft are functioning normally with the bulk of delays and cancellations coming as a result of flight crews displaced or running up against their maximum allowed duty period following the outage,” Delta said.

The company’s chief operating officer, Gil West, said on Aug. 9:

Monday morning a critical power control module at our Technology Command Center malfunctioned, causing a surge to the transformer and a loss of power. The universal power was stabilized and power was restored quickly. But when this happened, critical systems and network equipment didn’t switch over to backups. Other systems did. And now we’re seeing instability in these systems. For example we’re seeing slowness in a system that airport customer service agents use to process check-ins, conduct boarding and dispatch aircraft. Delta agents today are using the original interface we designed for this system while we continue with our resetting efforts.

Reuters reported:

Like many large airlines, Delta uses its proprietary computer system for its bookings and operations, and the fact that other airlines appeared unaffected by the outage also pointed to the company’s equipment, said independent industry analyst Robert Mann.

Critical computer systems have backups and are tested to ensure high reliability, he said. It was not clear why those systems had not worked to prevent Delta’s problems, he said.

“That suggests a communications component or network component could have failed,” he said.

The airline has not yet detailed the financial impact of the event.

Companies Failing to Use Technology to Fight Fraud

While an increasing number of malicious actors are using technology to perpetrate fraud, the vast majority of companies are not using the technological resources available to fight it. According to KPMG’s new report Global Profiles of the Fraudster, technology significantly enabled 29% of the 110 fraudsters analyzed in North America and 24% of the 750 fraudsters analyzed worldwide. What’s more, 25% of frauds that hinged on the use of technology were detected by accident rather than safeguards or analytics, compared to just 10% spotted by accident in cases where the criminals did not use technology.

Indeed, proactive data analytics was not the primary means of detection in any North American cases and was used to detect only 3% of fraudsters worldwide. In North America, the most common means of detecting fraud were tip-offs and complaints, management review, accidental discovery, suspicious superiors and internal audit.

KPMG found that weak internal controls contributed to 59% of frauds in North America. Companies are failing to focus on strengthening controls, the firm reported, despite the increasing threat of newer types of fraud, such as cyber fraud, as well as continued traditional forms of wrongdoing.

“In addition to ensuring internal controls are thoughtfully designed, companies should deploy effective training and instill a culture of integrity so that controls are properly executed,” said Phillip Ostwalt, partner and Global Investigations Network Leader at KPMG LLP. “Companies should also adopt new controls as their risk profiles change. Ongoing risk assessments can help cost-constrained companies ensure they are properly investing in such controls.”

Who are these fraudsters?

  • 65% are between ages 36 and 55
  • 39% had been employed by the victim organization for over six years, most in operations, finance or the office of the chief executive
  • 42% operate in groups and 52% of collusive frauds involved external parties

Check out the infographic below for more of the study’s findings:

[Infographic: Global Profiles of the Fraudster]

Financial Services IT Overconfident in Breach Detection Skills

Despite the doubling of data breaches in the banking, credit and financial sectors between 2014 and 2015, most IT professionals in financial services are overconfident in their abilities to detect and remediate data breaches. According to a new study by endpoint detection, security and compliance company Tripwire, 60% of these professionals either did not know or had only a general idea of how long it would take to isolate or remove an unauthorized device from the organization’s networks, but 87% said they could do so within minutes or hours.

When it comes to detecting suspicious and risky activity, confidence routinely exceeded capability. While 92% believe vulnerability scanning systems would generate an alert within minutes or hours if an unauthorized device were discovered on their network, for example, 77% said they automatically discover 80% or less of the devices on their networks. Three out of 10 do not detect all attempts to gain unauthorized access to files or network-accessible file shares. When it comes to patching vulnerabilities, 40% said that less than 80% of patches are successfully applied in a typical cycle.

This combination of confidence and lack of comprehension may reflect the fact that many of the protections in place are motivated more by compliance than by security, Tripwire asserts.

“Compliance and security are not the same thing,” said Tim Erlin, director of IT security and risk strategy for Tripwire. “While many of these best practices are mandated by compliance standards, they are often implemented in a ‘check-the-box’ fashion. Addressing compliance alone may keep the auditor at bay, but it can also leave gaps that can allow criminals to gain a foothold in an organization.”

Check out more of the study’s findings below:

[Infographic: Financial services cyber risk management]

Automation: The Key to More Effective Cyberrisk Management

In a perfect cybersecurity world, people would only have access to the data they need, and only when they need it. However, IT budgets are tighter than ever and, in most organizations, manually updating new and existing employees’ access levels on a consistent basis is a time-consuming productivity-killer. As a result, there’s a good chance an employee may accidentally have access to files they should not. As one can imagine, security that is loosely managed across the enterprise is a breeding ground for malware.

The velocity of cyberattacks has accelerated as well. It is easier than ever for cyber criminals to access exploits, malware, phishing tools, and other resources to automate the creation and execution of an attack. Digitization, Internet connectivity, and smart device growth are creating more vectors for attackers to gain an entry point into an organization’s network, and this trend only gets worse as you think about the Internet of Things, which could have a concrete impact on machines from production equipment to planes and cars.

One way IT departments can help mitigate the cyberrisk of employee access overload is by automating security policies and processes such as the monitoring, detection and remediation of threats. In the past, organizations have spent a lot on prevention technologies: disparate point solutions such as anti-virus software and firewalls that try to act before an attack occurs. Prevention is important but not 100% effective. And how could technology used for prevention stop a cyberattacker that has already infiltrated the network? If prevention were the end-all, be-all in security tools, we wouldn’t be reading about cyberattacks on a daily basis. As more companies realize this, spending is shifting toward detection and response.

To help determine cyberrisk—or better yet, safely manage your cyberrisk—you must look at the threat (which is ever growing due to persistent attackers and increasingly advanced techniques), vulnerability (how open your data is to cyberattacks), and consequence (the amount of time threats are doing damage in your network). Or, more simply put: risk = threat × vulnerability × consequence.
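As a rough illustration, the formula can be turned into a simple relative scoring sketch. The 1–10 scales and example values below are hypothetical assumptions for illustration only, not figures from the article:

```python
# Hypothetical scoring sketch for risk = threat x vulnerability x consequence.
# All scales (1-10) and example ratings are illustrative assumptions.

def risk_score(threat: float, vulnerability: float, consequence: float) -> float:
    """Multiply the three factors, each rated on a 1-10 scale."""
    return threat * vulnerability * consequence

# Threat is largely outside our control, so hold it fixed and compare two
# postures that differ in vulnerability (patching, access control) and
# consequence (how long threats dwell before detection and remediation).
before = risk_score(threat=8, vulnerability=7, consequence=6)  # 336
after = risk_score(threat=8, vulnerability=3, consequence=2)   # 48

print(before, after)
```

The point the multiplication makes is that improving either remaining variable shrinks risk even when the threat level stays constant.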

To manage your cyberrisk, you need to optimize at least one of the aforementioned variables. Unfortunately, threat is the one variable that cannot be optimized because hackers will never stop attacking and are creating malware at an escalating rate. In fact, a G DATA study showed that 6 million new malware strains were found by researchers in 2014—almost double the number of new strains found the previous year. Instead, what organizations can focus on is investing in the right solutions that target the remaining two variables: vulnerability and consequence.

  • Step One: Organizations must make sure they know their environments well (such as endpoints, network, and access points) and know where their sensitive information lives. It’s always a good idea to rank systems and information in terms of criticality, value and importance to the business.
  • Step Two: Organizations must gain increased visibility into potential threat activity occurring in the environment. As is often said, there are two types of companies: those that have been attacked and those that have been attacked and don’t know it. A way to increase visibility is through the deployment of behavior-based technology on the network, like sandboxes. Organizations are now shifting their focus to the endpoint. Today’s attacks require endpoint and network visibility, including correlation of this activity. The challenge with visibility is that it can be overwhelming.
  • Step Three: There needs to be some process or mechanism to determine which alerts matter and which ones should be prioritized. In order to gain increased visibility into environments and detect today’s threats, organizations clearly need to deploy more contemporary detection solutions and advanced threat analytics.
  • Step Four: Invest more in response and shift the mindset to continuous response. If attacks are continuous and we are continuously monitoring, then the next logical step is to respond continuously. Historically, response has been episodic or event-driven (“I’ve been attacked – Do something!”). This mindset needs to shift to continuous response (“I’m getting attacked all the time – Do something!”).  A key ingredient to enable continuous incident response will be the increasing use of automation. Why? Automation is required to keep up with attackers that are leveraging automation to attack. It’s also required to address a key challenge that large and small companies face: the significant cybersecurity skills shortage.
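The four steps above amount to a continuous detect-prioritize-respond loop. The following is a minimal sketch of that loop, assuming hypothetical alert sources and remediation actions; every name and data structure here is a stand-in for real detection tooling, not a production design:

```python
# Minimal sketch of a continuous detect-prioritize-respond loop.
# All alert kinds, hosts and actions are hypothetical stand-ins.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    kind: str         # e.g. "malware", "unauthorized_access"
    criticality: int  # business value of the affected system (Step One)

def prioritize(alerts):
    """Step Three: surface the alerts that matter most first."""
    return sorted(alerts, key=lambda a: a.criticality, reverse=True)

def respond(alert):
    """Step Four: automated, continuous response instead of episodic."""
    if alert.kind == "malware":
        return f"isolate {alert.host} and quarantine payload"
    return f"flag {alert.host} for analyst review"

# One pass of the loop over alerts gathered via Step Two's visibility.
alerts = [Alert("hr-laptop-12", "unauthorized_access", 4),
          Alert("payments-db", "malware", 9)]
for alert in prioritize(alerts):
    print(respond(alert))
```

In practice this loop would run continuously against live endpoint and network telemetry; the sketch only shows how ranking by business criticality keeps automated response focused on the alerts that matter.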

Advanced threat analytics should be important to any organization that takes its security posture seriously. The majority of threats faced today are getting more advanced by the minute. If an organization relies solely on legacy, signature-based detection, its defenses will be easily breached. It’s important for teams to understand that an organization’s cyber defense and response capabilities must constantly evolve to match the evolving threat landscape. This includes both automatic detection and remediation. Automatic remediation dramatically reduces the time that malware can exist on a network and also reduces the amount of time spent investigating the issue at hand. Automated security defenses give IT teams a forensic view of every packet that moves through the network, allowing them to spot anomalies and threats before they have a chance to wreak havoc. And since these tools work at machine speed, they can deal with a high volume of threats without human intervention, taking some of the load off overburdened security teams and ultimately freeing them to act decisively and quickly, before network damage is done.