A cyberattack targeting the U.S. power grid would have widespread economic implications, resulting in insurance claims of between $21.4 billion and $71.1 billion in a worst-case scenario, according to a report by Lloyd’s.

Lloyd’s and the University of Cambridge’s Centre for Risk Studies recently released “Business Blackout,” which examines the insurance implications of a major cyberattack using the U.S. power grid as an example. In the scenario outlined, malware is used to infect control rooms for generating electricity in areas of the Northeastern U.S. The malware goes undetected and locates 50 generators that it can control, forcing them to overload and burn out. The scenario, described as “improbable but technologically possible,” leaves 15 states in darkness, meaning that 93 million people are without power.

Economic impacts include direct damage to assets and infrastructure, decline in sales revenue to electricity supply companies, loss of sales revenue for businesses and disruption to the supply chain. The total impact to the U.S. economy is estimated at $243 billion, rising to more than $1 trillion in the most extreme version of the scenario.

Claimant types fell into six categories:

Power generation companies

• Property damage to their generators.

• Business interruption from being unable to sell electricity as a result of property damage.

• Incident response costs and fines from regulators for failing to provide power.

Defendant companies

• Companies sued by power generation businesses to recover a proportion of losses incurred under defendants’ liability insurance.

Companies that lose power – businesses that suffer losses as a result of the blackout.

• Property losses (principally to perishable cold store contents).

• Business interruption from power loss (with suppliers extension).

• Liability for failing to protect workforces or for pollution caused by the loss of power.

Companies indirectly affected – a separate category of companies that are outside the power outage but are impacted by supply chain disruption emanating from the blackout region.

• Contingent business interruption and critical vendor coverage.

• Share price devaluation as a result of having inadequate contingency plans may generate claims under directors’ and officers’ liability insurance.


Homeowners

• Property damage, principally resulting from fridge and freezer contents defrosting, covered by contents insurance.


Specialty

• Claims possible under various specialty covers, most importantly event cancellation.

Other key findings of the report include:

• Responding to these challenges will require innovation by insurers. The pace of innovation will likely be linked to the rate at which some of the uncertainties revealed in this report can be reduced.

• Cyberattack represents a peril that could trigger losses across multiple sectors of the economy.

• A key requirement for an insurance response to cyber risks will be to enhance the quality of data available and to continue the development of probabilistic modelling.

• The sharing of cyberattack data is a complex issue, but it could be an important element for enabling the insurance solutions required for this key emerging risk.


As cyber threats emerge and evolve each day, they pose challenges for organizations of all sizes, in all industries. Even though most industries are investing heavily in cybersecurity, many companies are still playing catch-up, discovering breaches days, months, and even years after they occur. The 2015 Verizon Data Breach Investigations Report (DBIR) shows that this “detection deficit” is still widening: attackers compromise networks in significantly less time than it takes organizations to discover the breach.

The risk posed by third parties complicates the issue further. How can an organization allocate time and resources to vetting its partners’ security when it is struggling to keep up with its own? Over the years, audits, questionnaires, and penetration tests have helped to assess third-party risk. However, in today’s ever-changing cyber landscape, these tools alone do not offer an up-to-date, objective view. While continuous monitoring solutions can improve detection and remediation times for all organizations, the retail, healthcare, and utilities industries stand to benefit most from greater adoption.


Some of the most notable data breaches have occurred in the retail sector. Recently, eBay asked its 145 million customers to change passwords after names, e-mail addresses, physical addresses, phone numbers and dates of birth were stolen. Retailers frequently take on new vendors and suppliers over time. Moreover, they rely on point-of-sale (PoS) systems that are often susceptible to new types of malware. Between managing a large number of vendors and keeping up with new vulnerabilities, retailers often rank low in detection times. A recent study by Arbor Networks and the Ponemon Institute found that retailers take an average of 197 days to detect advanced threats on their networks.

Retail companies with tight budgets may not be able to commit the same resources to security as the finance sector. Yet implementing a continuous monitoring solution will enable companies to better watch their own networks and stay on top of threats in their vendor ecosystem in a more cost-effective manner, and it will help retailers reduce detection and remediation times.


Healthcare providers have recently dominated headlines with large data breaches. In January, Premera disclosed that it lost information for roughly 11 million of its customers. A month earlier, Anthem Inc. said information on close to 70 million current and former employees and customers was stolen. Both breaches exposed personally identifiable information (PII), including Social Security numbers and dates of birth, and possibly medical information as well.

In general, healthcare providers have an immense number of devices connected to their networks. Following widely known breaches in this sector, many criticized organizations for failing to encrypt files containing sensitive customer information. While stronger encryption would certainly help, these companies must also ensure their networks are secure in the first place. Weeks before the Premera breach, federal auditors told the organization that some of its network security practices were inadequate and vulnerable to attack. Had Premera been monitoring its networks with greater frequency, it might have discovered these vulnerabilities earlier, on its own, and had significantly more time to patch them and prevent a breach.


Companies in the utilities sector are challenged with protecting critical infrastructure. These companies also hold a large amount of customer data, making them big targets for hackers looking to destroy or exfiltrate data. In 2014, nearly 70% of companies in the utilities sector said they had been breached, and many also reported attempts to completely delete or destroy their data.

Breaches of utility companies are often not disclosed, so the full scope of vulnerable companies in this industry is not fully understood. However, a recent study found that 52% of companies in the utilities industry had significant botnet infections. Greater monitoring will be necessary for companies in this sector to decrease the breadth of infection. Without it, our critical infrastructure and personal information remain vulnerable.

Narrowing the gap

For this “detection deficit” to narrow, companies need to monitor their own networks with greater frequency. As businesses have increasingly outsourced their operations over the years, they will also need to monitor third parties, and even fourth parties, to manage risk.

A recent survey found that 46% of companies that experienced a data breach took more than four months to detect a problem on their networks. Perhaps even more concerning, 70% of these breaches were detected by a third party. Continuous monitoring solutions will enable organizations to detect intrusions as they occur, so IT teams can spend more time and resources on remediating threats rather than on finding them in the first place.

Nobody wants to experience the embarrassment of being told over the phone that they’ve been breached, or worse, of reading about it in the news. But as more organizations adopt continuous monitoring solutions, this experience should become far less frequent.


With the highly publicized rise in cyberbreaches, we have seen hackers break into systems for a variety of reasons: criminal enterprises simply stealing money, thieves gathering Social Security or credit card numbers to sell on the black market, state-sponsored groups taking confidential information, and malicious actors taking passwords or personal data to use to hit more valuable targets. Now, another group of financially motivated hackers has emerged with a different agenda that may have even riskier implications for businesses.

According to a new report from computer security company Symantec, a group it calls Morpho has attacked multiple multibillion-dollar companies across an array of industries in pursuit of one thing: intellectual property. While it is not entirely clear what they do with this information, they may aim to sell it to competitors or nation states, the firm reports. “The group may be operating as ‘hackers for hire,’ targeting corporations on request,” Symantec reported. “Alternatively, it may select its own targets and either sell stolen information to the highest bidder or use it for insider trading purposes.”

Victimized businesses have spanned the Internet, software, pharmaceutical, legal and commodities fields, and the researchers believe the Morpho group is the same one that breached Facebook, Twitter, Apple and Microsoft in 2013.

Symantec does not believe the group is affiliated with or acting on behalf of any particular country, as it has attacked businesses without regard for the nationality of its targets. But, as the New York Times reported, “the researchers said there were clues that the hackers might be English speakers — their malicious code is written in fluent English — and they named their encryption keys after memes in American pop culture and gaming. Researchers also said the attackers worked during United States working hours, though they conceded that might just be because that is when their targets are most active.”

The researchers have tied Morpho to attacks against 49 different organizations in more than 20 countries, deploying custom hacking tools that are able to break into both Windows and Apple computers, suggesting it has plenty of resources and expertise. The group has been active since at least March 2012, the report said, and their attacks have not only continued to the present day, but have increased in number. “Over time, a picture has emerged of a cybercrime gang systematically targeting large corporations in order to steal confidential data,” Symantec said.

Morpho hacking victims by industry

Morpho hackers have also been exceptionally careful, from preliminary reconnaissance to cleaning up evidence. In some cases, to help best determine the valuable trade secrets they would steal, the group intercepted company emails as well as business databases containing legal and policy documents, financial records, product descriptions and training documents. In one case, they were able to compromise a physical security system that monitors employee and visitor movements in corporate buildings. After getting the data they wanted, they scrubbed their tracks, even making sure the servers they used to orchestrate the attacks were rented using the anonymous digital currency Bitcoin.

In short, the hackers are really good, according to Vikram Thakur, a senior manager of the attack investigations team at Symantec. “Who they are? We don’t know. They are virtually impossible to track,” he said.


There is growing concern that corporate boards and senior executives are not prepared to govern their organizations’ exposure to cyberrisk. While true to some degree, executive management can learn to identify and focus on the strategic and systemic sources of cyberrisk, rather than becoming distracted by complex technology-related symptoms, by examining the organization’s ability to make well-informed decisions about cyberrisk and to reliably execute those decisions.

Making well-informed cyberrisk decisions

To gain greater confidence regarding cyberrisk decision-making, executives should ensure that their organizations are functioning well in two areas: visibility into the cyber risk landscape, and risk analysis accuracy.

1. “How good is our cyberrisk visibility?”

You can’t manage what you haven’t identified. Many companies focus so strongly on supporting rapidly evolving business objectives that they lose sight of closely managing the technology changes that result from those objectives. Consequently, it is common to find that organizations have an incomplete and out-of-date understanding of:

  • Their company’s network connectivity to other companies and the Internet
  • Which systems, applications, and technologies support critical business functions
  • Where sensitive data resides, both inside and outside their company’s network

Without this foundational information, an organization can’t realistically claim to understand how much cyberrisk it has or where its cyber risk priorities need to be.
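That foundational information lends itself to a simple, auditable inventory. As a minimal sketch (the asset names and fields below are hypothetical, not drawn from any cited report), an organization could record each system’s criticality, data sensitivity, and external connectivity, and flag entries that have never been reviewed or have gone stale:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Asset:
    """One entry in a hypothetical asset inventory (all fields illustrative)."""
    name: str
    supports_critical_function: bool = False
    holds_sensitive_data: bool = False
    external_connections: List[str] = field(default_factory=list)
    last_reviewed_days_ago: Optional[int] = None  # None = never reviewed

def visibility_gaps(inventory, stale_after_days=90):
    """Flag assets whose records are missing or stale -- the incomplete,
    out-of-date understanding the text describes."""
    gaps = []
    for asset in inventory:
        if asset.last_reviewed_days_ago is None:
            gaps.append((asset.name, "never reviewed"))
        elif asset.last_reviewed_days_ago > stale_after_days:
            gaps.append((asset.name, "review stale"))
    return gaps

inventory = [
    Asset("billing-db", supports_critical_function=True,
          holds_sensitive_data=True, last_reviewed_days_ago=200),
    Asset("partner-vpn", external_connections=["vendor-x"]),
    Asset("intranet-wiki", last_reviewed_days_ago=30),
]
print(visibility_gaps(inventory))
# -> [('billing-db', 'review stale'), ('partner-vpn', 'never reviewed')]
```

The point of even a toy register like this is that visibility questions become answerable queries rather than guesswork.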

2. “How accurately are we analyzing cyberrisk?”

It is common to find that over 70% of the “high-risk” issues brought before management do not, in fact, represent high risk. In some organizations, more than 90% of “high-risk” issues are mislabeled. When it comes to analyzing cyberrisk, several foundational challenges exist in many organizations:


Inconsistent definitions

How anxious would you be to ride on a space shuttle mission if you knew that the engineers and scientists who planned the mission and designed the spacecraft couldn’t agree on definitions for mass, weight, and velocity?

Odds are good that if you ask six people within your risk management organization to define “risk” or provide examples of “risks” you’ll get several different, perhaps very different, answers. Given this, it isn’t hard to imagine that risk analysis quality will be inconsistent.

Broken models

In the cyberrisk industry today, there is heavy reliance on the informal mental models of personnel. As a result, the focus of a “risk rating” is very often biased toward a control deficiency rather than an explicit consideration of the loss scenario(s) to which the control may be relevant. Without applying a probabilistic lens to risk analysis, it is much more difficult to differentiate and prioritize effectively among the myriad loss events that could possibly happen.
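One way to apply that probabilistic lens is to model each loss scenario with an event-frequency and a per-event magnitude distribution and compare annualized expected losses, in the spirit of quantitative approaches such as FAIR. A minimal Monte Carlo sketch (every frequency and dollar range below is an illustrative assumption, not data from the text):

```python
import random

def annualized_loss(freq_per_year, loss_low, loss_high,
                    trials=100_000, seed=42):
    """Monte Carlo estimate of expected annual loss for one scenario.

    freq_per_year: average loss events per year (a Poisson count,
    simulated here via exponential inter-arrival times).
    loss_low/loss_high: per-event loss range, sampled uniformly.
    All parameters are illustrative assumptions.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Count events arriving within one simulated year.
        events = 0
        t = rng.expovariate(freq_per_year)
        while t < 1.0:
            events += 1
            t += rng.expovariate(freq_per_year)
        total += sum(rng.uniform(loss_low, loss_high) for _ in range(events))
    return total / trials

# Two hypothetical scenarios: a frequent low-impact event vs. a rare
# high-impact one. The expected-loss comparison, not a control
# deficiency alone, drives the priority.
frequent_small = annualized_loss(4.0, 10_000, 50_000)    # ~4x/yr, small
rare_large = annualized_loss(0.1, 1_000_000, 5_000_000)  # ~1/decade, large
print(f"frequent/small: ${frequent_small:,.0f}/yr")
print(f"rare/large:     ${rare_large:,.0f}/yr")
```

Even this toy comparison shows why a probabilistic view matters: the rare scenario, roughly $300,000 of annualized exposure here, outweighs the frequent nuisance at roughly $120,000, a distinction a deficiency-based rating cannot make.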

Another challenge is that most technologies that identify weaknesses in security generate significantly inflated risk ratings. The outcome is wasted resources, unwarranted angst, and an inability to identify and resolve the issues that truly deserve immediate attention.

Although risk management programs within some industries have begun to examine and manage the risk associated with poor models, this focus is often limited to models that do quantitative financial analysis. This leaves unexamined:

  • The mental models of risk professionals and whether their off-the-cuff risk estimates are accurate
  • Home-grown qualitative and ordinal models
  • Models embedded within cyberrisk tools

Yet these models, with their implicit assumptions and weaknesses, are responsible for driving critical decisions about how organizations manage their cyber risk landscapes.

Reliable execution

Although risk management expectations and objectives are set through decision-making, execution is the deciding factor on whether the organization is able to consistently realize the intended outcomes.

3. “How well do personnel understand what’s expected of them?”

In one organization, the information security policies were written at a grade 21 reading level. Most organizations today have some form of information security policy and related standards, and many even require personnel to read and acknowledge those policies annually. Very often, however, the policies have been written by consultants or subject matter experts using verbiage that is complex and/or ambiguous. As a result, personnel may dutifully read and acknowledge the policies without gaining a clear understanding of what is actually expected of them.

4. “How capable are personnel of meeting expectations?”

Things change. When budget belts get tightened organizations often cut training budgets. Given the rapid pace of change in the cyberrisk landscape, this can create serious skills gaps for cyberrisk professionals and technologists.

Another challenge in this regard has to do with outdated technology. Many organizations hang on to technologies well beyond the point where they can be maintained in a secure state. As a result, “policy exceptions” for these technologies become routinely accepted, which limits the ability of the organization to achieve or maintain its own security objectives.

5. “How well are personnel prioritizing cyberrisk?”

Which is more important: revenue, budgets, deadlines, or cyberrisk?

Root cause analyses performed on cyberrisk deficiencies have found that personnel routinely choose not to comply with cyberrisk policies because they believe revenue, budgets, and/or deadlines are more important. This is influenced in part (perhaps a significant part) by the challenges noted above regarding risk-rating inaccuracies. It isn’t unusual to find that overestimated risk ratings create a “boy who cried wolf” syndrome within organizations. The result is that organizations don’t consistently or meaningfully incentivize executives to achieve cyberrisk management objectives because there is tacit recognition that much of what is claimed to be high-risk is not. Another factor is that revenue, cost, and deadlines are measurable in the near term, whereas many high-impact risk scenarios are less likely to materialize before they become “someone else’s problem.”

The bottom line is that prudent risk-taking is only likely to occur if executives are provided accurate risk information and if they are appropriately incentivized based on the level of risk they subject the organization to.

At the end of the day…

Effectively governing cyberrisk is within the grasp of senior executives who deal with complex and dynamic challenges every day. By examining their organization’s ability to make well-informed decisions and to execute reliably, senior executives can more effectively identify and address the strategic and systemic sources of risk within their organizations.
