Measuring Risk: Why We Need Standards for Continuous Monitoring & Assessment

Continuous monitoring on its own is great for detecting and remediating security events that may lead to breaches. But when it comes to measuring and comparing the effectiveness of our security programs, there are many ways that monitoring alone falls short. Most significantly, it does not allow us to answer the question of whether or not we are more or less secure than we were yesterday, last week, or last year.

This is a question that we have all grappled with in the security community and, more recently, in the boardroom. No matter how many new tools you install, settings you adjust, or events you remediate, there are few ways to objectively determine your security posture and that of your vendors and third parties. How do you know whether the changes and decisions you have made have positively impacted your security posture if there is no way to measure your effectiveness over time?

In recent years, solutions have emerged in the market that unlock new potential from continuous monitoring, enabling organizations not only to identify and remediate security issues but also to answer questions about security performance and effectiveness. Through the analysis of historical data, performance rating solutions allow organizations to quickly and objectively compare their effectiveness over time as well as against their industry and peers. The ratings are generated through the continuous collection of security data, including events, user behaviors, and configurations, and are updated on a daily basis. Higher ratings indicate better security performance, and users receive alerts when ratings change significantly. The ease with which these ratings can be accessed means organizations can leverage performance ratings in a number of ways that go far beyond threat detection.
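As an illustration only, the rate-and-alert loop described above can be sketched in a few lines. The signal categories, weights, and alert threshold here are hypothetical stand-ins; commercial rating services use proprietary models over far more signal types.

```python
# Hypothetical signal categories and weights -- real rating services
# use proprietary scoring models over many more data types.
WEIGHTS = {"events": 0.5, "user_behavior": 0.3, "configuration": 0.2}

def daily_rating(signals):
    """Combine per-category scores (each 0-100) into one 0-100 rating."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def significant_change(previous, current, threshold=10.0):
    """Flag a day-over-day rating swing large enough to alert on."""
    return abs(current - previous) >= threshold

# Under this sketch, a drop from 81 to 65 would trigger an alert,
# while a drift from 81 to 78 would not.
today = daily_rating({"events": 80, "user_behavior": 90, "configuration": 70})
```

Higher numbers mean better performance, matching the convention above; the 10-point alert threshold is an arbitrary placeholder for whatever a given rating provider considers "significant."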

For example, using ratings in vendor selection can help organizations choose and negotiate with secure partners from the beginning of business relationships. They have access to information that can show how performance over time has varied, as well as if there have been prior security incidents or breaches worthy of further investigation. Using ratings for vendor management encourages all parties to be proactive and transparent in their security practices, thus helping to improve overall performance.

There are other third-party transactions where continuous security performance ratings can help, such as underwriting and negotiating cyber insurance premiums and making strategic M&A decisions. Performance ratings provide context that is lacking from other assessment methods, as ratings are based on evidence of security outcomes, and the criteria for both assessment and rating are consistent across networks.

However, the value in this metric isn’t simply in providing a number; the value is in its potential to become a standard that organizations can objectively benchmark themselves and their third parties against. Many organizations have their own methodologies to assess security risk, relying on auditors, compliance certificates, questionnaires and multiple frameworks for qualitatively, and in some ways quantitatively, measuring their risk. But if we’re all using different frameworks and methodologies, the ability to compare and contrast is lost, and objectivity comes into question. The lack of a standard in this area has led to ambiguity when it comes to defining what “good security performance” actually looks like.

Of late, legislators and regulators have been pushing organizations to show that they are monitoring security risks across the business ecosystem and taking responsibility for the performance of their vendors as well. There has also been additional pressure placed on board members and executives to demonstrate awareness and oversight of security performance at all times.

HIPAA, PCI and OCC guidelines have all added language around vendor selection and management, requiring more frequent assessments and, in some cases, assigning liability if a vendor falls out of compliance. One thing these updates don’t include is specific guidance on how and what to assess in network security ecosystems. This leaves it up to individuals to interpret the guidance, which may result in inconsistent (and often biased) assessments.

If regulators and lawmakers want to simplify risk management, they could make great strides by adopting and enforcing a set of measurement standards that span industries and bring transparency to security practices in all organizations. To overcome the lack of awareness and bias in security performance assessments, continuous performance monitoring provides a significant advantage because it is outcome based rather than control based. Because of this, continuous assessment methodologies can answer the age-old questions: How am I doing compared to my industry and my peers? Am I safer now than I was before?

Looking Beyond Compliance When Assessing Security

For a long time now, security evangelists have railed against the dangers of relying only on checkbox compliance. They warn that if you focus too much on the list of requirements, you’re bound to miss risks that may not actually be covered in rules and regulations. That’s why organizations need to start evaluating effectiveness alongside these audits, in order to get a more holistic view into the systems they are assessing.

“Organizations are so focused on meeting the letter of the regulations and mandates that they lose sight of the risks that the individual controls in the mandates are intended to mitigate,” explained security consultant Brian Musthaler in a recent blog post.

It’s a theme revisited in a ComputerWorld article, which cited a survey showing that just 17% of organizations have what they consider a mature risk management program—i.e., one that goes beyond ticking off items on an audit list. The maturation to risk-based security, the article emphasizes, is “about a not so insignificant shift in objectives—from compliance to making systems more resilient to attack.”

The principle holds true not just when evaluating and shoring up in-house infrastructure. It also applies to how enterprises evaluate partners. As security organizations seek to find a sane way to measure the IT security stance of partners and vendors, the most common first step is to do it by following a requirements checklist or questionnaire, or by asking for an auditor’s attestation of compliance with some kind of standard. Assessment guidance from standards like the Statement on Standards for Attestation Engagements (SSAE) No. 16, ISO 27001, and FedRAMP all come to mind here.

These standards serve as compendiums of best practices, and measuring against them can give good indicators of where to focus resources; they are a good place to start your evaluation. The challenge is that while necessary, these methods alone are not sufficient for assessing security risk. A company may be compliant with all the appropriate regulations and have excellent security policies, yet be completely ineffective in the day-to-day implementation of those policies; rarely does a questionnaire ask how many compromised servers a provider is currently running on its network. Also, no matter how complete a checklist or audit is, its results are only a point-in-time reflection and cannot measure the dynamic nature of the risks it is meant to assess for the duration of the business partnership. Even if a penetration test or vulnerability scan is included as part of a vendor assessment, it cannot reveal issues that may appear the following week.

Complementing an audit with a continuous evaluation of security effectiveness allows organizations to augment their view of the security risks across the extended enterprise. In addition to gaining visibility into the weaknesses of a network, a data-driven, evidence-based assessment allows organizations to proactively mitigate new risks as they emerge and to identify issues that a regulatory audit was not designed to catch. By taking these steps, organizations can move toward a mature, risk-based security model and away from the simpler checkbox mentality.