Measuring Risk: Why We Need Standards for Continuous Monitoring & Assessment

Continuous monitoring on its own is great for detecting and remediating security events that may lead to breaches. But when it comes to measuring and comparing the effectiveness of our security programs, monitoring alone falls short in many ways. Most significantly, it does not allow us to answer the question of whether or not we are more or less secure than we were yesterday, last week or last year.

This is a question that we all have grappled with in the security community, and more recently, in the boardroom. No matter how many new tools you install, settings you adjust, or events you remediate, there are few ways to objectively determine your security posture and that of your vendors and third parties. How do you know whether the changes and decisions you have made have positively impacted your security posture if there is no way to measure your effectiveness over time?

In recent years, solutions have emerged in the market that unlock new potential in continuous monitoring, enabling organizations not only to identify and remediate security issues, but also to answer questions about security performance and effectiveness. Through the analysis of historical data, performance rating solutions allow organizations to quickly and objectively compare their effectiveness over time as well as against their industry and peers. The ratings are generated through the continuous collection of security data, including events, user behaviors and configurations, and are updated on a daily basis. Higher ratings indicate better security performance, and users receive alerts when ratings change significantly. The ease with which these ratings can be accessed means organizations can leverage performance ratings in a number of ways that go far beyond threat detection.
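The mechanics described above — fold the day's observed security data into a score, then alert when the score moves sharply — can be sketched in a few lines. This is a toy illustration, not any vendor's actual model: the scale, weights, and function names below are all hypothetical assumptions.

```python
from statistics import mean

def daily_rating(event_severities, base=900, penalty_scale=200):
    """Map one day's observed events to a rating on a higher-is-better scale.

    event_severities: illustrative scores per event (0.0 benign .. 1.0 severe).
    A quiet day stays near the base; more or worse events pull the rating down.
    """
    if not event_severities:
        return base
    penalty = penalty_scale * mean(event_severities)
    return max(250, round(base - penalty))

def significant_change(prev, curr, threshold=40):
    """Flag rating moves large enough to warrant an alert."""
    return abs(curr - prev) >= threshold

yesterday = daily_rating([0.1, 0.2])   # a couple of minor events -> 870
today = daily_rating([0.9, 0.8, 0.7])  # several severe events -> 740

if significant_change(yesterday, today):
    print(f"ALERT: rating moved from {yesterday} to {today}")
```

Real rating services weight many more signal types and normalize for network size; the point here is only the shape of the pipeline: continuous collection, a daily score, and alerting on significant change.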

For example, using ratings in vendor selection can help organizations choose and negotiate with secure partners from the beginning of business relationships. Organizations gain access to information showing how a vendor's performance has varied over time, as well as whether there have been prior security incidents or breaches worthy of further investigation. Using ratings for vendor management encourages all parties to be proactive and transparent in their security practices, thus helping to improve overall performance.

There are other third party transactions where continuous security performance ratings can help, such as underwriting and negotiating cyber insurance premiums as well as making strategic M&A decisions. Performance ratings provide context that is lacking from other assessment methods, as ratings are based on evidence of security outcomes and the criteria for both assessment and rating are consistent across networks.

However, the value in this metric isn’t simply in providing a number; the value is in its potential to become a standard that organizations can objectively benchmark themselves and their third parties against. Many organizations have their own methodologies to assess security risk, relying on auditors, compliance certificates, questionnaires and multiple frameworks for qualitatively, and in some ways quantitatively, measuring their risk. But if we’re all using different frameworks and methodologies, the ability to compare and contrast is lost, and objectivity comes into question. The lack of a standard in this area has led to ambiguity when it comes to defining what “good security performance” actually looks like.

Of late, legislators and regulators have been pushing organizations to show that they are monitoring security risks across the business ecosystem and taking responsibility for the performance of their vendors as well. There has also been additional pressure placed on board members and executives to demonstrate awareness and oversight of security performance at all times.

HIPAA, PCI and OCC guidelines have all added language around vendor selection and management, requiring more frequent assessments and, in some cases, assigning liability if a vendor falls out of compliance. One thing these updates don’t include is specific guidelines for how and what to assess in network security ecosystems. This leaves each organization to interpret the guidance on its own, which may result in inconsistent (and often biased) assessments.

If regulators and lawmakers want to simplify risk management, they could make great strides by adopting and enforcing a set of measurement standards that could span industries and bring transparency to security practices in all organizations. To overcome the lack of awareness and bias in security performance assessments, continuous performance monitoring provides a significant advantage because it is outcome based rather than control based. Because of this, continuous assessment methodologies can answer the age-old questions: How am I doing compared to my industry and my peers? Am I safer now than I was before?

Is Outside-In the “Next Gen” of Continuous Monitoring?

In late 2002, the U.S. Government enacted a new law designed to hold each federal agency accountable for developing, documenting, and implementing an agency-wide information security program, including for its contractors. The Federal Information Security Management Act (FISMA) was one of the first information security laws to require agencies to perform continuous assessments and develop procedures for detecting, reporting, and responding to security incidents.

With limited technological resources available for monitoring and assessing performance over time, however, agencies struggled to adhere to the law’s goals and intent. Ironically, although FISMA’s goal was to improve oversight of security performance, early implementation resulted in annual reviews of document-based practices and policies. Large amounts of money were spent bringing in external audit firms to perform these assessments, producing more paper-based reports that, although useful for examining a wide set of criteria, failed to verify the effectiveness of security controls, focusing instead on their existence.

John Streufert, a leading advocate of performance monitoring at the State Department and later at DHS, estimated that by 2009, more than $440 million per year was being spent on these paper-based assessments, with findings and recommendations becoming out of date before they could be implemented. Clearly, this risk assessment methodology was not yielding the outcomes the authors had in mind, and in time agencies began to look for solutions that could actually monitor their networks and provide real-time results.

Thanks to efforts by Streufert and others, it wasn’t long before “continuous monitoring” solutions existed. But, as with all breakthrough technologies, early attempts at continuous monitoring were limited by high costs, difficult implementations and a lack of staffing resources. As continuous monitoring solutions made it into IT security budgets, organizations and agencies were challenged to make optimal use of tools that required tuning and constant maintenance to show value. False positives and missed signals led many IT teams to feel like they were drinking from a fire hose of data, and in many cases the value of continuous monitoring was lost.

However, solutions today offer a number of benefits, including easy operationalization, lower costs and reduced resource requirements. Many options, such as outside-in performance rating solutions, require no hardware or software installation and have been shown to produce immediate results. These tools continuously analyze vast amounts of external data on security behaviors and generate daily ratings for the network being monitored, with alerts and detailed analytics available to identify and remediate security issues. The ratings are objective measures of security performance, with higher ratings indicating a stronger security posture.

Used in conjunction with other assessment methods, ratings give organizations a more comprehensive view of security posture, especially as they provide ongoing visibility over time instead of a point-in-time result. The fidelity of “outside-in” assessments compares favorably with the results of manual questionnaires and assessments, because outside-in solutions eliminate some of the bias and confusion that may be seen in personnel responses. Additionally, outside-in performance monitoring can be used to quickly and easily verify the effectiveness of controls, not just the existence of policies and procedures that may or may not be properly implemented.
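The advantage of ongoing visibility over a point-in-time result can be made concrete: with a time series of daily ratings, "am I safer now than before?" becomes a comparison of the recent window against the prior one. The sketch below is a toy illustration under assumed data; the window size and scale are arbitrary choices, not any product's methodology.

```python
from statistics import mean

def trend(ratings, window=30):
    """Compare the recent rolling average against the prior window.

    ratings: chronological daily ratings, oldest first, on an
    illustrative higher-is-better scale.
    Returns the change in the windowed average; positive means the
    posture is improving, negative means it is declining.
    """
    if len(ratings) < 2 * window:
        raise ValueError("need at least two full windows of history")
    recent = mean(ratings[-window:])
    prior = mean(ratings[-2 * window : -window])
    return recent - prior

# Toy series: a network whose rating improves by one point per day.
history = [700 + day for day in range(60)]
delta = trend(history, window=30)
print(f"30-day average changed by {delta:+.1f} points")
```

A single questionnaire answered in January can say nothing about this trend; a daily rating series answers it in one subtraction, which is the practical meaning of "ongoing visibility" here.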

These changes have made continuous performance monitoring and security ratings more appealing to organizations across the commercial and government space. Organizations have learned that real-time, continuous performance monitoring can allow them to immediately identify and respond to issues and possibly avoid truly catastrophic events, as research has shown a strong correlation between performance ratings and significant breach events. Furthermore, as it becomes easier to monitor internal networks, organizations are beginning to realize the security benefits that can be gained through monitoring vendors and other third parties that are part of the business ecosystem. Being able to monitor and address third party risk puts us squarely in the realm of next generation continuous monitoring, something many regulators are pushing to see addressed in current risk management strategies.