In late 2002, the U.S. Government enacted a new law designed to hold each federal agency accountable for developing, documenting, and implementing an agency-wide information security program, including for its contractors. The Federal Information Security Management Act (FISMA) was one of the first information security laws to require agencies to perform continuous assessments and develop procedures for detecting, reporting, and responding to security incidents.

With limited technological resources available for monitoring and assessing performance over time, however, agencies struggled to adhere to the law’s goals and intent. Ironically, although FISMA’s goal was to improve oversight of security performance, early implementation resulted in annual reviews of document-based practices and policies. Large amounts of money were spent bringing in external audit firms to perform these assessments, producing more paper-based reports that, although useful for examining a wide set of criteria, failed to verify the effectiveness of security controls, focusing instead on their existence.

John Streufert, a leading advocate of performance monitoring at the State Department and later at DHS, estimated that by 2009, more than $440 million per year was being spent on these paper-based assessments, with findings and recommendations becoming out of date before they could be implemented. Clearly, this risk assessment methodology was not yielding the outcomes the law’s authors had in mind, and in time agencies began to look for solutions that could actually monitor their networks and provide real-time results.

Thanks to efforts by Streufert and others, it wasn’t long before “continuous monitoring” solutions existed. But, just as with all breakthrough technologies, early attempts at continuous monitoring were limited by high costs, difficult implementations, and a lack of staffing resources. As continuous monitoring solutions made it into IT security budgets, organizations and agencies were challenged to make optimal use of tools that required tuning and constant maintenance to show value. False positives and missed signals left many IT teams feeling as if they were drinking from a fire hose of data, and in many cases the value of continuous monitoring was lost.

However, solutions today offer a number of benefits, including easy operationalization, lower costs, and reduced resource requirements. Many options, such as outside-in performance rating solutions, require no hardware or software installation and have been shown to produce immediate results. These tools continuously analyze vast amounts of external data on security behaviors and generate daily ratings for the network being monitored, with alerts and detailed analytics available to identify and remediate security issues. The ratings are objective measures of security performance, with higher ratings indicating a stronger security posture.
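To make the mechanics concrete, here is a minimal Python sketch of how a daily rating might be derived, assuming a handful of externally observable signals combined by weight. The signal names, weights, and the 250–900 scale are illustrative assumptions only, not any vendor’s actual model.

```python
"""Illustrative sketch of an outside-in security rating.

All signal names, weights, and the rating scale below are
hypothetical; real rating providers use proprietary data and models.
"""

# Externally observable signals for one network, normalized so that
# 1.0 is best and 0.0 is worst (values here are made up).
EXTERNAL_SIGNALS = {
    "botnet_infections": 0.7,   # fewer observed infections -> higher value
    "patching_cadence": 0.9,    # exposed services patched promptly
    "tls_configuration": 0.8,   # certificate and protocol hygiene
    "open_ports": 0.6,          # risky services reachable from the Internet
}

# Hypothetical weights reflecting how strongly each signal is assumed
# to correlate with breach likelihood.
WEIGHTS = {
    "botnet_infections": 0.4,
    "patching_cadence": 0.3,
    "tls_configuration": 0.2,
    "open_ports": 0.1,
}

def daily_rating(signals: dict[str, float], weights: dict[str, float]) -> int:
    """Combine weighted signals into a rating on an illustrative 250-900 scale."""
    score = sum(signals[name] * weights[name] for name in weights)
    return round(250 + score * (900 - 250))

if __name__ == "__main__":
    print(daily_rating(EXTERNAL_SIGNALS, WEIGHTS))  # 750 for the values above
```

Whatever the vendor-specific details, the output is a single, trendable number that can be recomputed every day without touching the monitored network.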

Used in conjunction with other assessment methods, ratings give organizations a more comprehensive view of security posture, especially as they provide ongoing visibility over time rather than a point-in-time result. The fidelity of “outside-in” assessments compares favorably with the results of manual questionnaires and assessments because outside-in solutions eliminate some of the bias and confusion that can appear in personnel responses. Additionally, outside-in performance monitoring can be used to quickly and easily verify the effectiveness of controls, not just the existence of policies and procedures that may or may not be properly implemented.

These changes have made continuous performance monitoring and security ratings more appealing to organizations across the commercial and government space. Organizations have learned that real-time, continuous performance monitoring allows them to immediately identify and respond to issues and possibly avoid truly catastrophic events, as research has shown a strong correlation between performance ratings and significant breach events. Furthermore, as it becomes easier to monitor internal networks, organizations are beginning to realize the security benefits that can be gained through monitoring vendors and other third parties that are part of the business ecosystem. Being able to monitor and address third-party risk puts us squarely in the realm of next-generation continuous monitoring, something many regulators are pushing to see addressed in current risk management strategies.


According to the 2015 Makovsky Wall Street Reputation Study, released Thursday, 42% of U.S. consumers believe that failure to protect personal and financial information is the biggest threat to the reputation of the financial firms they use. What’s more, three-quarters of respondents said that unauthorized access to their personal and financial information would likely lead them to take their business elsewhere. In fact, security of personal and financial information matters far more to customers than a financial services firm’s ethical responsibility to customers and the community (23%).

Executives from financial services firms seem to know this already: 83% agree that the ability to combat cyber threats and protect personal data will be one of the biggest issues in building reputation in the next year.

The study found that this trend is already having a very real impact: 44% of financial services companies report losing 20% or more of their business in the past year due to reputation and customer satisfaction issues. When asked to rank the issues that negatively affected their company’s reputation over the last 12 months, the top three “strongly agree” responses in 2015 from communications, marketing and investor relations executives at financial services firms were:

  • Financial performance (47%), up from 27% in 2014
  • Corporate governance (45%), up from 24% in 2014
  • Data breaches (42%), up from 24% in 2014

Earning consumer trust will take some extraordinary effort, as a seemingly constant stream of breaches in the news and in personal experience has clearly made customers more skeptical of data security across a range of industries. When asked which institutions they trust most with their personal information and privacy, today’s consumers ranked traditional financial institutions—including insurers—above new online providers by a wide margin, but an even larger share of consumers do not trust any organization to protect their data:

  • Bank/brokerage, insurance, or credit card company (33%)
  • U.S. Government (IRS, Social Security) or U.S. Postal Service (13%)
  • Current healthcare company (4%)
  • Online wallets (PayPal, Google Wallet, Apple Pay) (4%)
  • Retail chain or small businesses (4%)
  • All other (3%)
  • None of these organizations or companies can be trusted (39%)


The termination of support for Windows Server 2003 (WS2003) is less than four months away, leaving many enterprises in a race against the clock before the system’s security patches cease. In fact, 61% of businesses have at least one instance of WS2003 running in their environment, which translates into millions of installations across physical and virtual infrastructures. While many of these businesses are well aware of the rapidly approaching July 14 deadline and the security implications of missing it, only 15% have fully migrated their environment. So why are so many enterprises slow to make the move?

Migration Déjà Vu

The looming support deadline, the burst of security anxiety, the mad rush to move off a retiring operating system… sound familiar? We’ve seen this scenario before, just 12 months ago with the expiration of Windows XP support.

While there may be fewer physical 2003 servers in an organization than there were XP desktops, a server migration is more challenging and presents a higher degree of risk. From an endpoint perspective, replacing one desktop with the latest version of Windows affects only one user, while a server might connect to thousands of users and services. Having a critical server unavailable for any length of time could cause major disruption and pose a threat to business continuity.

Compared to the desktop, server upgrades are significantly more complex, especially once you add hardware compatibility issues and the need to redevelop applications that were written for the now-outdated WS2003. Clearly, embarking on a server migration can be a daunting process – much more so than the XP migration – and that appears to be what is holding many organizations back.

Cost of Upgrading versus Staying

Moving off WS2003 can be a drain on time and resources. While most IT administrators understand how to upgrade an XP operating system, the intricacy of server networks means many migrations will require external consultancy, especially if they are left to the last minute. It’s no wonder that companies this year are allocating an average of $60,000 for their server migration projects. Still, it’s a fair price to pay when you consider the cost of skipping the upgrade entirely. Legacy systems are expensive to maintain without regular fixes for bugs and performance issues. And without security support, organizations will be left exposed to new and sophisticated threats. Meanwhile, hackers will be looking to these migration stragglers as prime targets. For those who fall victim to exploits as a result, it’s not just financial losses they will have to deal with, but a blow to their reputation as well. Companies continuing to run WS2003 after support ends will also fall out of compliance scope, adding penalties that could further damage the business.

If they haven’t already, businesses still running the retiring system should be thinking now about upgrading to Windows Server 2012. It’s easier said than done, of course. A server migration can take as long as six months, so even if businesses start now, there could still be a two-month period during which servers run unsupported. This means organizations should be putting defenses in place to secure their datacenters for the duration of the migration and beyond.

Control Admin Rights

While sysadmins are notorious for demanding privileged access to applications, the reality is that allocating admin rights to them is extremely risky, since malware often seeks out privileged accounts to gain entry to a system and spread across the network. Plus, humans aren’t perfect, and the possibilities for accidental misconfiguration when logging onto a server are endless. In fact, research has shown that 80% of unplanned server outages are due to ill-planned configuration changes by administrators.

Admin rights in a server environment should be limited to the point where sysadmins are given only the privileges they need – for example, to respond to urgent break-fix scenarios. Doing so can reduce exploit potential significantly: an analysis of the Patch Tuesday security bulletins issued by Microsoft throughout 2014 found that 98% of Critical vulnerabilities affecting Windows operating systems could be mitigated by removing admin rights.
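Even before deploying a privilege management product, simply knowing who holds admin rights is a useful first step. Below is a minimal Python sketch that audits the local Administrators group on a Windows server against an approved baseline; the baseline set is a hypothetical placeholder, and the parsing assumes the default English output of the `net` command.

```python
"""Minimal sketch: flag local admin accounts outside an approved baseline.

Run on a Windows server; assumes English-language `net` output.
The APPROVED_ADMINS set is a hypothetical placeholder for whatever
your privilege-management policy actually allows.
"""

import subprocess

APPROVED_ADMINS = {"Administrator", "CORP\\svc-breakfix"}  # placeholder baseline

def local_admins() -> set[str]:
    """Parse `net localgroup Administrators` into a set of member names."""
    lines = subprocess.run(
        ["net", "localgroup", "Administrators"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # Members appear between the dashed separator line and the trailing
    # "The command completed successfully." status line.
    start = next(i for i, line in enumerate(lines) if line.startswith("---")) + 1
    return {
        line.strip()
        for line in lines[start:]
        if line.strip() and not line.startswith("The command")
    }

if __name__ == "__main__":
    for account in sorted(local_admins() - APPROVED_ADMINS):
        print(f"Unapproved admin account: {account}")
```

A real privilege management solution enforces least privilege continuously at logon and elevation time; a script like this only surfaces drift for review.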

Application Control

Application Control (whitelisting) adds another layer of control to server environments, including those that are remotely administered, by applying simple rules to manage trusted applications. Trusted applications run under the configured policies, while unauthorized applications and interactions are blocked. This defense is particularly important for maintaining business continuity while development teams are rewriting and refactoring apps.
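At its core, whitelisting is a trust decision made before anything executes. The sketch below shows the idea as a hash-based allowlist in Python; the launcher wrapper and digest list are hypothetical illustrations, since real application control products enforce the policy inside the operating system at process creation.

```python
"""Minimal sketch of hash-based application whitelisting: a binary may
launch only if its SHA-256 digest appears in the allowlist. The digest
below is a placeholder, not a real trusted application."""

import hashlib
import subprocess
import sys

# Hypothetical allowlist keyed by SHA-256 digest of each trusted binary.
ALLOWLIST = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256(path: str) -> str:
    """Hash the file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def launch_if_trusted(path: str, *args: str) -> None:
    """Refuse to start any binary whose digest is not on the allowlist."""
    if sha256(path) not in ALLOWLIST:
        raise PermissionError(f"{path} is not on the application allowlist")
    subprocess.run([path, *args], check=True)

if __name__ == "__main__":
    launch_if_trusted(sys.argv[1], *sys.argv[2:])
```

Hashing pins trust to the exact binary, which is why allowlists must be kept current as teams ship rewritten and refactored apps during the migration.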

Sandboxing

Limiting privileges and controlling applications set a solid foundation for securing a server migration, but even with these controls, the biggest window of opportunity for malware to enter the network – the Internet – remains exposed. Increasingly, damage is caused by web-borne malware, such as employees unwittingly opening untrusted PDF documents or clicking through to websites with unseen threats. Vulnerabilities in commonly used applications like Java and Adobe Reader can be exploited by an employee simply viewing a malicious website.

Sandboxing is the third line of defense that all organizations should have in place, at all times. By isolating untrusted content – and, by association, any web-borne threats or malicious activity – in a separate secure container, sandboxing empowers individuals to browse the Internet freely without compromising the network. Instant web access is expected in modern workplaces, so sandboxing is ideal for securing Internet activity without disrupting productivity or the user experience.
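To illustrate the isolation principle, here is a minimal Python sketch that opens an untrusted PDF inside a throwaway container with no network access. The container image name is a hypothetical placeholder, and production sandboxes (micro-VMs and dedicated isolation products) are considerably more robust than this.

```python
"""Minimal sketch: render untrusted content in a disposable, offline
container. Assumes Docker is installed and that a hypothetical
"pdf-viewer-image" containing poppler's pdftotext exists locally."""

import pathlib
import subprocess

def view_in_sandbox(document: str) -> None:
    """Extract text from an untrusted PDF inside an isolated container."""
    doc = pathlib.Path(document).resolve()
    subprocess.run(
        [
            "docker", "run", "--rm",          # throw the container away afterwards
            "--network", "none",              # no network: payloads cannot call home
            "--read-only",                    # immutable container filesystem
            "-v", f"{doc}:/untrusted/{doc.name}:ro",  # mount the file read-only
            "pdf-viewer-image",               # hypothetical image with pdftotext
            "pdftotext", f"/untrusted/{doc.name}", "-",
        ],
        check=True,
    )

if __name__ == "__main__":
    view_in_sandbox("suspicious-invoice.pdf")
```

Even if the document exploits a viewer vulnerability, the malicious code lands in a disposable container with no network path back to the corporate environment.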

Windows Server 2003 Migration: A Window of Opportunity

It shouldn’t take an OS end of life to spur change – especially security change. Organizations and their IT teams need to be thinking about how they can adapt their defenses, ensuring that they are primed to handle the new and sophisticated threats emerging every day. A migration is often the perfect time to revitalize an organization’s security strategy. With the migration process as a catalyst for reinvention, IT can lean on solutions like Privilege Management, Application Control and Sandboxing not only to lock down the migration, but to carry those protections beyond it, providing defense in depth on the next version of Windows.


The typical organization loses 5% of revenue each year to fraud – a figure that, applied to the estimated Gross World Product, translates to a projected global fraud loss of $3.7 trillion annually, according to the ACFE 2014 Report to the Nations on Occupational Fraud and Abuse.

In its new Embezzlement Watchlist, Hiscox examines employee theft cases that were active in United States federal courts in 2014, with a specific focus on businesses with fewer than 500 employees to get a better sense of the range of employee theft risks these businesses face. While sizes and types of thefts vary across industries, smaller organizations saw higher incidences of embezzlement overall.

According to the report, “When we looked at the totality of federal actions involving employee theft over the calendar year, nearly 72% involved organizations with fewer than 500 employees. Within that data set, we found that four of every five victim organizations had fewer than 100 employees; more than half had fewer than 25 employees.”

Overall, they found:

[Infographic: Hiscox Embezzlement Watchlist]

It is particularly interesting to note that women orchestrate the majority of these thefts (61%) – a rarity in most kinds of crime. Yet the wage gap extends even to ill-gotten gains, Hiscox found: while women were responsible for more of these actions, they made nearly 30% less from their schemes than men did.

Drilling down into specific industries, Hiscox found that financial services companies were at the greatest risk, with over 21% of employee thefts – the largest industry segment – targeting an organization in this field, including banks, credit unions and insurance companies. Other organizations frequently struck by employee theft include non-profits (11%), municipalities (10%) and labor unions (9%). Groups in the financial services, real estate and construction, and non-profit sectors had the greatest total number of cases in the Hiscox study, while retail entities and the healthcare industry suffered the largest median losses.

For more of the report’s insight on specific industries, check out the infographic below:

[Infographic: Hiscox Embezzlement Watchlist Targeted Industries]
