RIMS and ISACA Release Joint Report “Bridging the Digital Risk Gap”

All too often, IT and risk management professionals seem to be speaking different languages, if they speak to each other at all. Bridging the Digital Risk Gap, the new report jointly authored by RIMS, the risk management society®, and ISACA®, promotes understanding, collaboration and communication between these professionals to help organizations get the most out of their technological investments.

Digital enterprise strategy and execution are emerging as essential horizontal competencies to support business objectives. No longer the sole purview of technical experts, cybersecurity risks and opportunities are now a core component of a business risk portfolio.

Strong collaboration between IT and risk management professionals facilitates strategic alignment of resources and promotes the creation of value across an enterprise.

ISACA’s Risk IT Framework acknowledges and integrates the interaction between the two professional groups by embedding IT practices within enterprise risk management, enabling an organization to secure an optimal risk-adjusted return. In viewing digital risk through an enterprise lens, organizations can better realize a broader operational impact and spur improvements in decision-making, collaboration and accountability. To achieve optimal value, however, risk management should be part of technology implementation from a project’s outset and throughout its life cycle. By understanding the technology life cycle, IT and risk management professionals can identify the best opportunities for collaboration among themselves and with other important functional roles.

IT and risk management professionals both employ various tools and strategies to help manage risk. Although the methodologies used by the two groups differ, they are designed to achieve similar results: practitioners from both professions typically start with a baseline of business objectives and establish context to enable risk-based decision making. By integrating frameworks (such as the NIST Cybersecurity Framework and the ANSI RA.1 risk assessment standard), roles and assessment methods, IT and risk management professionals can better coordinate their efforts to address threats and create value.

For example, better coordination of risk assessments allows organizations to improve performance by identifying a broader range of risks and potential mitigations, and ensures that operations are proceeding within acceptable risk tolerances.

It also provides a clearer, more informed picture of an enterprise’s risks, which can help an organization’s board as it weighs IT funding decisions alongside other business investments. Leveraging the respective assessment techniques also leads to more informed underwriting, and thus improves pricing of insurance programs, terms of coverage, products and services.

Overall, developing clear, common language and mutual understanding can serve as a strong bridge to unite the cultures, bring these two areas together and create significant value along the way.

The report is currently available to RIMS and ISACA members through their respective websites. It can be downloaded through the RIMS Risk Knowledge library or from ISACA at www.isaca.org/digital-risk-gap. For more information about RIMS and to learn about other RIMS publications, educational opportunities, conferences and resources, visit www.RIMS.org. To learn more about ISACA and its resources, visit www.isaca.org.

Should Companies Ban USBs?

Earlier this month, a Chinese woman was arrested after attempting to enter President Donald Trump’s Mar-a-Lago resort while in possession of a number of suspicious electronic devices, including a USB flash drive. Apparently, the drive contained code that allows malicious software to run immediately after being plugged in, though it is still unclear what kind of malware it was. According to news reports, law enforcement also found nine other USB drives in the woman’s hotel room. If someone was able to connect a USB device to a computer on the resort’s network, attackers might be able to access all sorts of sensitive information and potentially gain control of machines on the network.

Historically, USB use has also aided insider threats, whether in the form of employees inadvertently infecting a corporate device or network with a found USB drive, or purposefully causing an infection or removing sensitive information via USB. In perhaps one of the most high-profile of such cases, Edward Snowden reportedly removed NSA documents from a Hawaii facility on a flash drive before fleeing the country and providing those documents to members of the media.

Beyond the headlines, these devices continue to pose everyday risks. People mindlessly plug in flash drives, or carry their business’s most important documents on them, where they could accidentally be left in a hotel room or at a conference packed with corporate rivals. As companies evaluate their security policies and how best to secure their data, many are moving away from USB drives or even banning them outright.

In May 2018, IBM did just that. The company’s global chief information security officer, Shamla Naidoo, said that IBM “is expanding the practice of prohibiting data transfer to all removable portable storage devices (e.g., USB, SD card, flash drive),” and that the prohibition would apply to IBM operations worldwide, which will now rely entirely on the company’s cloud-based storage. Naidoo cited the danger of missing storage devices leading to “financial and reputational damage” as the motivation for the prohibition, and acknowledged that the move may be disruptive for some departments and employees.

A 2016 University of Illinois study also showed that the now-proverbial nightmare scenario of an employee inserting a USB drive they found in a parking lot is actually realistic. After dropping 297 flash drives on a university campus, researchers found that people opened one or more files on 45% of the drives without taking any precautions, and that people moved 98% of the drives from the drop locations. The study’s authors noted that their results suggested people may have picked up the drives and opened files out of altruism (trying to find the owner) or curiosity. But regardless of intent, simply plugging a found flash drive into a company computer can unleash any number of viruses, malware or other cyber maladies on the company’s network.

Of course, doing away with USBs is also not a security panacea. As always, the user is the weakest part of any IT security plan, and even if a business does decide to ban USB storage devices and move their data storage to cloud-based options, employees should still be trained on password protection strategies and other security hygiene best practices. To make employee cyber-awareness training more effective, check out these tips from Risk Management.

Automation: The Key to More Effective Cyberrisk Management

In a perfect cybersecurity world, people would only have access to the data they need, and only when they need it. However, IT budgets are tighter than ever and, in most organizations, manually updating new and existing employees’ access levels on a consistent basis is a time-consuming productivity-killer. As a result, there’s a good chance an employee may accidentally have access to a group of files that they should not. As one can imagine, security that is loosely managed across the enterprise is a breeding ground for malware.
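The kind of access drift described above is exactly what a small amount of automation can catch. As a hypothetical sketch (the roles, permissions and users here are invented for illustration), a periodic script could flag any permissions a user holds beyond their role's baseline:

```python
# Minimal sketch of an automated access review. All data is illustrative;
# a real review would pull roles and grants from a directory or IAM system.

ROLE_PERMISSIONS = {
    "engineer": {"source_repo", "build_server"},
    "analyst": {"reports", "dashboards"},
}

users = [
    {"name": "alice", "role": "engineer", "granted": {"source_repo", "build_server"}},
    {"name": "bob", "role": "analyst", "granted": {"reports", "dashboards", "source_repo"}},
]

def excess_access(user):
    """Return the permissions a user holds beyond their role's baseline."""
    return user["granted"] - ROLE_PERMISSIONS[user["role"]]

for user in users:
    extra = excess_access(user)
    if extra:
        # Flag for revocation rather than waiting for a manual audit.
        print(f"{user['name']}: revoke {sorted(extra)}")
```

Run on a schedule, even a simple check like this closes the gap between an employee's role changing and their access being updated.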

The velocity of cyberattacks has accelerated as well. It is easier than ever for cyber criminals to access exploits, malware, phishing tools, and other resources to automate the creation and execution of an attack. Digitization, Internet connectivity, and smart device growth are creating more vectors for attackers to gain an entry point into an organization’s network, and this trend only gets worse as you think about the Internet of Things, which could have concrete impact on machines from production equipment to planes and cars.

One way IT departments can help mitigate the cyberrisk of employee access overload is by automating security policies and processes such as the monitoring, detection and remediation of threats. In the past, organizations have spent heavily on prevention technologies: disparate point solutions such as anti-virus software and firewalls that try to act before an attack occurs. Prevention is important, but it is not 100% effective. And how could technology used for prevention stop a cyberattacker who has already infiltrated the network? If prevention were the end-all, be-all in security tools, we wouldn’t be reading about cyberattacks on a daily basis. As more companies realize this, spending is shifting toward detection and response.

To help determine, or better yet safely manage, your cyberrisk, you must look at the threat (which is ever-growing due to persistent attackers and advanced techniques), vulnerability (how exposed your data is to cyberattacks), and consequence (the amount of time threats are doing damage in your network). Or, more simply put: risk = threat × vulnerability × consequence.
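The formula can be made concrete with a toy calculation (the 0-10 scales and scores below are invented for illustration, not part of any standard):

```python
# Toy illustration of risk = threat x vulnerability x consequence.
# All scores are hypothetical; real programs would use calibrated
# inputs from threat intelligence, scanning and incident data.

def risk_score(threat, vulnerability, consequence):
    """Each factor scored 0-10; risk scales multiplicatively."""
    return threat * vulnerability * consequence

baseline = risk_score(threat=8, vulnerability=6, consequence=5)   # 240
# Threat can't be controlled, so reduce vulnerability (patching,
# least privilege) and consequence (faster detection and response):
improved = risk_score(threat=8, vulnerability=3, consequence=2)   # 48

print(baseline, improved)
```

Because the factors multiply, halving either vulnerability or consequence halves the overall risk, which is why investment focuses on those two variables.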

To manage your cyberrisk, you need to optimize at least one of the aforementioned variables. Unfortunately, threat is the one variable that cannot be optimized because hackers will never stop attacking and are creating malware at an escalating rate. In fact, a G DATA study showed that 6 million new malware strains were found by researchers in 2014—almost double the number of new strains found the previous year. Instead, what organizations can focus on is investing in the right solutions that target the remaining two variables: vulnerability and consequence.

  • Step One: Organizations must make sure they know their environments well (such as endpoints, network, and access points) and know where their sensitive information lives. It’s always a good idea to rank systems and information in terms of criticality, value and importance to the business.
  • Step Two: Organizations must gain increased visibility into potential threat activity occurring in the environment. As is often said, there are two types of companies: those that have been attacked and those that have been attacked and don’t know it. A way to increase visibility is through the deployment of behavior-based technology on the network, like sandboxes. Organizations are now shifting their focus to the endpoint. Today’s attacks require endpoint and network visibility, including correlation of this activity. The challenge with visibility is that it can be overwhelming.
  • Step Three: There needs to be some process or mechanism to determine which alerts matter and which ones should be prioritized. In order to gain increased visibility into environments and detect today’s threats, organizations clearly need to deploy more contemporary detection solutions and advanced threat analytics.
  • Step Four: Invest more in response and shift the mindset to continuous response. If attacks are continuous and we are continuously monitoring, then the next logical step is to respond continuously. Historically, response has been episodic or event-driven (“I’ve been attacked – Do something!”). This mindset needs to shift to continuous response (“I’m getting attacked all the time – Do something!”). A key ingredient to enable continuous incident response will be the increasing use of automation. Why? Automation is required to keep up with attackers that are leveraging automation to attack. It’s also required to address a key challenge that large and small companies face: the significant cybersecurity skills shortage.
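The four steps above can be sketched as a minimal, hypothetical triage-and-response loop (the asset names, criticality rankings and threshold are all illustrative, not a prescribed method):

```python
# Hypothetical sketch of continuous response: rank alerts by asset
# criticality and severity, then auto-remediate the highest-priority ones.

# Step One: know your environment and rank assets by criticality.
ASSET_CRITICALITY = {"payroll-db": 3, "build-server": 2, "kiosk-pc": 1}

# Step Two: visibility produces a continuous stream of alerts.
alerts = [
    {"asset": "kiosk-pc", "severity": 4},
    {"asset": "payroll-db", "severity": 5},
    {"asset": "build-server", "severity": 2},
]

def priority(alert):
    # Step Three: decide which alerts matter by combining criticality
    # with severity.
    return ASSET_CRITICALITY[alert["asset"]] * alert["severity"]

def respond(alert):
    # Step Four: continuous, automated response instead of episodic
    # handling; isolate high-priority assets, log the rest.
    action = "isolate" if priority(alert) >= 10 else "log"
    return f"{action} {alert['asset']}"

for alert in sorted(alerts, key=priority, reverse=True):
    print(respond(alert))
```

In practice the loop would run continuously against live telemetry rather than a static list, which is precisely the shift from event-driven to continuous response described above.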

Advanced threat analytics should be important to any organization that takes its security posture seriously. The majority of threats being faced today are getting more advanced by the minute. If an organization relies solely on legacy, signature-based detection, its defenses will be easily breached. It’s important for teams to understand that an organization’s cyber defense and response capabilities must constantly evolve to match the evolving threat landscape. This includes both automatic detection and automatic remediation. Automatic remediation dramatically reduces the time that malware can exist on a network, and also reduces the amount of time spent investigating the issue at hand. Automated security defenses give IT teams a forensic view of every packet that moves through the network, allowing them to spot anomalies and threats before they have a chance to wreak havoc. And because these tools work at machine speed, they can deal with a high volume of threats without requiring human intervention, taking some of the load off overburdened security teams and ultimately freeing them to act decisively and quickly, before network damage is done.

Ernst & Young’s Global Information Security Survey

Last week, I attended the Ernst & Young media roundtable to hear the results of its 2010 Global Information Security Survey (GISS). The survey includes responses from participants in 1,598 organizations in 56 countries across all major industries.

With the increase in the use of external service providers and the adoption of new technologies such as cloud computing, social networking and Web 2.0, companies are increasingly exposed to data breach threats. In fact, 60% of respondents perceived an increase in the level of risk they face due to the use of social networking, cloud computing and personal devices in the enterprise. And according to the survey, companies are taking a proactive stance as 46% indicated that their annual investment in information security is increasing. Though IT professionals are trying, not all are succeeding in keeping up with new tech threats.

“I’ve never seen this kind of shift in IT before,” said Jose Granado, the Americas practice leader for information security services within Ernst & Young. “Security professionals are trying to keep up with the pace, but aren’t really doing a great job. They have limited resources and a limited budget.”

A concern for IT professionals is mobile computing. Demands of the mobile workforce are driving changes to the way organizations support and protect the flow of information. In fact, 53% of respondents indicated that increased workforce mobility is a significant or considerable challenge to effectively delivering their information security initiatives. Aside from investing more on data loss prevention technologies, 39% of respondents are making policy adjustments to address the potential new or increased risks.

“You have to implement realistic policies,” said Chip Tsantes, principal within the financial services division of Ernst & Young. “They need to be liveable and workable, or else people will go around them. You can’t simply ban things.”

Another major concern for IT pros is the growing popularity of cloud computing. Both Granado and Tsantes were shocked to learn that 45% of respondents (primarily those on the non-financial services side) are currently using, evaluating or planning to use cloud computing services within the next 12 months.

“From the standpoint of a traditional IT security professional, endorsing or supporting a cloud environment is counter-intuitive,” said Granado. “How do I know where my data is and how do I know it is protected?”

So how do companies increase their confidence in cloud computing? According to the survey, 85% say that external certification would increase their trust.

So I asked Granado and Tsantes if they could tell me when they believed there would be a universal set of standards for cloud computing providers. Granado feels there is a two-to-three-year timeline for having something solidified. He says businesses are going to drive it; if businesses continue to push, “cloud providers would have to follow.” With more and more sensitive data calling the cloud home, let’s hope Granado is being conservative with his estimate.
