RIMS TechRisk/RiskTech: Opportunities and Risks of AI

On the first day of the RIMS virtual event TechRisk/RiskTech, author and UCLA professor Dr. Ramesh Srinivasan gave a keynote titled “The Opportunities and Downside Risks of Using AI,” touching on the key flashpoints of current technological advancement and what they mean for risk management. He noted that as data storage has become far cheaper and computation faster, risk assessment technology has improved. But with these improvements come serious risks.

Srinivasan provided an overview of where artificial intelligence and machine learning stand, and how companies use these technologies. AI is “already here,” he said, and numerous companies are using the technology, including corporate giants Uber and Airbnb, whose business models depend on AI. He also stressed that AI is not the threat portrayed in movies, and that these portrayals have led to a kind of “generalized AI anxiety,” a fear of robotic takeover or the end of humanity—not a realistic scenario.

However, the algorithms that support these technologies and govern many users’ online activities collect so much personal information that they could end up functioning like the “pre-cogs” from Minority Report, predicting users’ future behavior. Companies are using these algorithms to make decisions about users, sometimes based on data sets skewed to reflect the biases of the people who collected the data in the first place.

Often, technology companies will sell products with little transparency into the algorithms and data sets that the product is built around. In terms of avoiding products that use AI and machine learning that are built with implicit bias guiding those technologies, Srinivasan suggested A/B testing new products, using them on a trial or short-term basis, and using them on a small subset of users or data to see what effect they have.
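Such a trial can be as simple as scoring a balanced subset of users and comparing outcome rates across groups before full rollout. The sketch below is illustrative only: the scoring function, group labels and approval threshold are hypothetical stand-ins, since real vendor models are typically opaque.

```python
# A minimal sketch of trialing a vendor scoring tool on a small, balanced
# subset before rollout. All names and numbers here are hypothetical.

def model_score(applicant):
    # Stand-in for the vendor's opaque model: a biased one may quietly
    # award a bonus to one group.
    bonus = 0.2 if applicant["group"] == "A" else 0.0
    return applicant["experience"] * 0.1 + bonus

# Balanced trial subset: both groups, experience levels 0 through 10.
trial = [{"group": g, "experience": e}
         for g in ("A", "B") for e in range(11) for _ in range(10)]

def approval_rate(group):
    outcomes = [model_score(a) >= 0.5 for a in trial if a["group"] == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
# A large gap between otherwise comparable groups is a red flag for bias.
print(f"Group A: {rate_a:.1%}  Group B: {rate_b:.1%}  gap: {rate_a - rate_b:.1%}")
```

Because the trial subset is balanced by construction, any gap in approval rates points at the model rather than at the applicant pool.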

When deciding which AI/machine learning technology their companies should use, Srinivasan recommended that risk professionals map out what technology their company is using, weigh the benefits against the potential risks, and examine those risks thoroughly, including the short- and long-term threats they pose to the organization.

Specific risks of AI (as companies currently use it) that risk professionals should consider include:

  • Economic risk in the form of the gig economy, which, while making business more efficient, also leaves workers with unsustainable income
  • Increased automation in the form of the internet of things, driverless vehicles, wearable tech, and other ways of replacing workers with machines risks making labor obsolete.
  • Users do not get benefits from people and companies using and profiting off of their data.
  • New technologies also have immense environmental impact, including the amount of power that cryptocurrencies require and the health risks of electronic waste.
  • Issues like cyberwarfare, intellectual property theft and disinformation are all exacerbated as these technologies advance.
  • The bias inherent in AI/machine learning has real-world impacts. For example, court sentencing often relies on biased predictive algorithms, as do policing, health care facilities (AI-generated cancer treatment recommendations, for example) and business functions like hiring.

Despite these potential pitfalls, Srinivasan was optimistic, noting that risk professionals “can guide this digital world as much as it guides you,” and that “AI can serve us all.”

RIMS TechRisk/RiskTech continues today, with sessions including:

  • Emerging Risk: AI Bias
  • Connected & Protected
  • Tips for Navigating the Cyber Market
  • Taking on Rising Temps: Tools and Techniques to Manage Extreme Weather Risks for Workers
  • Using Telematics to Give a Total Risk Picture

You can register and access the virtual event here, and sessions will be available on-demand for the next 60 days.

On Data Privacy Day, Catch Up on These Critical Risk Management and Data Security Issues

Happy Data Privacy Day! Whether it is cyberrisk, regulatory risk or reputation risk, data privacy is increasingly intertwined with some of the most critical challenges risk professionals face every day, and ensuring the security and compliance of data assets can make or break a business.

In Cisco’s new 2021 Data Privacy Benchmark Report, 74% of the 4,400 security professionals surveyed saw a direct correlation between privacy investments and the ability to mitigate security losses. The current climate is also casting more of a spotlight on privacy work: 60% of organizations reported they were not prepared for the privacy and security requirements of the shift to remote work, and 93% turned to privacy teams to help navigate these pandemic-related challenges. Amid COVID-19 response, headline-making data breaches and worldwide regulatory activity, data privacy is also a critical competency area for risk professionals in executive leadership and board roles, with 90% of organizations now reporting privacy metrics to their C-suites and boards.

“Privacy has come of age—recognized as a fundamental human right and rising to a mission-critical priority for executive management,” according to Harvey Jang, vice president and chief privacy officer at Cisco. “And with the accelerated move to work from anywhere, privacy has taken on greater importance in driving digitization, corporate resiliency, agility, and innovation.”

In honor of Data Privacy Day, check out some of Risk Management’s recent coverage of data privacy and data security:

CPRA and the Evolution of Data Compliance Risks

Also known as Proposition 24, the new California Privacy Rights Act (CPRA) aims to enhance consumer privacy protections by clarifying and building on the expectations and obligations of the California Consumer Privacy Act (CCPA).

Frameworks for Data Privacy Compliance

As new privacy regulations are introduced, organizations that conduct business and have employees in different states and countries are subject to an increasing number of privacy laws, making the task of maintaining compliance more complex. While these laws require organizations to administer reasonable security implementations, they do not outline what specific actions should be taken. Proven security frameworks like Center for Internet Security (CIS) Top 20, HITRUST CSF, and the National Institute of Standards and Technology (NIST) Framework can provide guidance.

Protecting Privacy by Minimizing Data

New obligations under data privacy regulation in the United States and Europe require organizations not only to rein in data collection practices, but also to reduce the data already held. Furthering this imperative, over-retention of records or other information can lead to increased fines in the case of a data breach.

As a result, organizations are moving away from the practice of collecting all the data they can toward a model of “if you can’t protect it, don’t collect it.”
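The “if you can’t protect it, don’t collect it” model can be enforced mechanically with a field allowlist and a retention purge. A minimal sketch, with hypothetical field names and a retention period chosen purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allowlist: store only fields with a documented purpose.
ALLOWED_FIELDS = {"customer_id", "email", "created_at"}
RETENTION = timedelta(days=365)  # illustrative retention period

def minimize(record):
    """Drop any field not on the allowlist before the record is stored."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def purge_expired(records, now=None):
    """Delete records held longer than the retention period."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["created_at"] <= RETENTION]

raw = {"customer_id": 42, "email": "a@example.com", "ssn": "000-00-0000",
       "created_at": datetime.now(timezone.utc)}
stored = minimize(raw)  # "ssn" is never stored: can't protect it, don't collect it

old = minimize({"customer_id": 7, "email": "old@example.com",
                "created_at": datetime.now(timezone.utc) - timedelta(days=400)})
kept = purge_expired([stored, old])  # the 400-day-old record is purged
```

Minimizing at ingestion rather than cleaning up afterward also shrinks the blast radius, and potential fines, of any later breach.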

3 Tips for Protecting Remote Employees’ Data

As COVID-19 continues to force many employees to work from home, companies must take precautions to protect sensitive data from new cyberattack vulnerabilities. That means establishing organization-wide data-security policies that take remote workers into account and inform them of the risks and how to avoid them. These three tips can help keep your organization’s data safe during the work-from-home era.

What to Do After the EU-US Privacy Shield Ruling

It was previously thought that the EU-US Privacy Shield aligned with the EU’s General Data Protection Regulation (GDPR), but following the CJEU’s recent ruling, the Privacy Shield no longer provides a mechanism for legitimizing cross-border data flows to the United States. This has far-reaching consequences for all organizations that currently rely on it. In light of the new ruling, risk professionals must help their organizations to reevaluate data strategies and manage heightened regulatory risk going forward.

The Risks of School Surveillance Technology

Schools confront many challenges related to students’ safety, from illnesses, bullying and self-harm to mass shootings. To address these concerns, they are increasingly turning to a variety of technological options to track students and their activities. But while these tools may offer innovative ways to protect students, their inherent risks may outweigh the potential benefits. Tools like social media monitoring and facial recognition are creating new liabilities for schools.

2020 Cyberrisk Landscape

As regulations like CCPA and GDPR establish individuals’ rights to transparency and choice in the collection and use of their personal data, one can expect to see more people exercise these rights.

In turn, businesses need to ensure they have formal and efficient processes in place to comply with such requests in the clear terms and prompt manner these regulations require, or risk fines and reputation fallout. These processes will also need to provide sufficient documentation to attest to compliance, so if they have not already, businesses should be building auditable and iterative procedures for “data revocation.”
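An auditable “data revocation” procedure pairs each deletion with an append-only log entry that can later attest to compliance. A minimal in-memory sketch, with invented identifiers; a production system would use durable storage and propagate erasure to backups and downstream processors:

```python
import json
from datetime import datetime, timezone

# Hypothetical in-memory store and append-only audit log.
user_data = {"u123": {"email": "a@example.com"},
             "u456": {"email": "b@example.com"}}
audit_log = []

def handle_deletion_request(user_id):
    """Process a data-revocation request and record an auditable entry."""
    found = user_id in user_data
    user_data.pop(user_id, None)  # idempotent: repeated requests are safe
    audit_log.append({
        "user_id": user_id,
        "action": "erasure",
        "found": found,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return found

handle_deletion_request("u123")
print(json.dumps(audit_log[-1]))  # documentation to attest to compliance
```

Logging the request even when no data is found is deliberate: the record proves the organization handled the request, which is what an auditor will ask for.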

Data Privacy Governance in the Age of GDPR

As personal information has become a monetizable asset, risk, compliance and data experts have increasingly been forced to address the regulatory and operational ramifications of the rapid, mass availability of personal customer and employee data circulated both inside and outside of organizations. With new data protection regulations, Canadian and U.S. companies must reassess how they process and safeguard personal information.

Key Features of India’s New Data Protection Law

Among the new data protection laws on the horizon is India’s Personal Data Protection Bill. While the legislation has not yet been approved and is likely to undergo changes before it is enacted, its fundamental structure and broad compliance obligations are expected to remain the same. Companies both inside and outside India should familiarize themselves with its requirements and begin preparing for how it will impact their data processing activities.

Strategies to Prevent Internal Fraud

As employees can be key perpetrators of fraud, creating and implementing best practices with regard to insiders is a key part of an enterprise’s everyday risk management procedures. For example, developing internal controls that involve multiple layers of review for financial transactions, and arranging independent reviews of the company’s financial records can prevent malfeasance, detect ongoing fraud and prevent it from continuing.

In fact, according to Kroll’s 2019 Global Fraud and Risk Report, businesses discovered insider fraud by conducting internal audits 38% of the time, through external audits 20% of the time and from whistleblowers 11% of the time.

Technology solutions provider Column Case Investigative recently examined five common types of fraud that businesses face, including employees falsifying their timesheets to steal money from the company, stealing intellectual property or passing off counterfeit items as genuine, funneling money from vendors to themselves, and soliciting favors or compensation from clients or vendors for preferential treatment. These tactics can not only impact a company’s profits and expose it to possible litigation, but also pose risks to its reputation with customers and partners, as well as its competitiveness.

To best mitigate these risks, the provider recommended that companies do their due diligence in the hiring process to detect any warning signs that applicants may have a motive to commit fraud. To limit intellectual property theft and misuse, they should restrict access to important information and materials.

Enterprises can also create clear ethical standards for employee conduct and a positive culture in which workers are happier, more committed to the company and more comfortable reporting fraud when they see or suspect it happening.

Check out the infographic below for more best practices to mitigate employee fraud risks:

Assessing the Legal Risks in AI—And Opportunities for Risk Managers

Last year, Amazon made headlines for developing a human resources hiring tool fueled by machine learning and artificial intelligence. Unfortunately, the tool came to light not as another groundbreaking innovation from the company, but for the notable gender bias it had learned from its training data and amplified in the candidates it highlighted for hiring.

As Reuters reported, the models detected patterns in a decade of candidates’ resumes and the resulting hiring decisions, but those decisions reflected a tech industry that is disproportionately male. The program, in turn, learned to favor male candidates.
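How a model amplifies skew in its training data can be shown with a toy frequency-based scorer. The resume keywords, counts and scoring scheme below are invented for illustration; they are not Amazon’s actual system, only the same mechanism in miniature:

```python
from collections import Counter

# Hypothetical history: past hires skew 18-to-1 toward features that
# happened to appear on male candidates' resumes.
past_hires = (["chess club", "football"] * 9) + ["women's chess club"]
feature_counts = Counter(past_hires)
total = sum(feature_counts.values())

def score(features):
    # Naive model: score a resume by how often its features appeared
    # among past hires. It has no notion of gender, only of frequency.
    return sum(feature_counts[f] / total for f in features) / len(features)

# Two equally qualified candidates, differing in one gendered keyword:
print(score(["chess club"]))          # frequent in the majority class
print(score(["women's chess club"]))  # penalized purely by historical skew
```

The model never sees a gender label, yet reproduces the bias anyway, because the keyword is a proxy for one, which is why auditing outcomes matters more than auditing inputs.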

As AI technology draws increasing attention and its applications proliferate, businesses that create or use such technology face a wide range of complex risks, from clear-cut reputation risk to rapidly evolving regulatory risk. At last week’s RIMS NeXtGen Forum 2019, litigators Todd J. Burke and Scarlett Trazo of Gowling WLG pointed to these ethical implications and complex, evolving regulatory requirements as key opportunities for risk management to get involved at every stage of AI development and deployment.

For example, Burke and Trazo noted that employees who interact with AI will need training to understand its application and outcomes. Where AI is deployed improperly, failure to train the employees involved and ensure best practices are followed in good faith could create legal exposure for the company. Risk managers with technical savvy and a long-view lens will be critical in spotting such liabilities for their employers, and potentially even helping to shape the responsible use of emerging technology.

To help risk managers assess the risks of AI in application or help guide the process of developing and deploying AI in their enterprises, Burke and Trazo offered the following “Checklist for AI Risk”:

  • Understanding: You should understand what your organization is trying to achieve by implementing AI solutions.
  • Data Integrity and Ownership: Organizations should place an emphasis on the quality of data being used to train AI and determine the ownership of any improvements created by AI.
  • Monitoring Outcomes: You should monitor the outcomes of AI and implement control measures to avoid unintended outcomes.
  • Transparency: Algorithmic decision-making should shift from the “black box” to the “glass box.”
  • Bias and Discrimination: You should be proactive in ensuring the neutrality of outcomes to avoid bias and discrimination.
  • Ethical Review and Regulatory Compliance: You should ensure that your use of AI is in line with current and anticipated ethical and regulatory frameworks.
  • Safety and Security: You should ensure that AI is not only safe to use but also secure against cyberattacks. You should develop a contingency plan should AI malfunction or other mishaps occur.
  • Impact on the Workforce: You should determine how the implementation of AI will impact your workforce.
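The “Monitoring Outcomes” item in particular lends itself to a simple control measure: track decision rates per group in production and alert when the gap crosses a tolerance. The threshold, group labels and data below are illustrative, not from the presentation:

```python
# Minimal outcome monitor: alert when approval rates diverge across groups.
TOLERANCE = 0.10  # illustrative tolerance for the outcome gap

def outcome_gap(decisions):
    """decisions: list of (group, approved) pairs from the live system.
    Returns the max-minus-min approval-rate gap and the per-group rates."""
    by_group = {}
    for group, approved in decisions:
        by_group.setdefault(group, []).append(approved)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical production sample: group A approved 80%, group B 60%.
decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 60 + [("B", False)] * 40)
gap, rates = outcome_gap(decisions)
if gap > TOLERANCE:
    print(f"ALERT: outcome gap {gap:.0%} exceeds tolerance ({rates})")
```

Running such a check continuously, rather than once at procurement, is what moves an organization from the “black box” toward the “glass box.”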

For more information about artificial intelligence, check out these articles from Risk Management: