RIMS TechRisk/RiskTech: Emerging Risk AI Bias

On the second day of the RIMS virtual event TechRisk/RiskTech, CornerstoneAI founder and president Chantal Sathi and advisor Eric Barberio discussed the potential uses for artificial intelligence-based technologies and how risk managers can avoid the potential inherent biases in AI.

Explaining the current state of AI and machine learning, Sathi noted that this is “emerging technology and is here to stay,” making it even more imperative to understand and account for the associated risks. The algorithms that make up these technologies feed off data sets, Sathi explained, and these data sets can contain inherent bias in how they are collected and used. While it is a misconception that all algorithms have or can produce bias, the fundamental challenge is determining whether the AI and machine learning systems that a risk manager’s company uses do contain bias.

The risks of not rooting out bias in your company’s technology include:

  • Loss of trust: If or when it is revealed that the company’s products and services are based on biased technology or data, customers and others will lose faith in the company.
  • Punitive damages: Countries around the world have implemented or are in the process of implementing regulations governing AI, attempting to ensure human control of such technologies. These regulations (such as GDPR in the European Union) can include punitive damages for violations.
  • Social harm: The widespread use of AI and machine learning includes applications in legal sentencing, medical decisions, job applications and other business functions that have major impact on people’s lives and society at large.

Sathi and Barberio outlined five steps to assess these technologies for fairness and address bias:

  1. Clearly and specifically defining the scope of what the product is supposed to do.
  2. Interpreting and pre-processing the data, which involves gathering and cleaning the data to determine if it adequately represents the full scope of ethnic backgrounds and other demographics.
  3. Most importantly, employing a bias-detection framework. This can include a data audit tool to determine whether any output demonstrates unjustified differential bias.
  4. Validating the results the product produces using open-source fairness toolkits, such as IBM AI Fairness 360 or Microsoft Fairlearn.
  5. Producing a final assessment report.
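To make step 3 concrete, here is a minimal sketch of one common bias-detection check: comparing a model's selection rates across demographic groups. Toolkits such as Fairlearn and AI Fairness 360 provide this measure (often called the demographic parity difference) as a built-in metric; the hand-rolled function and the data below are purely illustrative, not the toolkits' actual APIs.

```python
def selection_rate(predictions):
    """Fraction of positive (e.g. 'approve') decisions in a list of 0/1 predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two demographic groups.

    0.0 means every group is selected at the same rate; larger values
    flag a disparity worth auditing.
    """
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) and each applicant's group
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_difference(y_pred, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A large gap does not by itself prove unjustified bias (the differential may be explainable), which is why the speakers frame this as one input to a broader audit, followed by validation and a final assessment report.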

By following these steps, risk professionals can help ensure their companies use AI and machine learning without perpetuating the biases inherent in the underlying data.

The session “Emerging Risk AI Bias” and others from RIMS TechRisk/RiskTech will be available on-demand for the next 60 days, and you can access the virtual event here.

RIMS TechRisk/RiskTech: Opportunities and Risks of AI

On the first day of the RIMS virtual event TechRisk/RiskTech, author and UCLA professor Dr. Ramesh Srinivasan gave a keynote titled “The Opportunities and Downside Risks of Using AI,” touching on the key flashpoints of current technological advancement, and what they mean for risk management. He noted that as data storage has become far cheaper, and computation quicker, this has allowed risk assessment technology to improve. But with these improvements come serious risks.

Srinivasan provided an overview of where artificial intelligence and machine learning stand, and how companies use these technologies. AI is “already here,” he said, and numerous companies are using the technology, including corporate giants Uber and Airbnb, whose business models depend on AI. He also stressed that AI is not the threat portrayed in movies, and that these portrayals have led to a kind of “generalized AI anxiety,” a fear of robotic takeover or the end of humanity—not a realistic scenario.

However, the algorithms that support these technologies and govern many users’ online activities could end up being something akin to the “pre-cogs” from Minority Report that predict future crimes, because the algorithms are collecting so much personal information. Companies are using these algorithms to make decisions about users, sometimes based on data sets that are skewed to reflect the biases of the people who collected that data in the first place.

Often, technology companies will sell products with little transparency into the algorithms and data sets the product is built around. To avoid AI and machine learning products built with implicit bias, Srinivasan suggested A/B testing new products, using them on a trial or short-term basis, and applying them to a small subset of users or data to see what effect they have.

When deciding which AI/machine learning technology their companies should use, Srinivasan recommended that risk professionals map out what technology their company is using, weigh the benefits against the potential risks, and examine those risks thoroughly to understand what short- and long-term threats they pose to the organization.

Specific risks of AI (as companies currently use it) that risk professionals should consider include:

  • Economic risk in the form of the gig economy, which, while making business more efficient, also leaves workers with unsustainable income
  • Increased automation in the form of the internet of things, driverless vehicles, wearable tech and other ways of replacing workers with machines risks making labor obsolete.
  • Users do not benefit when people and companies use and profit off of their data.
  • New technologies also have immense environmental impact, including the amount of power that cryptocurrencies require and the health risks of electronic waste.
  • Issues like cyberwarfare, intellectual property theft and disinformation are all exacerbated as these technologies advance.
  • The bias inherent in AI/machine learning has real-world impacts. For example, court sentencing often relies on biased predictive algorithms, as do policing, health care facilities (AI giving cancer treatment recommendations, for example) and business functions like hiring.

Despite these potential pitfalls, Srinivasan was optimistic, noting that risk professionals “can guide this digital world as much as it guides you,” and that “AI can serve us all.”

RIMS TechRisk/RiskTech continues today, with sessions including:

  • Emerging Risk: AI Bias
  • Connected & Protected
  • Tips for Navigating the Cyber Market
  • Taking on Rising Temps: Tools and Techniques to Manage Extreme Weather Risks for Workers
  • Using Telematics to Give a Total Risk Picture

You can register and access the virtual event here, and sessions will be available on-demand for the next 60 days.

Inclusion Does Not Stop Workplace Bias, Deloitte Survey Shows

In Deloitte’s 2019 State of Inclusion Survey, 86% of respondents said they felt comfortable being themselves all or most of the time at work, including 85% of women, 87% of Hispanic respondents, 86% of African American respondents, 87% of Asian respondents, 80% of respondents with a disability and 87% of LGBT respondents. But other questions in the survey show a more troubling, less inclusive and less productive office environment, and may indicate that simply implementing inclusion initiatives is not enough to prevent workplace bias.

While more than three-fourths of those surveyed also said that they believed their company “fostered an inclusive workplace,” many reported experiencing or witnessing bias (defined as “an unfair prejudice or judgment in favor or against a person or group based on preconceived notions”) in the workplace. In fact, 64% said that they “had experienced bias in their workplaces during the last year” and “also felt they had witnessed bias at work” in the same time frame. A sizable number of respondents—including 56% of LGBT respondents, 54% of respondents with disabilities and 53% of those with military status—also said they had experienced bias at least once a month.

Listening to those who say they have witnessed or experienced bias is especially important. When asked to more specifically categorize the bias they experienced and/or witnessed in the past year, 83% said that the bias in those incidents was indirect and subtle (also called “microaggression”), and therefore less easily identified and addressed. Also, the study found that those employees who belonged to certain communities were more likely to report witnessing bias against those communities than those outside them. For example, 48% of Hispanic respondents, 60% of Asian respondents, and 63% of African American respondents reported witnessing bias based on race or ethnicity, as opposed to only 34% of White, non-Hispanic respondents. Additionally, 40% of LGBT respondents reported witnessing bias based on sexuality, compared to only 23% of straight respondents.

While inclusion initiatives have not eliminated bias, Deloitte stresses that these programs are important and should remain. As Risk Management previously reported in the article “The Benefits of Diversity & Inclusion Initiatives,” not only can fostering diversity and inclusion be beneficial for workers of all backgrounds, it can also encourage employees to share ideas for innovations that can help the company, keep employees from leaving, and insulate the company from accusations of discrimination and reputational damage.

But building a more diverse workforce is only the first step, and does not guarantee that diverse voices are heard or that bias will not occur. Clearly, encouraging inclusion is not enough, and more can be done to curtail workplace bias. Employees seeing or experiencing bias at work also has serious ramifications for businesses. According to the survey, bias may impact productivity—68% of respondents experiencing or witnessing bias stated that bias negatively affected their productivity, and 70% said bias “has negatively impacted how engaged they feel at work.”

Deloitte says that modeling inclusion and anti-bias behavior in the workplace is essential, stressing the concept of “allyship,” which includes “supporting others even if your personal identity is not impacted by a specific challenge or is not called upon in a specific situation.” This would include employees or managers listening to colleagues when they express concerns about bias and addressing incidents of bias when they occur, even if that bias is not apparent to them or does not affect them directly.

According to the survey, 73% of respondents reported feeling comfortable talking about workplace bias, but “when faced with bias, nearly one in three said they ignored bias that they witnessed or experienced.” If businesses foster workplaces where people feel comfortable listening to and engaging honestly with colleagues of different backgrounds, create opportunities for diversity on teams and projects, and most importantly, address bias whenever it occurs, they can move towards a healthier, more productive work environment.