Global Heat Waves Signal Climate Risks

India is currently suffering under a heat wave that has lasted over a month, with temperatures reaching a record 118 degrees Fahrenheit (48 degrees Celsius) in New Delhi on June 10 and 122 degrees (50 degrees Celsius) in the western city of Churu. The death toll has been estimated to be at least 36, though some sources put the number at more than 150. Europe is also preparing for its own massive heat wave this week, with temperatures expected to be 36 degrees Fahrenheit (20 degrees Celsius) higher than the seasonal average of 72 degrees (22 degrees Celsius).

This pattern of heat waves has become a yearly occurrence across the globe. Europe faced similar heat last year, as did Asia, where Japan’s record-breaking 2018 temperatures sent more than 71,000 people to hospitals and killed 138. North America also saw extended high temperatures in 2018, with 41 heat records set across the United States and heat-related deaths overwhelming Montreal’s city morgue.

Experts say that these global record-breaking incidents are the result of climate change, and likely forecast a new normal of dangerous summer heat. According to Stefan Rahmstorf, co-chair of Earth System Analysis at the Potsdam Institute for Climate Impact Research (PIK), “Monthly heat records all over the globe occur five times as often today as they would in a stable climate. This increase in heat extremes is just as predicted by climate science as a consequence of global warming caused by the increasing greenhouse gases from burning coal, oil and gas.” French national meteorological service Météo-France echoed these concerns, saying that the frequency of heat waves “is expected to double by 2050.” And according to a 2017 study in The Lancet Planetary Health, deaths from weather-related disasters could skyrocket in the future, killing as many as 152,000 people each year between 2071 and 2100, more than 50 times the average annual toll from 1980 to 2010.

As Risk Management has previously reported, these changes are already impacting business operations globally, with direct economic losses from climate-related disasters (including heat waves) increasing 151% from 1998 to 2017, according to the United Nations Office for Disaster Risk Reduction. Heat waves have serious effects on business operations, affecting road conditions and agriculture as well as workers’ health and safety. More than 15 million U.S. workers have jobs requiring time outdoors, and according to the World Bank, even indoor workers’ productivity declines by 2% per degree Celsius above room temperature.

Many countries have taken steps to mitigate the effects of heat waves on their populations. For example, since 2016, India has been providing shelter for homeless people, opening water stations for hydration, cutting building heat absorption by painting roofs white and imposing working hour changes, curfews and restrictions on outdoor activities. These efforts have successfully reduced heat-related deaths from more than 2,400 in 2015 to 250 in 2017.

The U.S. Environmental Protection Agency (EPA) recommends similar steps to the ones India is taking, as well as ensuring that energy and water systems are properly functioning, establishing hotlines for reporting cases of high-risk individuals and encouraging energy conservation to reduce the chances of overwhelming electric systems. The U.S. Occupational Safety and Health Administration (OSHA) recommends that employers and workers facing higher temperatures in the workplace watch closely for signs of heat stroke, and keep three words in mind: water, rest and shade.

While these on-the-ground measures can reduce the immediate effects on workers and vulnerable populations like the elderly, children and the homeless, PIK’s Rahmstorf warns that “Only rapidly reducing fossil fuel use and hence CO2 emissions can prevent a disastrous further increase of weather extremes linked to global heating.”

Inside a Business Email Compromise Operation

A new report from cybersecurity company Agari’s Cyber Intelligence Division outlines the operations of a business email compromise (BEC) gang in West Africa, showing that criminals engaged in BEC theft can maintain a diverse portfolio of online criminal activity to build their capabilities, and use sophisticated methods to scam victims including businesses and government agencies.

BEC is a cyberfraud tactic in which a scammer contacts a target with phishing emails imitating a fellow employee (often someone in the finance department or management), usually seeking to convince the victim to conduct a business transaction, typically a money transfer to an account the scammer controls. Scammers may also try to trick victims into clicking a link in an email or visiting a scam website, either of which could expose the victim’s online credentials or download malware onto the victim’s computer, giving the scammers access to the company’s network.

As Risk Management previously reported, Beazley Breach Response Services found that BEC-related attacks cost victims an average of $70,960, but the FBI’s Internet Crime Complaint Center has estimated that the total “revenues” of BEC attacks doubled in 2018 to $1.3 billion. BEC attacks are also extremely common—approximately two-thirds of IT executives are reportedly dealing with them.

Agari’s report, titled “Scattered Canary: The Evolution of a West African Cybercriminal Startup,” shows that cybercriminal gangs diversify their criminal schemes, using their established infrastructure from one type of scam to facilitate others. Agari researchers named the group Scattered Canary and compared it to a tech startup because of its recruitment and expansion strategy. Scattered Canary has pursued a variety of different criminal social engineering efforts, including:

  • Romance scams: Creating a fake online romantic relationship with a victim and requesting gifts, access to their bank or retirement accounts, or services related to other scams.
  • Check fraud: A scammer offers to purchase an item for more than its advertised price with a check (which is fraudulent), then requests that the seller send the extra amount to a third party (a fictional shipping company, for example).
  • Credential harvesting: Tricking victims into providing their online credentials, including log-in information for online financial services.

Agari says that Scattered Canary built up a network of members and the skills to easily transfer from one scheme to another. The group has used multiple BEC tactics over time, transitioning from tricking employees into carrying out wire transfers from their companies’ bank accounts to convincing victims to buy gift cards that scammers would then cash out via cryptocurrency exchanges. More recently, the group has targeted human resource departments to change the direct deposit information for a company’s executive, then cashed out the deposits using prepaid debit cards.

Businesses should train their staff at all levels to spot BEC and other types of online scams. Employees who can recognize phishing emails and websites, and who know not to click links or provide information in response to either, can protect companies from fraud and significant financial loss. In addition to training staff, the FBI suggests always verifying requests to send money, even urgent ones, by speaking directly to the apparent requester, either in person or by phone (using a previously known number, not one provided in the email). The FBI also suggests setting up filters that flag email addresses similar to the company’s own, and creating an email rule that marks messages coming from outside the company, among other technical steps.
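A lookalike-address filter of the kind the FBI describes can be prototyped in a few lines. The sketch below is a minimal illustration, not any vendor’s implementation; the company domain, similarity threshold and sample addresses are all hypothetical.

```python
from difflib import SequenceMatcher

COMPANY_DOMAIN = "example.com"  # hypothetical company domain

def is_lookalike(sender: str, threshold: float = 0.8) -> bool:
    """Flag sender domains that closely resemble, but do not exactly
    match, the company's own domain (e.g. 'examp1e.com')."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain == COMPANY_DOMAIN:
        return False  # exact match: a legitimate internal address
    similarity = SequenceMatcher(None, domain, COMPANY_DOMAIN).ratio()
    return similarity >= threshold

print(is_lookalike("ceo@examp1e.com"))   # lookalike domain -> True
print(is_lookalike("ceo@example.com"))   # exact match -> False
print(is_lookalike("bob@gmail.com"))     # unrelated domain -> False
```

In production, a rule like this would typically run in the mail gateway and quarantine or label suspect messages rather than block them outright, since near-miss domains can occasionally be legitimate partners.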

For more from Risk Management about controlling the risks of BEC and other social engineering fraud, check out:

RIMS NeXt Gen Forum Offers Insights for Rising Risk Professionals

“We’re becoming numb to the news,” said risk management veteran and author Joseph Mayo. “We’ve seen a 1,200% increase in daily record loss in the last five years. Globalization has created faster-moving and infinitely more complex risks and that’s what we have to adapt to.”

In his keynote, “Don’t Tell Me What I Know, Tell Me What I Don’t Know,” at last week’s RIMS NeXt Gen Forum 2019 for rising risk professionals, Mayo discussed environmental, social and governance (ESG) risk events and how they will continue to impact the risk management community, noting that ESG events increased 1,000% from 2010 to 2018 compared to each of the three prior decades.

(Hear a preview from his RIMScast interview.)

Despite flaws in actuarial approaches and the challenges surrounding artificial intelligence such as bias and adversarial machine learning, Mayo said that the profession’s outlook is “not all doom and gloom.”

“The future of risk management is to make decisions with incomplete, inaccurate and obfuscated information,” he said. “We will have to embrace fuzzy logic because decisions need to be made quicker. We no longer have decades to develop actuarial models.”

Shortly afterward, Robin Joines of Sedgwick and Kristy Coleman of Turner Broadcasting System hosted risk management “Jeopardy!” While not quite as fast-paced nor as well-funded as the long-running game show, the session provided a forum for discussion and debate on topics ranging from business travel etiquette and travel risk to communication and corporate politics. On the image people project when they cross their arms, for example, many agreed that it conveys rigidity, though one audience member cited a recent Wired video reporting that it can also be a method of self-soothing rather than a sign of hostility or reservation.

Joines and Coleman were open-minded in their scoring and even led a quick tongue twister that kept the atmosphere light and fun. “Final Jeopardy” focused on public speaking, offering some practical speech delivery tips that would benefit any professional. For example, Joines said, “Talk from your knowledge base, and not from your note cards, and you’ll come across as confident.”

The forum closed with “You are Your Brand – How to Distinguish Yourself in Your Career,” presented by Kathleen Crowe, chair of the RIMS Rising Risk Professionals Advisory Group, and Steve Pottle, RIMS vice president.

Despite their differences in age and experience, the duo explained how their careers followed similar patterns. Neither presenter had begun on a risk management track, with Pottle starting out as a budding Canadian radio personality and Crowe initially expecting to work for an incumbent U.S. senator. Taking career risks brought them into risk management, and they shared lessons from their respective journeys that ultimately influenced them to be active leaders in their organizations and the industry at large.

One key tip was to set a personal goal that aligns with an organization’s long-term strategy, which can be an early indicator of a transition to a leadership role. From there, they said, you can build your personal brand regardless of your industry.

“Your personal brand lies somewhere in between how you see yourself and how others see you,” Pottle said. 

Click here for more NeXt Gen Forum coverage on the “Legal Checklist for AI Risk.”

Assessing the Legal Risks in AI—And Opportunities for Risk Managers

Last year, Amazon made headlines for developing a human resources hiring tool fueled by machine learning and artificial intelligence. Unfortunately, the tool came to light not as another groundbreaking innovation from the company, but for the notable gender bias it had learned from its input data and amplified in the candidates it highlighted for hiring. As Reuters reported, the models detected patterns in the resumes of candidates from the previous decade and the resulting hiring decisions, but those decisions reflected the tech industry’s disproportionately male workforce. The program, in turn, learned to favor male candidates.

As AI technology draws increasing attention and its applications proliferate, businesses that create or use it face a wide range of complex risks, from clear-cut reputation risk to rapidly evolving regulatory risk. At last week’s RIMS NeXtGen Forum 2019, litigators Todd J. Burke and Scarlett Trazo of Gowling WLG pointed to these ethical implications and complex, evolving regulatory requirements as key opportunities for risk management to get involved at every point in the AI field.

For example, Burke and Trazo noted that employees who will interact with AI will need training to understand its application and outcomes. Where AI is deployed improperly, failure to train the employees involved to ensure best practices are followed in good faith could create legal exposure for the company. Risk managers with technical savvy and a long-view lens will be critical in spotting such liabilities for their employers, and potentially even helping to shape the responsible use of emerging technology.

To help risk managers assess the risks of AI in application or help guide the process of developing and deploying AI in their enterprises, Burke and Trazo offered the following “Checklist for AI Risk”:

  • Understanding: You should understand what your organization is trying to achieve by implementing AI solutions.
  • Data Integrity and Ownership: Organizations should place an emphasis on the quality of data being used to train AI and determine the ownership of any improvements created by AI.
  • Monitoring Outcomes: You should monitor the outcomes of AI and implement control measures to avoid unintended outcomes.
  • Transparency: Algorithmic decision-making should shift from the “black box” to the “glass box.”
  • Bias and Discrimination: You should be proactive in ensuring the neutrality of outcomes to avoid bias and discrimination.
  • Ethical Review and Regulatory Compliance: You should ensure that your use of AI is in line with current and anticipated ethical and regulatory frameworks.
  • Safety and Security: You should ensure that AI is not only safe to use but also secure against cyberattacks. You should develop a contingency plan should AI malfunction or other mishaps occur.
  • Impact on the Workforce: You should determine how the implementation of AI will impact your workforce.
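The “Monitoring Outcomes” and “Bias and Discrimination” items lend themselves to simple quantitative checks. The sketch below is a hypothetical illustration, not drawn from Burke and Trazo’s presentation: it computes per-group selection rates from a model’s decisions and the “four-fifths” disparate impact ratio commonly used as a first screening test for bias.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate; values
    below 0.8 fail the common 'four-fifths' screening rule."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, was_selected)
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
print(disparate_impact(outcomes))  # 0.2 / 0.4 = 0.5, flagging potential bias
```

A check like this only surfaces a disparity; investigating whether it reflects bias in the training data, as in the Amazon example above, still requires human review.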

For more information about artificial intelligence, check out these articles from Risk Management: