Guardrail Failure: AI bias is causing companies to lose revenue and customers

A new survey shows that 80% of U.S. businesses found problems despite having bias monitoring and algorithm tests in place.

Image: Shutterstock/Celia Ong

According to DataRobot, tech companies in the U.S. and U.K. have not done enough to avoid bias in artificial intelligence algorithms. This problem is already affecting these organizations in the form of lost customers and lost revenue.

DataRobot surveyed over 350 U.S. and U.K. technology leaders to learn how they are dealing with AI bias. The survey respondents were CIOs, IT directors and managers, data scientists, and development leaders who use or plan on using AI. This research was done in collaboration with the World Economic Forum (WEF) and global academic leaders.

In the survey, 36% of respondents said that AI bias has affected their organization in some way. The damage caused by AI bias in one or more algorithms was severe for many of these companies:

  • 62% lost revenue
  • 61% lost customers
  • 43% lost employees
  • 35% incurred legal fees due to a legal action or lawsuit

Respondents reported that their organization’s algorithms had inadvertently led to bias against a variety of groups.

  • Gender: 34%
  • Age: 32%
  • Race: 29%
  • 19%
  • Religion: 18%

Survey respondents were also asked their opinions about regulation. Surprisingly, 81% believe government regulation would help address two specific components of the problem: defining and preventing bias. At the same time, over half of the tech leaders fear that such regulations will increase costs and make AI harder to adopt, and about a third (32%) worry that regulation will cause harm to certain groups.

SEE: 5 questions you should ask about your AI and IoT projects

Emanuel de Bellis, a professor at the University of St. Gallen's Institute of Behavioral Science and Technology, noted in a press release that the European Commission's proposed regulation of AI could address these concerns.

"AI offers countless opportunities to businesses and offers a means of fighting some of the greatest issues of our time," de Bellis said. "AI also poses legal risks, including opaque decision-making (the black-box effect), discrimination (based on biased data or algorithms), privacy issues, and liability issues."

AI bias guardrails are failing

Companies recognize the risks of bias in algorithms and have taken steps to protect themselves: 77% of respondents said they had an AI bias test or algorithm test in place before discovering bias. U.S. organizations were ahead here, with 80% having AI bias monitoring and/or algorithm tests in place before discovering bias, compared with 63% of U.K. organizations.

U.S. tech executives are also more confident in their ability to detect bias: 75% said they could spot bias, compared with 56% of U.K. respondents.

Here are some steps companies are taking right now to detect bias:

  • Checking data quality: 69%
  • Training employees on AI bias and how to prevent it: 51%
  • Hiring an AI bias/ethics expert: 51%
  • Measuring AI decision-making factors: 50%
  • Monitoring when data changes over time: 47%
  • Implementing algorithms to detect and reduce hidden biases in training data: 45%
  • Introducing explainable AI tools: 35%
  • Not taking any action: 1%

Eighty-four percent of respondents said their companies plan to invest more money in AI bias prevention initiatives over the next 12 months. According to the survey, these actions include more budget for model governance, more hiring of AI trust managers, more advanced AI systems, and more explainability.
