Why Are Managers Paying Little Attention to the Responsible Use of AI?
08/04/2021
Ethics and AI: The Issues Explored
In a startling result, just 6% of executives said they ensure Artificial Intelligence (AI) is used ethically and responsibly by making development teams diverse. A new report from FICO and Corinium has found that many companies are deploying various forms of AI throughout their businesses with little consideration for the ethical implications or potential problems. The executives surveyed, serving in roles ranging from Chief Data Officer to Chief AI Officer, represent enterprises that bring in more than $100 million in annual revenue; they were asked how their companies ensure AI is used responsibly and ethically.
The implications for developing responsible and accountable AI systems in companies are significant. I have blogged before about ethics and AI, focusing on developing a framework for integrating ethical AI into organizational systems. In this blog, I will examine some of the most important issues surrounding the ethical use of AI and the organizational shortcomings that stand in the way.
According to the report, there have been hundreds of examples over the last decade of the many disastrous ways AI has been used by companies, from facial recognition systems unable to discern darker-skinned faces to healthcare apps that discriminate against African American patients to recidivism calculators used by courts that skew against certain races. Despite these examples, FICO's State of Responsible AI report shows business leaders are putting little effort into ensuring that the AI systems they use are both fair and safe for widespread use.
Almost 70% of respondents could not explain how specific AI model decisions or predictions are made, and only 35% said their organization made an effort to use AI in a way that was transparent and accountable. Just 22% responded that their organization had an AI ethics board that could make decisions about the fairness of the technology they used, and the other 78% said they were "poorly equipped to ensure the ethical implications of using new AI systems."
Nearly 80% said they had significant difficulty in getting other senior executives to even consider or prioritize ethical AI usage practices. Few, if any, executives truly understood the business and reputational risks associated with unfair, unethical, or mismanaged AI usage.
More than 65% said their enterprise had "ineffective" processes in place to make sure that all AI projects complied with any regulations, and nearly half called these processes "very ineffective."
The challenge, as I see it, is to build awareness of how AI interacts with organizational systems and to determine whether adequate internal controls exist to eliminate biases in the data and to prevent improper decision-making based on AI-produced information.
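To make "internal controls" concrete, here is a minimal sketch of one such check: a demographic-parity audit of model decisions. The table, its columns ("group" for a protected attribute, "approved" for a binary outcome), and the 0.8 threshold are hypothetical illustrations; the threshold echoes the common "four-fifths" rule of thumb, not a legal standard.

```python
# Minimal sketch of a bias control: compare approval rates across
# protected groups and flag a large gap. Column names are hypothetical.
import pandas as pd

def selection_rates(decisions: pd.DataFrame) -> pd.Series:
    """Approval rate per protected group."""
    return decisions.groupby("group")["approved"].mean()

def disparate_impact_ratio(decisions: pd.DataFrame) -> float:
    """Lowest group approval rate divided by the highest.
    Values well below 0.8 are a common red flag (the 'four-fifths' rule)."""
    rates = selection_rates(decisions)
    return rates.min() / rates.max()

# Toy decision log standing in for real model outputs
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
print(selection_rates(decisions))
print(f"Disparate impact ratio: {disparate_impact_ratio(decisions):.2f}")
```

A control like this, run automatically before decisions reach customers, is far cheaper than repairing the kind of reputational damage the report's examples describe.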
Writing for the CFO Network, Glenn Gow looks at the practices of Google, Facebook, and Microsoft and points out positive steps to reduce bias and enhance controls over AI data.
Here are some of the fundamental principles the article addresses:
- Fairness: AI systems should treat all people fairly and avoid creating or reinforcing unfair bias
- Inclusiveness: AI systems should empower and engage everyone and reflect cultural diversity
- Reliability and Safety: AI systems should perform reliably and safely to avoid unintended results
- Transparency: AI systems should be understandable and explainable (a first-pass explainability check is sketched after this list)
- Privacy and Security: AI systems should be secure, respect privacy, and provide safeguards for personal data
- Accountability: AI systems should have algorithmic accountability to enable appropriate human direction and control
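Transparency is the principle the survey suggests is furthest out of reach: almost 70% of respondents could not explain how specific model decisions are made. As a minimal first step, a team can at least report which inputs drive a model's predictions. The sketch below assumes a scikit-learn classifier trained on toy data; the feature names are hypothetical, and permutation importance is only a coarse, model-agnostic aid, not a full explanation.

```python
# Minimal sketch of an explainability aid: permutation importance
# reports which input features most influence a model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # toy data standing in for real inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven mostly by feature 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "noise"], result.importances_mean):
    print(f"{name:>7}: {score:.3f}")
```

Even a coarse report like this gives an ethics board or a reviewing executive something concrete to question, which is more than the 70% figure suggests most organizations can produce today.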
The article offers the following suggestions, which can be implemented now:
- Make the ethics of AI a board-level discussion
- Examine the many different risks associated with the ethics of AI
- Ensure senior executives understand how and where AI-based systems are being considered and implemented
- Develop an approach to building systems that considers ethical dilemmas before the system is built (a simple pre-deployment gate is sketched below)
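The last suggestion can be wired directly into the release process. Below is a minimal sketch of a pre-deployment gate, assuming a hypothetical review record; its fields (a fairness ratio like the one computed earlier, a documentation link, an ethics-board sign-off) and thresholds are illustrations that an organization's own ethics board would define.

```python
# Minimal sketch of a pre-deployment ethics gate: release is blocked
# unless the model's review record passes explicit, recorded checks.
from dataclasses import dataclass

@dataclass
class ModelReview:
    disparate_impact_ratio: float  # from a fairness audit like the one above
    explanation_doc_url: str       # link to a model card / explanation report
    ethics_board_signoff: bool     # recorded decision of the review board

def blocking_issues(review: ModelReview) -> list[str]:
    """Return a list of blocking issues; an empty list means the model may ship."""
    issues = []
    if review.disparate_impact_ratio < 0.8:  # four-fifths rule of thumb
        issues.append("fairness audit below 0.8 disparate-impact threshold")
    if not review.explanation_doc_url:
        issues.append("no model documentation / explanation report")
    if not review.ethics_board_signoff:
        issues.append("missing AI ethics board sign-off")
    return issues

review = ModelReview(0.72, "https://intranet.example/model-card/42", True)
for issue in blocking_issues(review):
    print("BLOCKED:", issue)
```

The point of the gate is not the specific thresholds but that the ethical questions are asked, and answered in writing, before deployment rather than after an incident.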
Senior managers often lack the knowledge required to spot ethical flaws in their organization’s AI, which, according to a Harvard Business Review article, puts “the company at risk, both reputationally and legally.” The example they give: when a product team is ready to deploy an AI system but must first get approval from an executive who knows little about its ethical risks, the brand’s reputation is in jeopardy. If executives do not adequately understand the ethical risks of AI, it is unlikely they will understand their new responsibilities or their importance.
By addressing the issues discussed in this blog now, companies will reduce the risk that AI-driven systems make or recommend decisions that imperil the company. The question is: Are companies aware of the reputational, regulatory, and legal risks associated with the ethics of their AI systems?
Organizations need to adopt AI principles as AI's use expands to cover major systems for gathering and processing data. AI needs to be better understood. It needs to be better controlled. Most important, AI risks must be better monitored to ensure transparency and accountability.
Posted by Dr. Steven Mintz, The Ethics Sage, on August 4, 2021. Steve is the author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/. Follow him on Facebook at: https://www.facebook.com/StevenMintzEthics and on Twitter at: https://twitter.com/ethicssage.