Ethics, AI, and Managing Ethics Risks

It’s Time for Companies to Have an Ethics Committee

I have blogged before about the use of artificial intelligence (AI) to solve ethical quandaries and the key to using AI intelligently. Given the attention I have given to this subject in the past, it occurs to me that organizations should have ethics committees to deal with these issues.

The Role of an Ethics Committee

Governance and accountability issues relate to who creates the ethics standards for AI, who governs the AI system and data, who maintains the internal controls over the data and who is accountable when unethical practices are identified. The internal auditors have a key role to play in this regard. They should assess risk, determine compliance with regulations and report their findings directly to the audit committee of the board of directors.

According to the Harvard Business Review, an ethics committee can be a new entity within the organization or an existing body, such as the internal auditors, that is assigned responsibility for overseeing ethics risks. A key function of the committee is to identify and mitigate, systematically and comprehensively, the ethical risks of AI products that are developed in-house or purchased from third-party vendors. When operational teams bring in an opportunity that would benefit from an AI solution, the ethics committee should ensure that the solution poses no serious ethical risks; recommend changes to it, if necessary, and give it a second review once those changes are adopted; or advise against developing or procuring the solution altogether.

Ethics Risks

We often hear that to mitigate bias, organizations must understand the workings of sophisticated, autonomous algorithms so they can take steps to eliminate unfair bias as those algorithms continue to evolve. Moreover, universal standards for fairness and trust should inform overall management policies for the ethical use of AI.

AI can improve human decision-making, but it has its limits. Bias in algorithms creates an ethical risk that calls into question the reliability of the data the system produces. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
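To make the idea of an auditable bias check concrete, the sketch below computes a simple demographic parity gap: the difference in favorable-outcome rates between two groups. This is one common fairness measure, not a method the post prescribes; the function names, example data, and review threshold are all hypothetical.

```python
# Hypothetical sketch of an auditable bias check:
# demographic parity gap = |P(outcome=1 | group A) - P(outcome=1 | group B)|

def positive_rate(outcomes):
    """Share of favorable (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in favorable-outcome rates between two groups."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Example: loan approvals (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")

# A hypothetical audit threshold; the actual cutoff is a policy decision
# for the ethics committee, not a technical constant.
THRESHOLD = 0.10
if gap > THRESHOLD:
    print("Flag for ethics committee review")
```

Because the check is a deterministic function of the decision log, rerunning it yields the same result (reproducibility), and the logged inputs and threshold give auditors a trail to inspect (auditability).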

Other ethical risks include a lack of transparency, erosion of privacy, poor accountability and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used and how the results affect customers.

Algorithmic Accountability Act

According to the Algorithmic Accountability Act that was introduced in the U.S. House of Representatives in 2019, high-risk automated decision systems include those that (1) may contribute to inaccuracy, bias, or discrimination; or (2) facilitate decision-making about sensitive aspects of consumers' lives by evaluating consumers' behavior. Further, an automated-decision system, or information system involving personal data, is considered high-risk if it (1) raises security or privacy concerns, (2) involves the personal information of a considerable number of people, or (3) systematically monitors a large, publicly accessible physical location. Ethics committees should be cognizant of these challenges in AI systems and develop a game plan to deal with them effectively and in an unbiased manner.

Assessments of high-risk automated-decision systems must (1) describe the system in detail, (2) assess the relative costs and benefits of the system, (3) determine the risks to the privacy and security of personal information, and (4) explain the steps taken to minimize those risks, if discovered. Assessments of high-risk information systems involving personal information must evaluate the extent to which the system protects the privacy and security of such information.
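The Act's criteria can be read as a screening checklist that an ethics committee applies at intake, before commissioning a full impact assessment. The sketch below encodes the criteria listed above; the field names and the example system are my own illustration, not language from the bill.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical intake record for an automated decision system."""
    may_cause_bias_or_discrimination: bool     # decision-system criterion (1)
    evaluates_sensitive_behavior: bool         # decision-system criterion (2)
    raises_privacy_or_security_concerns: bool  # information-system criterion (1)
    large_volume_of_personal_data: bool        # information-system criterion (2)
    monitors_public_physical_space: bool       # information-system criterion (3)

def is_high_risk(profile: SystemProfile) -> bool:
    """A system is high-risk if any one of the listed criteria applies."""
    return any([
        profile.may_cause_bias_or_discrimination,
        profile.evaluates_sensitive_behavior,
        profile.raises_privacy_or_security_concerns,
        profile.large_volume_of_personal_data,
        profile.monitors_public_physical_space,
    ])

# Example: a lending model that scores applicant behavior using
# large volumes of personal data.
lending_model = SystemProfile(
    may_cause_bias_or_discrimination=True,
    evaluates_sensitive_behavior=True,
    raises_privacy_or_security_concerns=False,
    large_volume_of_personal_data=True,
    monitors_public_physical_space=False,
)
print(is_high_risk(lending_model))  # True -> triggers a full impact assessment
```

A positive screen would then trigger the four assessment steps described above: describing the system, weighing costs and benefits, determining privacy and security risks, and documenting mitigations.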

The Council of Europe identifies a similar set of common ethical challenges for AI.

Google’s Cloud Unit

On September 6, 2021, it was reported that Google’s cloud unit had investigated using AI to help a financial firm decide whom to lend money to. After weeks of internal discussions, it turned down the client’s idea, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender.

Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system.

Panels of executives or other leaders curbed all these technologies, according to interviews with AI ethics chiefs at the three U.S. technology giants.

“There are opportunities and harms, and our job is to maximize opportunities and minimize harms,” said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for Responsible AI. “Judgments can be difficult.”

The bottom line is that AI is a nascent concern in organizations. Like other evolving matters, AI needs to be scrutinized because it can cause damage if not monitored closely. An ethics committee can do just that. I also recommend organizations appoint a Chief Ethics Officer to oversee the work of the committee and ensure it evaluates ethics risks as opportunities present themselves.

Blog posted by Dr. Steven Mintz, The Ethics Sage, on June 21, 2022. You can sign up for Steve’s newsletter and learn more about his activities on his website (https://www.stevenmintzethics.com/) and by following him on Facebook at https://www.facebook.com/StevenMintzEthics and on Twitter at https://twitter.com/ethicssage.
