Defining the Role of the Chief AI Ethics Officer
I’m not a big fan of acronyms. Companies use them quite freely. It's getting crowded in the C-Suite. There’s the CBDO (Chief Business Development Officer), COO (Chief Operating Officer), CMO (Chief Marketing Officer), CAO (Chief Accounting Officer), CTO (Chief Technology Officer), CRO (Chief Risk Officer), CFO (Chief Financial Officer), CAE (Chief Audit Executive), CEO (Chief Executive Officer), CCO (Chief Compliance Officer), CLO (Chief Legal Officer), CIO (Chief Information Officer), CCO (Chief Communications Officer), CDO (Chief Diversity Officer), and CHRO (Chief Human Resources Officer). Well, add a new one to the list: CAIEO (Chief AI Ethics Officer).
What Does the CAIEO do?
I recently read an article about the expanding role of the CAIEO and how the position is on the rise at leading enterprises as digital transformation becomes more complex and AI adoption grows rapidly across industries. One reason is that forward-looking companies are turning to the CAIEO role to put corporate values related to AI into operation across the organization's divisions. CAIEOs need to ensure that the AI technology being developed, used, and deployed is trustworthy, and that developers have the right tools, education, and training to easily embed these properties in what they produce.
I’m particularly impressed by the core ethical values identified in relation to the work of the CAIEO. A new report by the Global Future Council on AI for Humanity explores how to educate those in the CAIEO role, and others, to put AI fairness into operation across an organization. We can include issues related to diversity and inclusion as well.
There is no doubt that AI is booming in the world of technology and in its use by companies. It affects the lives of billions of people, transforming our society and creating many ethical challenges. It’s a new field of ethics, so we would all benefit from learning more about the intersection of ethics and AI.
Ethical Risks of AI…and Ethics
Alongside its positive effects, some AI applications raise legitimate concerns and risks. AI ethics crosses many disciplines and stakeholder interests, aiming to define and implement technical and non-technical solutions that address these concerns and mitigate the risks.
AI can improve human decision-making, but it has its limits. Bias in algorithms can create an ethical risk that brings into question the reliability of the data produced by the system. Bias can be accounted for through explainability of the data, reproducibility in testing for consistent results, and auditability.
Other ethical risks include a lack of transparency, erosion of privacy, poor accountability and workforce displacement and transitions. The existence of such risks affects whether AI systems should be trusted. To build trust through transparency, organizations should clearly explain what data they collect, how it is used and how the results affect customers.
AI solutions could, for example, unintentionally generate discriminatory outcomes because the underlying data is skewed towards a particular population segment. This could deepen existing structural injustices, skew power balances further, threaten human rights and limit access to resources and information.
An article from the World Economic Forum points out that some AI systems can behave like black boxes, offering little or no explanation of why they make their decisions. According to FICO's latest report on the State of Responsible AI, two-thirds (65%) of respondent companies can’t explain how specific AI-based decisions or predictions are made. This could erode trust in AI and thus hamper its adoption, reducing the positive impacts of this technology. It could also damage a company’s reputation and the trust of its clients, as well as contradict company values.
The main goal of a CAIEO is to make AI ethics principles part of operations within a company, organization or institution. A CAIEO advises and builds accountability frameworks for CEOs and boards on the unintended risks posed by AI to the organization. They should help companies comply with existing or expected AI regulations and oversee the implementation of many of the organization’s AI ethics governance and education functions.
I have blogged before about ethics and AI. CAIEOs should be aware of the basic principles of ethics and AI. The accounting firm KPMG identifies them as follows.
- Transforming the workplace: Massive change in roles and tasks that define work, along with the rise of powerful analytic and automated decision-making, will cause job displacement and the need for retraining.
- Establishing oversight and governance: New regulations will establish guidelines for the ethical use of AI and protect the well-being of the public.
- Aligning cybersecurity and ethical AI: Autonomous algorithms give rise to cybersecurity risks and adversarial attacks that can contaminate algorithms by tampering with the data. KPMG reported in its 2019 CEO Outlook that 72 percent of U.S. CEOs agree that strong cybersecurity is critical to engender trust with their key stakeholders, compared with 15 percent in 2018.
- Mitigating bias: Understanding the workings of sophisticated, autonomous algorithms is essential to take steps to eliminate unfair bias over time as they continue to evolve.
- Increasing transparency: Universal standards for fairness and trust should inform overall management policies for the ethical use of AI.
Ethics and Accountability
The “Algorithmic Accountability Act of 2019” was introduced in the U.S. House of Representatives on April 10, 2019 and referred to the House Committee on Energy and Commerce. The bill requires an assessment of the risks posed by automated decision systems to the privacy or security of personal information of consumers and the risks that the systems may result in or contribute to inaccurate, unfair, biased or discriminatory decisions impacting consumers.
Governance and accountability issues relate to who creates the ethics standards for AI, who governs the AI system and data, who maintains the internal controls over the data and who is accountable when unethical practices are identified. The internal auditors have an important role to play in this regard. They should assess risk, determine compliance with regulations and report their findings directly to the audit committee of the board of directors.
Corporate governance is essential to develop and enforce policies, procedures and standards in AI systems. Chief ethics and compliance officers have an important role to play, including identifying ethical risks, managing those risks and ensuring compliance with standards.
Governance structures and processes should be implemented to manage and monitor the organization’s AI activities. The goal is to promote transparency and accountability while ensuring compliance with regulations and that ethical standards are met.
A research study by Genesys found that more than one-half of those surveyed say their companies do not currently have a written policy on the ethical use of AI, although 21 percent expressed a definite concern that their companies could use AI in an unethical manner. The survey included 1,103 employers and 4,207 employees regarding the current and future effects of AI on their workplaces. The 5,310 participants were drawn from six countries: the U.S., Germany, the U.K., Japan, Australia and New Zealand. Additional results include:
- 28 percent of employers are apprehensive their companies could face future liability for an unforeseen use of AI.
- 23 percent say there is currently a written corporate policy on the ethical use of AI.
- 40 percent of employers without a written AI ethics policy believe their companies should have one.
- 54 percent of employees believe their companies should have one.
The ethical use of AI should be addressed by all organizations to build trust into the system and satisfy the needs of stakeholders for accurate and reliable information. A better understanding of machine learning would go a long way to achieve this result.
Professional judgment is still necessary in AI to decide on the value of the information produced by the system and its uses in looking for material misstatements and financial fraud. In this regard, the acronym GIGO (“garbage in, garbage out”) may be appropriate. Unless the data is reliably provided and processed, AI will produce results that are inaccurate, incomplete or incoherent, and machine learning would be compromised with respect to ethical AI.
Posted by Dr. Steven Mintz, The Ethics Sage, on October 20, 2021. Steve is the author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/. Follow him on Facebook at: https://www.facebook.com/StevenMintzEthics and on Twitter at: https://twitter.com/ethicssage.