Making AI Systems Accountable
The question of whether artificial intelligence can be used ethically and responsibly has drawn considerable attention lately, especially with regard to workplace ethics. AI should make our lives easier, but it should do so in a responsible, ethical manner.
A recent survey by Deloitte Insights evaluates the AI risks of greatest concern to companies. Over 50 percent of respondents are concerned about the cybersecurity vulnerabilities of AI. Other respondents are concerned about responsible ethics and decision-making, as follows:
- Making the wrong strategic decisions based on AI (43%).
- Legal responsibility for decisions/actions made by AI systems (39%).
- Regulatory noncompliance risk (37%).
- Erosion of customer trust from AI failures (33%).
- Ethical risks of AI (32%).
A new workplace survey by Genesys examines the views of employers and employees on AI ethics policies, potential misuse, liability and regulation. More than half of the employers questioned in the multi-country opinion survey say their companies do not currently have a written policy on the ethical use of AI or bots, and 21 percent expressed concern that companies could use AI in an unethical manner. The research shows that both employers and employees support increased use of AI-enabled technologies in the workplace. Millennials are the most apprehensive about the ethical use of AI; they worry about liability for AI misuse and unethical uses of AI-produced data.
A rough global consensus has emerged about the ethical principles that should govern AI. They tend to include values such as transparency, non-maleficence, justice, responsibility and privacy. A brief discussion of each follows.
Transparency. The ability to understand how an AI system reaches its decisions.
Non-maleficence. Never use AI to cause foreseeable or unintentional harm, including discrimination, violation of privacy, or bodily harm.
Justice. Monitor AI to prevent or reduce bias.
Responsibility. Those involved in developing AI systems should be held accountable for their work.
Privacy. An ethical AI system promotes privacy both as a value to uphold and a right to be protected.
Some of the moral issues to address include the possibility of hacking, protecting privacy rights, avoiding bias in the data produced by AI systems, and retaining human decision-making skills so a person can step in when AI systems are not producing what is intended.
An example of biased results is the case of David Heinemeier Hansson, who condemned Apple Card in a series of tweets for giving him a credit limit 20 times higher than his wife's, even though the couple files joint tax returns and his wife has the higher credit score. The New York Department of Financial Services is looking into allegations of gender discrimination against users of the Apple Card, which is administered by Goldman Sachs.
In cases like these, users must demand more transparency to better understand how the "black box" system works and who is responsible for monitoring the data inputs to ensure bias does not occur. "Trust but verify" is a good maxim for evaluating AI systems.
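To give a sense of what "monitoring the data" can mean in practice, here is a minimal sketch of one common check: comparing favorable-outcome rates across demographic groups and flagging a large gap (the "four-fifths rule" used in U.S. employment-discrimination analysis is one conventional yardstick). The data and the 0.8 threshold here are illustrative assumptions, not how Goldman Sachs or any regulator actually audits the Apple Card.

```python
def disparate_impact_ratio(outcomes, group_of, favored="approved"):
    """Ratio of the lowest group's approval rate to the highest's.

    By the four-fifths rule of thumb, a ratio below 0.8 is a red flag
    that reviewers should investigate further.
    """
    rates = {}
    for g in set(group_of):
        decisions = [o for o, grp in zip(outcomes, group_of) if grp == g]
        rates[g] = decisions.count(favored) / len(decisions)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical credit decisions for applicants in two groups, "A" and "B"
outcomes = ["approved", "approved", "denied", "approved",
            "denied", "denied", "approved", "denied"]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(outcomes, groups)
print(rates)   # per-group approval rates
print(ratio)   # well below 0.8 here, so this sample would be flagged
```

A check like this does not prove or disprove discrimination; it is only a screening statistic that tells an auditor where to look more closely.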
I have previously blogged about the “Algorithmic Accountability Act of 2019” that was introduced in the U.S. House of Representatives on April 10, 2019 and referred to the House Committee on Energy and Commerce. The bill requires an assessment of the risks posed by automated decision systems to the privacy or security of personal information of consumers and the risks that the systems may result in or contribute to inaccurate, unfair, biased or discriminatory decisions impacting consumers.
Governance and accountability issues are underlying concerns of the Act as follows:
- Who creates the ethics standards for AI?
- Who governs the AI system and data?
- Who maintains the internal controls over the data?
- Who is accountable when unethical practices are identified?
Internal auditors have an important role to play in this regard. They should assess risk, determine compliance with regulations, and report their findings directly to the audit committee of the board of directors.
AI is an evolving technology, and we will, no doubt, learn more about how these systems work, how they are controlled, and how corporations oversee them. In the meantime, companies should develop sound ethical principles, such as those above, as a way of identifying "best practices" in AI.
Posted by Steven Mintz, aka Ethics Sage, on May 7, 2020. Dr. Mintz recently published a book, Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior, that is available on Amazon. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/. Follow him on Facebook at: https://www.facebook.com/StevenMintzEthics.