The Key to Ethical AI is Transparency
06/16/2020
Trust and Verify
Artificial intelligence systems can distort data in ways that produce biased results, reflecting the human values of those who collect and enter the data at its source. For example, an algorithm developed from mortgage loan data might include a variable for the expected likelihood that homeowners will repay the debt. A bias may exist if one or more geographic areas are rated higher on the risk scale because of past repayment problems. Those areas may be home largely to minority groups. Should residents be disadvantaged simply because problems existed in the past, and with other individuals? In other words, should an applicant today be turned down for a loan because past residents of the same area had trouble with repayment?
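To make the concern concrete, here is a minimal sketch in Python of how one might test a loan model's outcomes for this kind of geographic bias using the widely cited four-fifths (disparate impact) rule of thumb. The column names (`zip_group`, `approved`) and the toy data are hypothetical; this illustrates the idea, not the practice of any particular lender.

```python
# Minimal sketch: checking loan decisions for disparate impact
# across geographic groups. Column names and data are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the ratio of the lowest group approval rate to the highest.

    A ratio below 0.8 (the "four-fifths rule") is a common red flag
    that one group is being disadvantaged by the model.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Toy data: 1 = loan approved, 0 = denied.
loans = pd.DataFrame({
    "zip_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":  [1,   1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact(loans, "zip_group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 here, well below 0.8
```

A ratio this far below 0.8 would prompt a closer look at which variables in the model are acting as proxies for location or group membership.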
The question of whether AI can be used in an ethical, responsible manner has received considerable attention lately, especially with regard to workplace ethics. AI should make our lives easier, but it should do so in a responsible, ethical manner.
Mauro Guillén writes that the recent scandal over the use of personal and social data by Facebook and Cambridge Analytica has brought ethical considerations to the fore. As AI applications require increasing amounts of data to help machines learn and perform tasks previously reserved for humans, companies are facing new public scrutiny. Tesla and Uber have scaled down their efforts to develop autonomous vehicles in the wake of widely reported accidents. Guillén asks: “How do we ensure the ethical and responsible use of AI? How do we bring more awareness about such responsibility, in the absence of a global standard on AI?”
A recent survey by Deloitte Insights evaluated the AI risks of greatest concern to companies. More than 50 percent of respondents cited cybersecurity vulnerabilities of AI. Others cited risks related to ethics and responsible decision-making, as follows:
- Making the wrong strategic decisions based on AI (43%).
- Legal responsibility for decisions/actions made by AI systems (39%).
- Regulatory noncompliance risk (37%).
- Erosion of customer trust from AI failures (33%).
- Ethical risks of AI (32%).
There is something of a global consensus about what the ethical principles of AI should be. They tend to include values such as transparency, non-maleficence, justice, responsibility, and privacy. A brief discussion of each follows.
Transparency. The ability to understand the decisions made by AI (illustrated in the sketch following this list).
Non-maleficence. Never cause foreseeable or unintentional harm using AI, including discrimination, violation of privacy, or bodily harm.
Justice. Monitor AI to prevent or reduce bias.
Responsibility. Those involved in developing AI systems should be held accountable for their work.
Privacy. An ethical AI system promotes privacy both as a value to uphold and a right to be protected.
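As a small illustration of the transparency principle, here is a sketch of one common approach: choosing an inherently interpretable model, such as logistic regression, so that each input's influence on a loan decision can be read directly from its coefficients. The feature names and data are assumptions made for the example.

```python
# Sketch: an interpretable loan model whose decisions can be explained
# feature by feature. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed"]
X = np.array([[55, 0.30, 4], [38, 0.55, 1], [72, 0.20, 9],
              [41, 0.45, 2], [65, 0.25, 7], [33, 0.60, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Each coefficient shows how a feature pushes the decision up or down,
# giving a human-readable basis for explaining (and auditing) outcomes.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.3f}")
```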
There needs to be full transparency and disclosure about how AI systems produce data and make decisions, how that data is used, and what the outcomes of such use are for the organization.
Internal auditors have a critical role to play in overseeing AI systems and ensuring they do what they are supposed to do: making sure AI is used ethically and responsibly, and that there is full transparency. This enhances trust in AI systems. Internal auditors should assess risk, determine compliance with regulations, and report their findings directly to the audit committee of the board of directors.
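What might such oversight look like in practice? Below is a hedged sketch of an append-only decision log that an internal audit team could review. The record fields and the checksum approach are illustrative assumptions, not a standard schema.

```python
# Sketch: an append-only log of AI decisions for internal audit review.
# Field names and values are assumptions made for illustration.
import datetime
import hashlib
import json

def log_decision(model_version, inputs, output, explanation,
                 logfile="ai_decisions.jsonl"):
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,
    }
    # Hash the record so later tampering is detectable during an audit.
    serialized = json.dumps(record, sort_keys=True).encode()
    record["checksum"] = hashlib.sha256(serialized).hexdigest()
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision(
    model_version="loan-model-1.3",          # hypothetical version tag
    inputs={"income": 55, "debt_ratio": 0.30},
    output="approved",
    explanation="income coefficient dominated the score",
)
```

A log like this gives auditors the raw material to verify that decisions, their inputs, and their explanations match what the organization has disclosed.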
I have previously blogged about the “Algorithmic Accountability Act of 2019,” which was introduced in the U.S. House of Representatives on April 10, 2019, and referred to the House Committee on Energy and Commerce. The bill requires an assessment of the risks that automated decision systems pose to the privacy or security of consumers’ personal information, and the risks that such systems may produce or contribute to inaccurate, unfair, biased, or discriminatory decisions affecting consumers.
Governance and accountability issues are underlying concerns of the Act as follows:
- Who creates the ethics standards for AI?
- Who governs the AI system and data?
- Who maintains the internal controls over the data?
- Who is accountable when unethical practices are identified?
Simply stated, the key to an ethical AI system is to enter unbiased data and to build in reliability, explainability, reproducibility, and auditability. These are some of the ethical values that should underlie the development and maintenance of AI systems, ensuring they can be trusted by users of the data and by those who may be directly or indirectly affected by decisions made with it, as in the mortgage loan example.
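As one final hedged illustration, here is a sketch of two simple habits that support the reproducibility and auditability values mentioned above: pinning random seeds so a model run can be re-created, and recording a fingerprint of the training data so an auditor can confirm the model was built from exactly the data that was disclosed. The seed value, file name, and manifest fields are assumptions for the example.

```python
# Sketch: making a model run reproducible and auditable.
# Seed values, file names, and manifest fields are illustrative assumptions.
import hashlib
import random

import numpy as np

SEED = 42
random.seed(SEED)      # pin Python's random number generator
np.random.seed(SEED)   # pin NumPy's random number generator

# Toy training data written to disk so the example is self-contained.
with open("loans.csv", "w") as f:
    f.write("income,debt_ratio,approved\n55,0.30,1\n38,0.55,0\n")

def dataset_fingerprint(path: str) -> str:
    """SHA-256 of the training data, recorded alongside the model so
    a later audit can verify which data the model was trained on."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

run_manifest = {"seed": SEED, "data_sha256": dataset_fingerprint("loans.csv")}
print(run_manifest)
```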
Posted by Steven Mintz, aka Ethics Sage, on June 16, 2020. Dr. Mintz recently published a book, Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior, that is available on Amazon. You can sign up for his newsletter and learn more about his activities at: https://www.stevenmintzethics.com/. Follow him on Facebook at: https://www.facebook.com/StevenMintzEthics.