Ethics & AI
This blog was first published on September 19, 2023, by Enrique Corro, a staff engineer who has been with VMware for 15 years.
The development and deployment of artificial intelligence (AI), particularly generative AI and large language models (LLMs), have opened up many opportunities for businesses, researchers, and society. However, these opportunities come with significant ethical considerations.
VMware Ethical Principles
VMware adopted a comprehensive set of ethical principles for AI in 2023 to help drive fairness, accountability, sustainability, and responsibility. Materializing ethical principles for AI into concrete processes and actions can be daunting. Fortunately, the NIST AI Risk Management Framework (AI RMF) provides sound guidance on building trustworthy AI systems through four core functions: Govern, Map, Measure, and Manage. While one size does not fit all, the framework is extremely useful for process planning. In addition to NIST’s AI RMF, our ethical principles for AI were also inspired by VMware’s EPIC2 values and our 2030 Agenda.
- Inclusiveness

Inclusiveness underscores VMware’s recognition that diversity breeds innovation. In the context of generative AI, diverse teams can contribute perspectives that help mitigate model bias, enabling the creation of systems that reflect the heterogeneity of human society.
- Fairness

Fairness in AI is not merely an ideal; it is a necessity. Generative models, such as large language models, are trained on vast amounts of data, which inherently might contain biases. When evaluating third-party LLMs, VMware considers their commitment to detecting and mitigating these biases at various levels, from organizational norms to computational processes. Ensuring that AI systems are free of racial, gender, and other forms of discrimination is crucial for maintaining trust, integrity, and respect for human rights.
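As one illustrative sketch (not VMware’s actual tooling), bias detection can start with something as simple as comparing favorable-outcome rates across demographic groups in an audit sample. The function and data below are hypothetical, and a real fairness audit would use far richer metrics:

```python
def demographic_parity_gap(outcomes):
    """Largest gap in favorable-outcome rates across groups.

    `outcomes` maps a group label to a list of binary model decisions
    (1 = favorable). A gap near 0 suggests similar treatment; a large
    gap flags the model for closer review.
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items() if d}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: favorable-decision indicators per group.
audit = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 5/8 favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favorable
}
print(f"demographic parity gap: {demographic_parity_gap(audit):.3f}")
```

A single number like this is only a screening signal; it cannot prove a model fair, but it can make disparities visible early in an evaluation.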
- Explainability and Transparency
Generative AI models are often considered ‘black boxes’ whose inner workings are challenging to understand. VMware focuses on explainability and transparency, developing ways to understand and audit model outputs. Practicing transparency fosters trust and enables users to understand why an AI system generated a particular outcome, making AI more accessible to non-experts.
- Reliability and Safety
The reliability and safety of AI systems are paramount for widespread adoption. VMware strives to develop systems that function as intended, consider potential risks, and implement control mechanisms. Generative AI involves continuous monitoring and validation and implementing risk management frameworks that reassess the AI system throughout its lifecycle.
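Continuous monitoring of a generative system often begins with automated checks on each response before it reaches a user. The sketch below is a minimal, hypothetical validation gate (the function name, limits, and term list are illustrative, not a description of any production control mechanism):

```python
def validate_output(text, banned_terms, max_len=2000):
    """Return a list of policy findings for one generated response.

    An empty list means the response passed every check; otherwise the
    findings can be logged for the monitoring pipeline and the response
    blocked or flagged for human review.
    """
    findings = []
    if len(text) > max_len:
        findings.append("response exceeds length limit")
    lowered = text.lower()
    for term in banned_terms:
        if term.lower() in lowered:
            findings.append(f"contains banned term: {term}")
    return findings

# Example: screen a response against a small hypothetical blocklist.
report = validate_output("Here is the internal roadmap.", ["internal roadmap"])
print(report)
```

In practice such rule-based gates are one layer among several (classifiers, human review, incident response), re-evaluated throughout the system’s lifecycle as the risk management framework prescribes.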
- Privacy and Security
Generative models often handle sensitive information. VMware’s guidelines demand adherence to privacy policies and the implementation of technologies that safeguard individual data. This is especially significant as generative AI can potentially make inferences about individuals, leading to legal ramifications. Recognizing AI models as sensitive data further extends the horizon of privacy considerations.
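One common safeguard is redacting obvious personal data from text before it is logged or used in training. The patterns below are deliberately narrow, hypothetical examples; real redaction pipelines need much broader coverage and review:

```python
import re

# Illustrative patterns only; production redaction requires far more
# coverage (names, addresses, phone numbers, locale-specific IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace matched personal data with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact jane@example.com, SSN 123-45-6789."))
```

Pattern-based redaction is a first line of defense, not a complete privacy program; it complements policy controls and access restrictions rather than replacing them.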
- Accountability

VMware continues to develop processes that define clear lines of responsibility for each stage of AI development. Building accountability into the development process ensures that ethics are not merely a checklist but a continuous commitment. In generative AI, accountability safeguards against misuse and ensures that the principles are applied consistently.
- Sustainability

AI’s environmental impact is an often-overlooked aspect. VMware’s commitment to net zero carbon emissions extends to AI development, aligning it with ecological responsibility. Sustainable practices in developing large-scale generative models can significantly contribute to the global sustainability agenda.
- Respect for Original Work Ownership
Intellectual property is central to innovation, and VMware upholds this by aligning AI development with proper legal considerations. This protects creative efforts and promotes a fair environment where innovation can flourish.
Summing it Up
VMware’s ethical principles for AI are designed to support a comprehensive and robust framework that will help drive fairness, accountability, sustainability, and responsibility. These principles resonate profoundly in the context of generative AI, where the convergence of technological innovation and ethical mindfulness can drive the responsible development of AI systems. Following these principles will help ensure that large language models and generative algorithms contribute to technological advancement and mirror the values and principles that hold our society together. We continuously strive to put them into action across the business to ensure that the future of AI will be ethically sound and aligned with human values and societal expectations.
Posted by Dr. Steven Mintz, aka Ethics Sage, on October 5, 2023. You can learn more about Steve’s activities by checking out his website at: https://www.stevenmintzethics.com/ and signing up for his newsletter.