The Manifesto for Ethical Excellence in Artificial Intelligence (EXAI)

  1. “So act as to treat humanity, whether in your own person or in another, always as an end and never as only a means.” —Immanuel Kant
    1. Our goal: to include what is broadly good for everyone and to exclude anything that is gravely harmful to anyone.
    2. We as a community are embracing an exciting new age, one with the grand opportunity to improve our use and understanding of a powerful tool: our digital tools are becoming intelligent enough to understand our desires and the ways we naturally communicate.
      1. Artificial Intelligence is a loaded term because it can imply so much more, namely a sentience that could do harm. The public unconscious is anxious about a technology so widespread that it may act with its own agency while not sharing our human values.
      2. Disparate cultures share a common interest in the proper use of AI and a common fear of its potential pitfalls. The bedrock of our situation is that emerging AI technology and advanced models are very powerful, posing both opportunities and legitimate challenges that should be addressed as an organized collective.
      3. This living Charter proposes simple tenets for Ethical Excellence in the human application of AI. Under these tenets, Artificial Intelligence (“AI”) will be treated as just that: a tool for people to use for society's benefit.
      4. A hammer is designed to build things, yet it can be misused through ineptitude or, worse, ill intent. It can be swung to hit a nail and accidentally strike a thumb; it can also be wielded as a blunt-force weapon. Whether by accident or on purpose, misusing tools as powerful as advanced models can cause great pain for shareholders, end users, employees, investors, and broader society.
  2. This is the harm EXAI seeks to mitigate. Two categories define all misuse of any tool:
    1. Ineptitude
      1. If a financial-services company hires a consultant to implement AI or oversee advanced models, and ends up with neither a return on the investment nor a transparent understanding of how to oversee the model(s) going forward, that monetary loss counts as ineptitude.
    2. Malignant Intent
      1. If a mobile game studio adept at applying AI fails to account for (or openly disclose) that its use of powerful models may drive addiction and related harms in end users, the studio may not be inept, but it misuses AI under the latter category: malignant intent.
    3. Both ineptitude and malignant intent are herein deemed unethical when applying a tool as powerful as AI to any organizational outcome, though for different reasons.
  3. This is to say that the Charter's tenets are adequate in themselves to offer a foundation on which to practice AI ethically, empowering professionals to immediately call out malignant practice and to better guide the inept in their AI applications.
    1. This living Charter herein sets a standard for AI as a tool that serves humans as ends, not means. Socratic discourse around the definitions of its tenets is encouraged, if only for the benefit of The Human: user and developer, employer and employee, and every other stakeholder involved.
    2. EXAI is an optimistic, educated, and prodigious community that seeks to implement and oversee models at the behest of The Human, not the other way around.
      1. “The proud person always wants to do the right thing, the great thing. But because he wants to do it in his own strength, he is fighting not with man, but with God.” —Søren Kierkegaard
  4. Ensure Transparency and Explainability
    1. Transparency in AI refers to the ability of users to understand how an AI system works and makes decisions.
    2. Leaders should be able to articulate simply to all stakeholders (including employees, investors, and end users) what their model does, its rationale, its opportunities, its risks, and its goals. This includes:
      1. Understanding the data: All relevant stakeholders should, upon general inquiry, be made aware of the type, quality, and sources of the data used to train the AI model.
      2. Knowing the algorithms: The algorithms or models used should be understandable to a reasonable degree, allowing all relevant stakeholders to grasp the underlying logic of the implementation.
      3. Understanding the decision-making process: The AI system should be able to provide explanations for its outputs.
      4. Open and clear disclosure of model outputs leaves the onus of the decision on the human decision-maker, not on the model itself, however powerful it may be. (A minimal disclosure sketch follows this list.)
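The Charter leaves the form of such disclosures open. As one possible illustration, here is a minimal sketch in Python of a disclosure record covering the points above; the schema, field names, and values are hypothetical assumptions for illustration, not requirements of the Charter.

```python
# Hypothetical disclosure record covering the transparency points above.
# The Charter does not prescribe a schema; every name here is an example.
from dataclasses import dataclass

@dataclass
class ModelDisclosure:
    purpose: str                # what the model does, in one plain sentence
    data_sources: list[str]     # type, quality, and origin of the training data
    logic_summary: str          # the underlying logic, in accessible terms
    known_risks: list[str]      # risks and limitations stakeholders should weigh
    accountable_owner: str      # the human who answers for the model's outputs

disclosure = ModelDisclosure(
    purpose="Ranks loan applications by estimated repayment likelihood.",
    data_sources=["Internal repayment records, 2015-2023, audited quarterly."],
    logic_summary="Gradient-boosted decision trees over 12 applicant features.",
    known_risks=["May underperform for applicants with thin credit histories."],
    accountable_owner="Jane Doe, VP of Credit Risk (hypothetical).",
)
print(disclosure)
```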
  5. Explainability is closely related to transparency, though the two are not synonymous.
    1. Companies often use complex legal language in their privacy policies to obscure the extent of their data collection and usage practices. Such data obfuscation is deemed unethical for its lack of transparency.
      1. Data obfuscation can be achieved through, but is not limited to, vague terms, excessive enumeration of purposes, default opt-out mechanisms, indefinite data retention, cross-device tracking, third-party data sharing, and frequent policy modifications.
      2. Data obfuscation makes it difficult for individuals to understand the full implications of their interactions with companies and the potential risks to their personal information.
    2. Ethical explainability is the ability to provide a clear and understandable explanation for a decision made by an AI system. This can be achieved through:
      1. Feature importance: Identifying the most influential factors that contributed to a decision gives users peace of mind and holds executives accountable for what they set out to do. (A minimal sketch follows this list.)
      2. Rule extraction: Regulatory compliance aside, every model should have a ruleset that is readily available and communicable to all relevant stakeholders.
      3. "If you can't explain it simply, you don't understand it well enough." —Albert Einstein
  6. Fortify Privacy and Data Security
    1. Privacy in AI refers to the protection of individuals' personal information from unauthorized access or disclosure. This includes data collected, processed, and used by AI systems.
    2. Data security in AI refers to the protection of data from unauthorized access, use, disclosure, disruption, modification, or destruction. This involves implementing cybersecurity measures to prevent data breaches and ensure data integrity.
    3. Data minimization: Organizations should collect and process only the data necessary for their stated, explainable goals.
    4. Data anonymization/pseudonymization: Organizations should transform data to remove or disguise personal identifiers. (A minimal pseudonymization sketch follows this list.)
    5. Access controls: Organizations should implement strong access controls to limit who can reach sensitive data.
    6. Encryption: Data should be encrypted to protect it from unauthorized access.
    7. Regular security assessments: Organizations should conduct regular security assessments to identify and address vulnerabilities.
    8. Incident response plans: Organizations should have written and communicable plans in place to respond to data breaches and other security incidents.
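As one illustration of the pseudonymization tenet above, here is a minimal sketch using only Python's standard library. Keyed hashing is one common technique among several, and the Charter does not prescribe it; the key value and record fields are hypothetical.

```python
# Keyed hashing (HMAC-SHA256) replaces a personal identifier with a stable
# pseudonym. This is pseudonymization, not anonymization: the same input
# always maps to the same token, so the key holder can still link records.
# The key should live in a secrets manager under strict access controls;
# destroying it breaks any future linkage.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"  # hypothetical placeholder

def pseudonymize(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # truncated for readability

record = {"user": pseudonymize("jane.doe@example.com"), "purchase_total": 42.50}
print(record)
```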
  7. Identify Decision-makers and Hold Them Accountable.
    1. Human decision-makers remain ultimately accountable for the outcomes. This is important and should be instilled as a basic premise for AI implementation.
    2. Human decision-makers decide to implement AI, and therefore are responsible for ensuring that AI is used in a way that aligns with societal values and avoids harmful consequences.
      1. If this premise is honored, a more equitable and understandable system can emerge, one that grows both profits and personal well-being.
  8. Risk Assessment and Mitigation: AI systems can introduce new risks and challenges.
    1. Human decision-makers are responsible for assessing these risks and taking appropriate measures to mitigate them, such as developing safety protocols or implementing safeguards.
    2. Legal and Regulatory Compliance: Human decision-makers are ultimately responsible for ensuring that AI systems comply with relevant laws and regulations. This includes understanding and adhering to privacy laws, data protection regulations, and other applicable legal frameworks.
    3. Learning and Improvement: Holding individuals and organizations accountable can encourage them to learn from their mistakes and improve their practices. This can help to ensure that AI systems are continuously evolving and improving.
    4. In essence, human decision-makers act as the final line of defense in AI systems. They are responsible for ensuring that AI is used ethically, responsibly, and in a way that benefits society.
  9. While this Charter does not and will not explicitly define “the greater good” or “holistic societal benefit,” better clarity can ensue if The Human acknowledges all consequences that AI and advanced implementations may bring, without seeking indemnification.
    1. Brighter outcomes will arise when an executive or decision-maker is steadfast in their understanding of accountability for decisions and allocation of resources.
    2. While AI can automate tasks and enhance decision-making, it is essential that The Human remains accountable for the outcomes and consequences of their use.
  10. There should be clear accountability for the development, deployment, and use of AI systems. No matter the function, advanced models and AI, especially at their most powerful, should be stamped with a legal human name, a human face, or a human signature, with that person bearing responsibility for the consequences of the system's actions, regardless of outcome. (A minimal sketch of such a stamp follows the quote below.)
    1. “It is absurd to make external circumstances responsible and not oneself, and to make oneself responsible for noble acts and pleasant objects responsible for base ones.” —Aristotle
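As a minimal sketch of the stamp described above, a deployment record could bind a named, accountable person to each model release. The schema and field names are hypothetical; the Charter prescribes the principle, not a format.

```python
# Hypothetical deployment record binding a legal human name to a model release.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AccountabilityStamp:
    model_name: str
    model_version: str
    responsible_person: str   # the legal human name bearing responsibility
    signed_at: str            # ISO-8601 timestamp of the sign-off

stamp = AccountabilityStamp(
    model_name="loan-ranker",          # hypothetical deployment
    model_version="2.4.1",
    responsible_person="Jane Doe",     # hypothetical accountable owner
    signed_at=datetime.now(timezone.utc).isoformat(),
)
print(stamp)
```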
  11. Changes or amendments to make
    1. Questions
    2. Suggested ideas, amendments, or counterproposals
    3. Miscellaneous Comments