Prohibited Uses

Every AI Product must explicitly declare prohibited uses — the contexts, applications, or practices in which the product must not be applied.
This ensures that AI Products are governed responsibly and that ambiguity cannot be exploited to justify misuse.


Why Prohibited Uses Matter

  • Ethics → Prevents harm from irresponsible deployment.
  • Governance → Provides clear boundaries for compliance enforcement.
  • Risk Mitigation → Protects organizations from reputational and legal exposure.
  • Transparency → Signals to consumers the contexts where use is unsafe or disallowed.
  • Trust → Strengthens credibility by acknowledging limitations and risks.

Prohibited Use Categories

  1. Legal and Regulatory Violations
    • Any use violating applicable laws, regulations, or contractual obligations.
  2. Human Rights Violations
    • Applications enabling surveillance, discrimination, or suppression of freedoms.
  3. High-Risk Domains Without Approval
    • Deployment in healthcare, finance, or defense without explicit certification.
  4. Malicious Applications
    • Use in generating disinformation, harmful deepfakes, or cyberattacks.
  5. Unfit Purpose
    • Use outside the declared Purpose & Intent.
    • Example: applying a consumer sentiment classifier to credit scoring.
  6. Security Risks
    • Uses that compromise the confidentiality, integrity, or safety of systems.
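The six categories above can be given a machine-readable form so that policy tooling can reference them consistently. The sketch below is illustrative — the enum and its member names are assumptions, not part of any mandated schema:

```python
from enum import Enum, auto

class ProhibitedUseCategory(Enum):
    """Illustrative encoding of the six prohibited-use categories.

    Member names are assumptions for this sketch; an organization would
    align them with its own Governance & Policy taxonomy.
    """
    LEGAL_REGULATORY_VIOLATION = auto()
    HUMAN_RIGHTS_VIOLATION = auto()
    HIGH_RISK_DOMAIN_UNAPPROVED = auto()
    MALICIOUS_APPLICATION = auto()
    UNFIT_PURPOSE = auto()
    SECURITY_RISK = auto()
```

Encoding the taxonomy once, rather than repeating free-text labels, lets enforcement and audit code agree on exactly which red line was crossed.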

Declaration Requirements

AI Products must declare:

  • Explicit List of prohibited uses.
  • Risk Category tied to Governance & Policy.
  • Enforcement Hooks → technical (access restrictions) or policy-driven (license terms).
  • Audit Evidence showing monitoring for prohibited use attempts.
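The four declaration requirements above could be captured in a small structured record. This is a minimal sketch, assuming a Python deployment context — the class name, field names, and `is_complete` check are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class ProhibitedUseDeclaration:
    # Explicit list of prohibited uses, stated in plain language.
    prohibited_uses: list[str]
    # Risk category tied to Governance & Policy (free-form in this sketch).
    risk_category: str
    # Enforcement hooks: technical (access restrictions) or policy-driven (license terms).
    enforcement_hooks: list[str]
    # Audit evidence: references to monitoring logs or reports of prohibited-use attempts.
    audit_evidence: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """A declaration is usable only when its three core fields are populated."""
        return bool(self.prohibited_uses and self.risk_category and self.enforcement_hooks)
```

A usage sketch: a declaration missing enforcement hooks would fail `is_complete()`, flagging it for review before the AI Product ships.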

Example

AI Product: Facial Recognition API

  • Prohibited Uses:
    • Law enforcement surveillance without judicial oversight.
    • Automated hiring or credit decisioning.
    • Use in contexts involving minors without explicit consent.
    • Integration with lethal autonomous weapon systems.
  • Enforcement Hooks:
    • Terms of service prohibiting such uses.
    • Access throttling for unapproved domains.
    • Audit monitoring with automated alerts.
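A technical enforcement hook like the ones listed above — blocking unapproved domains while producing audit evidence — might look like the following sketch. The allow-list contents, function name, and logger name are assumptions for illustration only:

```python
import logging

# Audit log for prohibited-use attempts; name is an assumption for this sketch.
logger = logging.getLogger("prohibited_use_audit")

# Hypothetical allow-list; a real system would load this from policy configuration.
APPROVED_DOMAINS = {"retail-analytics", "building-access"}

def check_request(domain: str, caller: str) -> bool:
    """Return True if the request may proceed; log an audit alert otherwise."""
    if domain in APPROVED_DOMAINS:
        return True
    # Audit evidence: every blocked attempt is recorded with its caller,
    # satisfying the "monitoring for prohibited use attempts" requirement.
    logger.warning("Blocked prohibited-use attempt: domain=%s caller=%s", domain, caller)
    return False
```

The design point is that the hook both denies the request and emits a record: enforcement without audit evidence would leave the declaration unverifiable.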

Summary

  • Prohibited uses define the red lines of an AI Product.
  • They prevent misuse in harmful, unethical, or high-risk contexts.
  • Declarations must be explicit, enforceable, and auditable.

Principle: An AI Product without declared prohibited uses risks harm by omission — silence is not neutrality.