
Examples

The following examples illustrate how AIPROD can be applied in practice across different classes of AI Products.
Each example demonstrates how the specification’s elements, from identity to governance, come together to define a true AI Product; a short illustrative code sketch follows each example.

Example 1: Legal Contract Summarizer (Generative AI Product)

  • Identity:

    • Product ID: urn:aiprod:legal-summarizer:v1.0.0
    • Owner: LexAI Governance Unit, Global LawTech Inc.
  • Purpose & Intent:

    • Purpose: Summarize legal contracts into plain-language abstracts.
    • Intent: Support corporate legal teams; not a replacement for licensed legal advice.
  • Capability Type: Generative (multilingual summarization).

  • Inputs & Outputs:

    • Input: Contract text (PDF, TXT).
    • Output: JSON summary + confidence score.
  • Lineage & Provenance:

    • Base Model: Fine-tuned LLaMA-3 70B.
    • Data: Licensed case law + proprietary annotated contracts.
  • Governance & Policy:

    • High-risk (legal).
    • Prohibited for consumer-facing contract validation.
  • Quality Metrics:

    • ROUGE-L ≥ 0.70, BLEU ≥ 0.55.
    • Error flagged if summary omits legally binding clauses.
  • Observability:

    • Provides rationale highlighting key clauses.
    • Logs 1% of anonymized contracts for audit.
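
The fields above could be captured in a single machine-readable manifest. The following is a minimal sketch as a plain Python dictionary; the key names mirror the bullets but are illustrative assumptions, not normative AIPROD keys.

```python
# Illustrative AIPROD-style manifest for the legal contract summarizer.
# Key names are assumptions mirroring the bullets above, not normative.
legal_summarizer_manifest = {
    "identity": {
        "product_id": "urn:aiprod:legal-summarizer:v1.0.0",
        "owner": "LexAI Governance Unit, Global LawTech Inc.",
    },
    "purpose_and_intent": {
        "purpose": "Summarize legal contracts into plain-language abstracts.",
        "intent": "Support corporate legal teams; not licensed legal advice.",
    },
    "capability_type": "generative",
    "inputs_and_outputs": {
        "input": {"content": "contract text", "formats": ["PDF", "TXT"]},
        "output": {"format": "JSON", "fields": ["summary", "confidence_score"]},
    },
    "lineage": {
        "base_model": "Fine-tuned LLaMA-3 70B",
        "data": ["licensed case law", "proprietary annotated contracts"],
    },
    "governance": {
        "risk_level": "high",
        "prohibited_uses": ["consumer-facing contract validation"],
    },
    "quality_metrics": {"rouge_l_min": 0.70, "bleu_min": 0.55},
    "observability": {
        "rationale": "highlights key clauses supporting each summary",
        "audit_log_sample_rate": 0.01,
    },
}
```

A registry or policy engine could validate such a manifest against the specification’s schema before the product is deployed.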

Example 2: Fraud Detection Classifier (Predictive AI Product)

  • Identity:

    • Product ID: urn:aiprod:fraud-detector:v2.1.0
    • Owner: Enterprise Risk Division, FinTrust Bank.
  • Purpose & Intent:

    • Purpose: Classify financial transactions as fraudulent or legitimate.
    • Intent: Assist fraud teams; not for automated denial of service without human oversight.
  • Capability Type: Predictive (binary classification).

  • Inputs & Outputs:

    • Input: Transaction record (Parquet schema).
    • Output: JSON fraud score + probability.
  • Lineage & Provenance:

    • Source Model: Gradient-boosted tree ensemble.
    • Data: 10 years of proprietary transaction logs.
  • Governance & Policy:

    • Classified as high-risk under the EU AI Act.
    • Quarterly bias audits required.
  • Quality Metrics:

    • AUROC ≥ 0.95.
    • False positive rate ≤ 2%.
  • Observability:

    • Provides SHAP values for feature-level explanations.
    • Real-time monitoring of subgroup accuracy.
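
The quality gates above can be checked mechanically at evaluation time. The following is a minimal sketch assuming scikit-learn and a labeled hold-out set; the threshold values come from the bullets, while the function name and decision threshold are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def passes_quality_gates(y_true: np.ndarray, y_score: np.ndarray,
                         decision_threshold: float = 0.5) -> bool:
    """Check the fraud detector's gates: AUROC >= 0.95 and FPR <= 2%."""
    auroc = roc_auc_score(y_true, y_score)   # ranking quality on the hold-out set
    y_pred = (y_score >= decision_threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    fpr = fp / (fp + tn)                     # share of legitimate transactions flagged
    return auroc >= 0.95 and fpr <= 0.02
```

Feature-level explanations for the tree ensemble could be produced separately, for example with a library such as shap (shap.TreeExplainer supports gradient-boosted trees).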

Example 3: Enterprise Research Assistant (Agentic AI Product)

  • Identity:

    • Product ID: urn:aiprod:research-agent:v0.9-beta
    • Owner: Knowledge Systems Lab, EduGlobal Consortium.
  • Purpose & Intent:

    • Purpose: Retrieve, synthesize, and draft literature reviews.
    • Intent: Academic research support; not intended for clinical or legal decision-making.
  • Capability Type: Agentic (multi-step reasoning, retrieval-augmented generation).

  • Inputs & Outputs:

    • Input: Research query.
    • Output: Structured report with references.
  • Lineage & Provenance:

    • Base Models: Embedding-based retriever paired with a GPT-family LLM.
    • Sources: Open-access corpora + curated academic databases.
  • Governance & Policy:

    • Moderate risk (academic support).
    • Prohibited for grading or high-stakes student evaluation.
  • Quality Metrics:

    • Relevance score ≥ 0.85 on benchmark queries.
    • Citations verifiable 95% of the time.
  • Observability:

    • Shows retrieval sources and confidence per reference.
    • Drift detection if citation accuracy falls below thresholds.
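
The citation-verifiability metric and its drift signal lend themselves to a simple runtime check. The following is a minimal sketch; verify_citation is a hypothetical helper that resolves a reference against the curated databases, and all names are illustrative.

```python
from typing import Callable, Iterable

CITATION_ACCURACY_FLOOR = 0.95  # "citations verifiable 95% of the time"

def citation_accuracy(citations: Iterable[str],
                      verify_citation: Callable[[str], bool]) -> float:
    """Fraction of a report's citations that resolve to a real, matching source."""
    citations = list(citations)
    if not citations:
        return 0.0
    return sum(1 for c in citations if verify_citation(c)) / len(citations)

def drift_alarm(recent_accuracies: Iterable[float]) -> bool:
    """Flag drift when rolling citation accuracy falls below the floor."""
    recent = list(recent_accuracies)
    return bool(recent) and sum(recent) / len(recent) < CITATION_ACCURACY_FLOOR
```

In production, the rolling window and alerting policy would be defined by the product's observability plan rather than hard-coded as here.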

Summary

  • Generative AI Products (e.g., summarizers) must balance capability with clear intent and limits.
  • Predictive AI Products (e.g., fraud classifiers) demand rigorous governance and fairness metrics.
  • Agentic AI Products (e.g., research assistants) raise novel issues of autonomy, orchestration, and provenance.

Principle: Examples demonstrate that AI Products are defined not by algorithms alone, but by their identity, governance, trust signals, and lifecycle accountability.