AIPCH07 — Trustworthy (via Trust Signals)
“Emits Trust Signals”
What AIPCH07 is really asserting
AIPCH07 is not asserting that:
“The AI Product is trustworthy.”
It is asserting that:
The AI Product continuously emits objective, measurable, and machine-consumable trust signals that allow independent verification of its performance, reliability, risk posture, and compliance — without relying on claims, documentation, or producer assurances.
Trust is not declared.
Trust is computed and observed.
The Essence (HDIP + AIPS Interpretation)
An AI Product is trustworthy if and only if:
- Trust is expressed through signals, not statements
- These signals are continuously updated at runtime
- They enable independent assessment by consumers, platforms, and agents
If trust depends on:
- documentation
- certifications
- manual reviews
- producer claims
then AIPCH07 is not met, even if those artifacts exist.
Trust Signals — What They Represent
Trust signals may include:
- performance metrics (accuracy, precision, recall, etc.)
- behavioral consistency
- drift indicators
- risk tier (R0–R4)
- compliance status (policy adherence)
- fairness and bias metrics
- explainability availability
- usage reliability (uptime, latency)
- incident and override history
👉 These are not optional attributes.
They are:
the observable evidence of product quality and responsibility
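The signal categories above can be sketched as a single structured record. This is a minimal Python sketch; the field names and types are illustrative assumptions, not fields prescribed by AIPS or the DPP schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative trust-signal record; names are assumptions, not an AIPS schema.
@dataclass
class TrustSignals:
    accuracy: float                   # performance metric
    drift_score: float                # drift indicator
    risk_tier: str                    # e.g. "R2" on the R0-R4 scale
    compliant: bool                   # policy-adherence status
    fairness_within_threshold: bool   # fairness/bias monitoring result
    uptime_pct: float                 # usage reliability
    p95_latency_ms: float             # usage reliability
    open_incidents: int               # incident/override history
    updated_at: datetime              # evidence the signals are current
```

Every field is a typed, bounded value that a consumer or agent can evaluate directly, which is the point of the list above.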
Positive Criteria — When AIPCH07 is met
AIPCH07 is met when all of the following are true:
1. Trust is expressed as measurable signals
The AI Product emits:
- quantifiable metrics
- structured signals
- time-series or state-based values
Examples:
- accuracy: 0.94
- drift score: 0.12
- risk tier: R2
- fairness metric: within threshold
Not:
- “high quality”
- “enterprise-grade”
- “compliant”
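The contrast between measurable values and marketing language can be made mechanical. A consumer-side sketch: accept quantified or enumerated values, reject free-text claims. The `is_measurable` helper and the controlled `R0-R4` vocabulary check are assumptions for illustration.

```python
# Controlled vocabulary: the only strings accepted as signal values here.
ALLOWED_TIERS = {"R0", "R1", "R2", "R3", "R4"}

def is_measurable(value: object) -> bool:
    """Accept quantified or enumerated values; reject free-text claims."""
    if isinstance(value, (bool, int, float)):
        return True
    return value in ALLOWED_TIERS  # strings only as a controlled vocabulary

# Signal values like the examples above pass...
assert all(map(is_measurable, [0.94, 0.12, "R2", True]))
# ...while marketing language does not.
assert not is_measurable("high quality")
```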
2. Signals are continuously updated
Trust signals:
- reflect current behavior, not historical snapshots
- update based on:
  - new data
  - usage patterns
  - model or system changes
- are not static or stale
3. Signals are independently accessible
Consumers and systems can:
- access trust signals via APIs or registry
- evaluate them without contacting the producer
- compare across products
This enables:
marketplace-style trust evaluation
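Because every product exposes the same signal fields, cross-product comparison needs no producer contact. A sketch, assuming a hypothetical registry snapshot keyed by product ID:

```python
# Hypothetical registry snapshot: uniform signal fields across products.
registry = {
    "product-a": {"accuracy": 0.94, "drift_score": 0.12},
    "product-b": {"accuracy": 0.91, "drift_score": 0.05},
}

def rank_by(signal: str, products: dict, reverse: bool = True) -> list[str]:
    """Order products by a shared trust signal, marketplace-style."""
    return sorted(products, key=lambda p: products[p][signal], reverse=reverse)
```

For example, `rank_by("accuracy", registry)` favors `product-a`, while ranking ascending on `drift_score` favors `product-b`.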
4. Signals are machine-interpretable
Trust signals:
- are structured (not free text)
- can be parsed and evaluated by agents
- support automated decision-making
This enables:
- PMDD scoring
- policy enforcement
- automated selection/composition
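One way such automated policy enforcement might look; the thresholds and the `POLICY` shape are illustrative assumptions, not values mandated by AIPS.

```python
# Illustrative policy; real thresholds come from the governing platform.
POLICY = {
    "min_accuracy": 0.90,
    "max_drift": 0.20,
    "allowed_risk_tiers": {"R0", "R1", "R2"},
}

def passes_policy(signals: dict) -> bool:
    """Automated enforcement is possible only because signals are structured."""
    return (
        signals["accuracy"] >= POLICY["min_accuracy"]
        and signals["drift_score"] <= POLICY["max_drift"]
        and signals["risk_tier"] in POLICY["allowed_risk_tiers"]
    )
```

An agent running this gate needs no human interpretation, which is exactly what free-text reports prevent.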
5. Signals reflect the full AI Product (not just components)
For composed AI Products:
- trust signals represent end-to-end behavior
- not just:
  - individual models
  - isolated components
This ensures:
product-level trust, not component-level illusion
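A small worked example of why signals derived from components are not enough; all figures are hypothetical.

```python
# Each component looks reliable in isolation (hypothetical figures).
component_uptime = {"model": 0.999, "retriever": 0.999, "orchestrator": 0.999}

# A naive serial-composition estimate, assuming independent failures:
derived = 1.0
for u in component_uptime.values():
    derived *= u  # ~0.997 for three components

# Only an observed end-to-end signal captures orchestration and integration
# failures that no per-component metric can see.
observed_end_to_end = 0.992  # hypothetical runtime measurement

assert observed_end_to_end < derived  # component signals overstate product trust
```

The gap between `derived` and `observed_end_to_end` is the "component-level illusion": product-level trust must be measured, not inferred.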
Negative Criteria — When AIPCH07 is not met
AIPCH07 is not met if any of the following are true:
❌ Trust is based on claims or documentation
Examples:
- “model is accurate”
- “system is compliant”
- certifications without runtime signals
These are assertions, not evidence.
❌ Metrics exist but are not exposed
Examples:
- internal dashboards only
- not accessible to consumers
- hidden within teams
This prevents independent evaluation.
❌ Signals are static or outdated
Examples:
- metrics updated once during training
- no runtime monitoring
- no drift tracking
This creates false confidence.
❌ Signals are not machine-readable
Examples:
- reports in PDF
- screenshots of dashboards
- unstructured descriptions
This prevents automation.
❌ Only component-level metrics exist
Examples:
- model accuracy is tracked, but:
  - orchestration behavior is not measured
  - end-to-end reliability is unknown
This gives only an incomplete picture of trust.
Edge Cases (Important Guidance for Agents)
Case 1: “Model evaluation metrics available (offline only)”
⚠️ Partial
Rationale:
- useful but insufficient
- lacks runtime trust
- does not reflect real-world behavior
Case 2: “Trust dashboard exists but not exposed via API”
⚠️ Partial
Rationale:
- human-visible
- not machine-consumable
- limits automation
Case 3: “DPP provides structured, real-time trust signals”
✅ Met
Rationale:
- signals are:
  - structured
  - accessible
  - continuously updated
- supports PMDD and governance
Case 4: “Trust signals derived only from components”
❌ Not met
Rationale:
- ignores system-level behavior
- misleading representation of trust
Evidence Signals an Agent Should Look For
Authoritative evidence:
- DPP (Digital Product Passport) with trust signal fields
- API endpoints exposing trust metrics
- time-series or state-based signal updates
Supporting evidence:
- drift detection outputs
- fairness/bias monitoring signals
- compliance status indicators
- incident or override logs
Red flags:
- reliance on documentation or certifications
- lack of runtime monitoring
- metrics hidden or inaccessible
- mismatch between claimed and observed behavior
How an Agent Should Decide
Decision rule (simplified):
If trust in the AI Product cannot be independently computed from continuously available, machine-readable signals, AIPCH07 is not met.
Why AIPCH07 Is Non-Negotiable
Without AIPCH07:
- trust becomes subjective
- governance becomes manual
- risk cannot be continuously managed
- composition becomes unsafe
AIPCH07 enables:
- continuous trust verification
- automated governance (policy + PMDD)
- safe reuse and composition
- marketplace-driven selection of AI Products
Canonical Statement (for AIPS)
AIPCH07 is satisfied only when an AI Product continuously emits structured, measurable, and machine-interpretable trust signals reflecting its performance, reliability, risk posture, and compliance, enabling independent and automated assessment without reliance on claims, documentation, or producer assurances.