AIPCH02 — Deployable
“Independently Deployable AI Capability via Self-Service”
What AIPCH02 is really asserting
AIPCH02 is not asserting that:
“An AI Product can be deployed to production.”
It is asserting that:
The AIPRO (or their domain) can define, compose, publish, and evolve the AI Product end-to-end using intent-driven self-service — without requiring engineering expertise or intervention.
This is about who holds agency over AI capability creation, not whether deployment pipelines exist.
The Essence (HDIP + AIPS Interpretation)
An AI Product is deployable if and only if:
- The domain actor (AIPRO) can move the product through its lifecycle
- All interaction is expressed in intent, behavior, and constraints
- Technical realization is fully abstracted and compiled by the platform
If creating or changing the AI Product requires:
- writing code
- designing pipelines
- selecting models
- configuring infrastructure
- orchestrating tools
then AIPCH02 is not met, even if deployment is automated.
Positive Criteria — When AIPCH02 is met
AIPCH02 is met when all of the following are true:
1. Lifecycle is self-service from the domain perspective
The AIPRO (or delegate) can:
- declare intent (what the AI should do)
- define behavior and expected outcomes
- specify constraints (risk, policy, safety)
- compose capability (including other AI/Data Products)
- trigger publish or update
- observe status and outcomes
All without:
- writing code
- designing models
- configuring pipelines
- selecting infrastructure
This aligns with HDIP:
Declare → Compose → Publish → Observe → Evolve
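The lifecycle above can be sketched as code from the AIPRO's point of view. This is a minimal illustration, not an AIPS API: the class and method names are invented, and the point is only that every call takes business-native inputs and nothing names a model, pipeline, or piece of infrastructure.

```python
from dataclasses import dataclass, field

@dataclass
class AIProductDraft:
    """Hypothetical self-service handle on an AI Product (Declare → Evolve)."""
    intent: str                                       # Declare: what the AI should do
    constraints: dict = field(default_factory=dict)   # risk, policy, safety
    composed_of: list = field(default_factory=list)   # other AI/Data Products
    status: str = "draft"

    def compose(self, product_ref: str) -> "AIProductDraft":
        # Compose: reference another product, not a pipeline or DAG.
        self.composed_of.append(product_ref)
        return self

    def publish(self) -> "AIProductDraft":
        # Publish: the platform, not the domain, compiles this into a
        # technical realization. The domain only triggers it.
        self.status = "published"
        return self

    def observe(self) -> dict:
        # Observe: status and outcomes, in domain terms.
        return {"status": self.status, "intent": self.intent}

draft = AIProductDraft(intent="detect fraud",
                       constraints={"risk_posture": "R3", "latency": "low"})
draft.compose("data-product:transactions").publish()
print(draft.observe())  # {'status': 'published', 'intent': 'detect fraud'}
```

Note what is absent: no model choice, no hyperparameters, no infrastructure. That absence is the criterion.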
2. Inputs are business-native, not technical
Valid inputs include:
- decision intent (e.g., “detect fraud”, “approve credit”)
- behavioral expectations (e.g., explainability, latency)
- risk posture (R0–R4 implications)
- policy selections (compliance, safety, usage boundaries)
- references to other AI/Data Products (composition)
- usage context and consumer expectations
Invalid inputs include:
- model types (e.g., “use XGBoost”, “use GPT-4”)
- pipeline definitions
- feature engineering logic
- orchestration graphs
- infrastructure parameters
Those belong to the platform.
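The valid/invalid split above can be enforced mechanically. The sketch below is illustrative (field names are assumptions, not AIPS vocabulary): a guardrail that accepts only business-native fields in an AI Product definition and rejects technical constructs.

```python
# Business-native fields the domain may supply (illustrative names).
BUSINESS_NATIVE = {"intent", "behavior", "risk_posture", "policies",
                   "composed_of", "usage_context"}
# Technical constructs that belong to the platform, never to the domain.
TECHNICAL = {"model_type", "pipeline", "features",
             "orchestration_graph", "infrastructure"}

def validate_inputs(spec: dict) -> list:
    """Return a list of violations; an empty list means the spec is business-native."""
    return [f"not a business-native input: {key}"
            for key in spec
            if key in TECHNICAL or key not in BUSINESS_NATIVE]

ok = validate_inputs({"intent": "approve credit", "risk_posture": "R2"})
bad = validate_inputs({"intent": "approve credit", "model_type": "XGBoost"})
```

Here `ok` is empty, while `bad` carries one violation: "use XGBoost" is an engineering decision leaking into the domain interface.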
3. Platform acts as a Product Factory Intelligence
The platform:
- selects models and techniques
- orchestrates workflows and agents
- provisions infrastructure
- binds policies and governance
- configures observability and evaluation
The domain never needs to know how this is done.
This is not tooling; it is:
an automated technology organization
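To make the factory role concrete, here is a hypothetical compile step: the platform turns a declared intent into a technical realization plan. Everything in the output dict is invented for illustration; the only claim being demonstrated is that this mapping happens inside the platform and the domain never sees it.

```python
def compile_intent(spec: dict) -> dict:
    """Hypothetical Product Factory step: business intent in, technical plan out.

    The returned plan is internal to the platform and never surfaces
    to the AIPRO.
    """
    risk = spec.get("risk_posture", "R0")
    return {
        "model_strategy": "platform-selected",        # models chosen by platform
        "workflow": ["ingest", "infer", "evaluate"],  # orchestration by platform
        "infrastructure": "auto-provisioned",
        "policies_bound": spec.get("policies", []),
        "observability": True,
        # Higher risk tiers automatically get a governance approval gate.
        "approval_gate": risk in ("R3", "R4"),
    }

plan = compile_intent({"intent": "detect fraud", "risk_posture": "R4"})
```

Note that the approval gate is derived from the declared risk posture, not requested as a technical setting: governance is compiled in, not configured by the domain.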
4. No engineering translation layer exists
The AIPRO:
- does not rely on engineers to “implement” intent
- does not hand off requirements for translation
- does not wait for technical backlog prioritization
There is no human compiler between intent and execution.
Negative Criteria — When AIPCH02 is not met
AIPCH02 is not met if any of the following are true:
❌ Deployment requires engineering involvement
Examples:
- “Raise a ticket for model deployment”
- “Ask ML team to retrain”
- “Engineering must approve pipeline changes”
Even if the turnaround is fast, this breaks deployability.
❌ Self-service interfaces expose technical constructs
Examples:
- UI asks users to:
  - choose model types
  - configure hyperparameters
  - design pipelines or DAGs
  - select compute resources
This is engineering disguised as UI, not self-service.
❌ AI Product lifecycle is controlled by engineers
Examples:
- engineers define model behavior
- retraining controlled by ML teams
- deployment tied to code releases
This is centralized AI delivery, not productization.
❌ Composition requires engineering orchestration
Examples:
- combining AI Products requires writing code
- chaining models requires pipelines
- agent orchestration requires scripts
This violates composability as a product capability.
Edge Cases (Important Guidance for Agents)
Case 1: “Low-code AI builder with model selection options”
❌ Not met
Rationale:
- user is still making technical decisions
- abstraction is incomplete
- domain must think like an ML engineer
Case 2: “Prompt-based systems without full lifecycle control”
⚠️ Partial
Rationale:
- intent expression exists
- but lacks:
  - governance binding
  - lifecycle control
  - deployment ownership
Case 3: “Intent-driven composition + one-click publish”
✅ Met
Rationale:
- domain expresses intent
- platform compiles and deploys
- no engineering dependency
This is the target state.
Case 4: “Human-in-the-loop deployment approval required”
⚠️ Context-dependent
Rationale:
- acceptable for high-risk (R3/R4) scenarios
- but:
  - should not require engineering intervention
  - should be governance-driven, not a technical dependency
Evidence Signals an Agent Should Look For
Authoritative evidence:
- intent-driven AI Product onboarding interface
- one-click publish/update capability
- no domain-authored code repositories
Supporting evidence:
- automatic generation of AIPDS and deployment artifacts
- platform-driven model and pipeline selection
- runtime provisioning without manual intervention
Red flags:
- Git repositories owned by domain teams
- deployment instructions involving engineers
- UI fields referencing models, pipelines, or infrastructure
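An assessing agent can weigh these signal classes mechanically. The sketch below is an assumption about how such weighing might work (signal names mirror the lists above; the verdict logic is invented): any red flag defeats the claim, and a verdict of "met" requires the full set of authoritative evidence.

```python
# Signal names paraphrase the evidence lists above (illustrative identifiers).
AUTHORITATIVE = {"intent_driven_onboarding", "one_click_publish",
                 "no_domain_code_repos"}
RED_FLAGS = {"domain_owned_git_repos", "engineer_deploy_instructions",
             "technical_ui_fields"}

def assess(signals: set) -> str:
    """Map observed evidence signals to a verdict on AIPCH02."""
    if signals & RED_FLAGS:
        return "not met"            # any red flag defeats the claim
    if AUTHORITATIVE <= signals:
        return "met"                # all authoritative evidence present
    return "inconclusive"           # gather more evidence

verdict = assess({"intent_driven_onboarding", "one_click_publish",
                  "no_domain_code_repos"})
```

Supporting evidence is deliberately absent from the verdict logic: it strengthens confidence but, on its own, neither establishes nor defeats AIPCH02.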
How an Agent Should Decide
Decision rule (simplified):
If the domain must think like an ML engineer or depend on engineering teams to create, modify, or deploy the AI Product, AIPCH02 is not met.
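The rule reduces to a two-input predicate. The flag names below are illustrative, not AIPS terms:

```python
def aipch02_met(requires_ml_thinking: bool,
                depends_on_engineering: bool) -> bool:
    """Simplified AIPCH02 decision rule.

    Fails if the domain must think like an ML engineer OR depend on
    engineering teams to create, modify, or deploy the AI Product.
    """
    return not (requires_ml_thinking or depends_on_engineering)
```

Both conditions must be absent; either one alone is disqualifying.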
Why AIPCH02 Is Non-Negotiable
Without AIPCH02:
- AI creation does not scale
- engineering becomes the bottleneck
- democratization becomes superficial
- innovation is constrained by capacity
AIPCH02 ensures that:
AI capability creation scales through platform intelligence, not human engineering effort
Canonical Statement (for AIPS)
AIPCH02 is satisfied only when an AI Product can be defined, composed, deployed, and managed end-to-end by the domain using intent-driven self-service, with all technical and infrastructural complexity fully abstracted and compiled by the platform.