Software Onboarding
Process Transformation
Discovery Findings, Recommendations, and Implementation Roadmap
Based on 14 stakeholder sessions and 35+ interviews across Architecture, Product, Security, Risk Management, Vendor Management, Finance, Legal, and Compliance teams, covering 11 governance domains with actionable 30/60/90/120-day implementation plans.
Agenda
Executive Overview
- Executive Summary
- End-to-End Workflow and Pain Points
- System Landscape and Integration Gaps
- Cross-Cutting Operating Models
- High-Level Roadmap (30/60/90/120)
Governance Domain Deep Dives
Each domain: Current State + Recommendations (merged)
- Compliance
- AI Governance
- Privacy
- Commercial Counsel
- Third-Party Risk Management
- SR 11-7 Model Risk Management
Future State Process Design
- Process Story & Forms
- Committee Voting
- Vendor Questionnaire Pre-Screening (SP0)
- RACI Matrix
- DMN Decision Tables
- Bottleneck Analysis
- Measurement Dashboard
Competitive Positioning
- Executive Summary
- Customer Technology Landscape
- What's Built
- Head-to-Head Comparison
- AI Opportunity Map
- Contract AI and AI Model Flexibility
- Committee Voting: Platform Comparison
- Governance and Regulatory Compliance
- Where Each Platform Wins
- Knowledge-Driven Governance Flywheel
- Analytics and Decision Intelligence
- Knowledge Base Architecture
- SLA Governance Ontology (Interactive)
- Persona-Based Intake Wizard (SP0)
- Continuous Learning & Feedback System
Executive Summary
Current State Understanding
The software onboarding process spans 6 to 9 months end-to-end, driven by 18 sequential committees, 5+ disconnected intake channels, and critical resource bottlenecks in Security and Legal. The START initiative (9 months old) created centralized awareness but did not integrate underlying team processes. Competitors with less mature processes achieve 60-90 day cycles.
Biggest Challenges
Sequential Reviews
Requesters present to ARB, TBC, AI Governance, and DART sequentially, with committee sessions 5+ weeks apart and overlapping scope.
Requester Burden
DART formation falls entirely on the requester, who must independently contact 5-6 teams, submit separate intake forms, and manage scheduling.
Resource Crisis
2 people negotiate 30+ contracts/month. Security is the primary SLA bottleneck. The Architecture team was recently reduced, and "half a person" owns the START process.
Highest-ROI Investment Areas
Parallel Evaluation
Replace 18 sequential committees with 5 parallel evaluation streams. Architecture Lead validated this model.
Unified Intake + Deal-Killer Gate
Consolidate 5+ intake channels. Block non-starters at day 1 before consuming reviewer capacity.
Contract Automation
Automate contract review for the 2-person team handling 30+/month. Identified as "Dumpster Fire #1" by TPRM Lead.
End-to-End Workflow: Current Pain Points
Pain points along the flow: 5+ intake channels, no prioritization formula, locked forms, a 75-day DD review, the #1 bottleneck in Security, a 2-week ARB SLA, a 60+ item AI queue, 2 contract negotiators, and contract cycles up to 1.5 years.
Critical Bottlenecks (by Impact)
| Bottleneck | Impact | Source |
|---|---|---|
| Contract Negotiation | Critical | 2 people / 30+ contracts monthly |
| Security Review Capacity | Critical | "Biggest bottleneck... reason our SLA takes 2 weeks" |
| Sequential Committees (18) | Critical | Same presentation repeated 3-4 times |
| DART Formation | Critical | Requester manages 5-6 teams independently |
| AI Governance Queue | High | 60+ items, 3 separate AI committees |
| Business Council Quorum | High | 2-3 of 8-10 members; now email voting |
What the Organization Does Well
Architecture Governance
Dedicated Governance Facilitator pre-screens artifacts, manages follow-ups, and runs ARB/SDRB. The most disciplined team interviewed.
TPRM Due Diligence
Reduced DD from 144 days to 75. Output doubled to 335 assessments/year. Vendor questionnaire completion 12 days ahead of target.
Acquisition 2.0
Brings all teams together at the start. Enables collective go/no-go at first tollgate. Acknowledged as time-consuming but necessary.
Staffing Gaps
| Function | Staff | Workload | Gap |
|---|---|---|---|
| Legal / Contracts | 2 | 30+/mo | Critical |
| Security Arch (AI) | ~1 | All AI reviews | Critical |
| Risk / DD | 8 | 335/yr | At capacity |
| Architecture | 2-3 | ARB + SDRB | Reduced |
| Vendor Mgmt | 6 (2 at 50%) | Full facilitation | Insufficient |
"If we want this to really click... I can't have the architect review group being a critical portion with only two people."
Risk Management Lead"I need three of me right now."
Security Architect, on AI review capacitySystem Landscape and Integration Gaps
The governance lifecycle spans 8+ specialized systems. Each team owns tooling optimized for their domain — none of these are changing.
| System | Owner | Function | Integration Status |
|---|---|---|---|
| Jira | Product / EA / Cyber | Technical SME task management, intake coordination | Bi-directional sync (built) |
| OneTrust | Risk / Compliance | Vendor risk assessments, TPRM questionnaires, monitoring | Standalone, manual PDF exports |
| Ariba | Sourcing / Procurement | NDAs, RFPs, vendor contracts, supplier management | API to Oracle only |
| Oracle | Finance | Financial analysis, budget approval, vendor payments | Manual hand-off from Ariba |
| AppFox / Confluence | Enterprise Architecture | Architecture review approvals, technical content | Plugin-based, no workflow integration |
| iManage | Legal | Contract drafting, redlining, version management | No downstream integration |
| Box | Legal | Executed contract storage (system of record) | Manual upload |
| ServiceNow | IT Operations | ITSM: incidents, changes, configuration items | No upstream integration |
| SLA (Camunda 8) | Governance | E2E process orchestration across all systems | Target platform (Jira live) |
Current: Manual Handoffs
PDF exports between OneTrust and ServiceNow. Manual Ariba-to-Oracle financial hand-off. No single audit trail spanning the full lifecycle across all 8+ systems.
Target: SLA Orchestration Layer
Camunda 8 coordinates across all systems via API. Single process instance tracks a request from intake through assessment, contracting, and go-live — with regulatory compliance evidence assembled automatically.
Concierge / Quarterback Model
Architecture's Governance Facilitator role provides the blueprint for end-to-end process orchestration across all domains.
What Works Today (Architecture)
Governance Facilitator
- Pre-screens all design artifacts
- Removes incomplete items from agenda
- Captures action items, manages follow-ups
- Runs ARB and SDRB
- JIRA integration for tracking
The most disciplined governance function identified across all 14 sessions.
Proposed E2E Extension
Process Quarterback
- Single point of contact for requesters (eliminates 5-6 team self-navigation)
- Automated DART formation (replaces requester burden)
- Quality gates at each phase boundary
- Status visibility and proactive notifications
- Cross-functional escalation authority
Eliminates the requester burden: "It's completely on the onus of the requester."
Simultaneous Engagement Model
Replace sequential DART formation with parallel engagement of all review streams. Architecture Lead's explicit recommendation.
Current State (Sequential)
Target State (Simultaneous)
3 Request Types and Distributed Pod Model
Request Type Routing
The current process treats all requests identically. v3 routes them through distinct paths based on request type.
| Type | Description | Process Path | Frequency |
|---|---|---|---|
| Defined Need | Business owner knows requirements, has vendor selected | Standard 6-phase | Most common |
| Forced Update | Existing vendor, product changes (on-prem to SaaS, EOL, new AI) | Re-evaluation path (skip intake, start at SP3) | Growing |
| Speculative / Exploratory | Advisory support, no sponsorship, generating interest | Idea funnel (pre-SP1), not standard process | Frequent, clogs pipeline |
Distributed Pod Model
Domain-specific pods controlling their own prioritization, meeting cadence, and workflow speed. Central team provides consistency.
| Pod | Controls | Central Team Provides |
|---|---|---|
| Cybersecurity | Prioritization, meeting frequency, review speed | Consistent SLA framework |
| Architecture | Technical review cadence, domain assignment | Artifact standards |
| Legal / Contracts | Contract template usage, negotiation approach | Risk appetite alignment |
| AI Governance | AI risk posture, review depth | Regulatory compliance |
| TPRM | Assessment methodology, vendor scoring | Cross-pod visibility |
Implementation Roadmap: 30 / 60 / 90 / 120 Days
The first 30 days focus on process consolidation and quick wins. Subsequent phases build automation, governance refinement, and organizational change.
Days 1-30: Consolidate
- ✓ Unified intake form replacing 5+ channels
- ✓ Deal-killer pre-screen gate and AI no-go list
- ✓ Completeness quality gate at submission
- ✓ 3 request types: Defined Need / Forced Update / Speculative
- ✓ 3-pathway routing: Buy / Build / Enable
- ✓ Quarterback role definition (Architecture Facilitator model)
- ✓ Simultaneous engagement rules defined
- ● NDA timing decision (Security + Legal alignment)
- ✓ Prioritization scoring with capacity impact (05-Priority Scoring)
Days 31-60: Automate
- Parallel evaluation replacing sequential committees
- Automated DART team formation
- Progressive forms (ask only what's needed per stage)
- Contract review automation pilot
- Security tiered assessment (baseline / elevated / major)
- AI governance consolidation (3 committees to 1 stream)
- Finance rework loop (avoid full restart)
- Workload visibility dashboard (pilot)
Days 61-90: Optimize
- AI fast-track pathway (2-week target vs 6-9 months)
- Enable pathway live (Vendor Affinity, skip funding validation)
- Time-bound conditional approvals
- Mandatory ownership assignment at onboarding
- NDA-first gate enforcement
- Automated security baseline checks
- Shift-left: self-service Vendor Questionnaire tools
- SLA enforcement with escalation ladder
Days 91-120: Scale
- Pre-onboarding idea funnel (feedback platform integration)
- Full workload dashboard across all teams
- Exception routing for rapid risk assessments
- Post-onboarding utilization tracking
- Distributed pod model pilot
- Annual ownership validation process
- Process mining and continuous improvement
- Executive KPI reporting
Intake
Current State
ServiceNow START, AI Use Case form, AI Governance form, Rapid Risk Assessment (Power Apps), email/chat. No unified routing.
First-time requesters struggle with complexity. "Nothing directly points folks to intake forms... discovered ad hoc" (Security Architect)
The RAE form has 80 questions. Business partners frequently cannot complete it accurately; questions are asked from the writer's perspective, not the user's.
Technology teams especially prone to bypassing standard processes. Business partners arrive with pre-selected vendors, skipping sourcing.
RACI: R Business (submits request) | A Business (owns request) | C Governance, Compliance
Recommendations
Unified Intake Gateway
Single entry point absorbing all channels. Dynamic routing based on request type (Standard, AI, ...).
Deal-Killer Pre-Screen
DMN-driven no-go check at submission. Inputs: vendor name, AI model, data residency. Blocked requests stop at day 1, before consuming reviewer capacity.
Completeness Quality Gate
AI-assisted pre-screening validates minimum viable fields before routing to SME review teams. Incomplete requests return to the requester.
Request Classification
Automated classification: Defined Need, Forced Update, or Speculative. Each type routes to the appropriate process path.
Prioritization
Current State
Teams "horse trade" internally. Each requestor views their request as most important. No force-ranking mechanism across the enterprise.
EVP support pushes other reviews down. No SLA enforcement possible without fundamental process fixes.
Monthly meetings draw 2-3 of 8-10 members. Evolved to email voting with manual facilitation by an executive.
No auto-approval for low-risk items. No de-prioritization guidelines for exception cases.
RACI: R A Governance | C Business, Finance, Technical Assessment
Recommendations
WSJF Scoring Formula
Weighted Shortest Job First: WSJF = Cost of Delay / Job Size. DMN-driven with 7 inputs (business impact, alignment, urgency, risk tier, capacity, cost of delay, job size). Produces continuous score for true queue ranking plus P1/P2/P3 tiers. Small urgent requests surface above large low-priority ones.
Tiered Governance Fast-Track
4-tier classification at intake: Express (renewals, low-risk, 2-3 days), Standard (known category, 2-3 weeks), Enhanced (new vendor/high-risk, 4-8 weeks), Board-Level (AI/ML, regulatory, 8-12 weeks). DMN-11 evaluates fast-track eligibility automatically.
Cost of Delay Quantification
Capture estimated $/week financial burn rate per request at intake. Regulatory deadline cliff detection auto-escalates time-critical items. Backlog dashboard shows accumulating cost in real-time.
Dynamic Re-Prioritization
Weekly automated queue re-evaluation via timer events. Triggered by capacity changes, new requests, deadline proximity, or strategy shifts. Quarterback reviews and confirms priority changes with delta indicators.
AI-Augmented Triage (Future)
Auto-classification at intake using historical approval patterns. Similar request matching suggests fast-track eligibility. Predictive cycle time estimation sets expectations at submission. Full explainability for audit (EU AI Act, SR 11-7).
WSJF Prioritization Deep-Dive
| Request | Impact | Urgency | Risk | Job Size | WSJF | Tier |
|---|---|---|---|---|---|---|
| A: Urgent patch | 6 | 9 | 3 | 3 | 60 | P1 |
| B: Large platform | 8 | 5 | 4 | 13 | 13 | P2 |
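Both table rows reproduce under one plausible reading of the formula. This is a minimal sketch assuming Cost of Delay is the sum of the impact, urgency, and risk scores, with the result scaled by 10; neither assumption is stated above:

```python
def wsjf(impact: int, urgency: int, risk: int, job_size: int) -> int:
    """WSJF = Cost of Delay / Job Size, assuming CoD = impact + urgency + risk
    and a x10 scaling so scores land on the 0-99 band used in 05-Priority Scoring."""
    cost_of_delay = impact + urgency + risk
    return round(10 * cost_of_delay / job_size)

print(wsjf(6, 9, 3, 3))    # 60: Request A, urgent patch   -> P1
print(wsjf(8, 5, 4, 13))   # 13: Request B, large platform -> P2
```

The small urgent patch outranks the large platform purely because job size divides the score, which is exactly the "small urgent requests surface above large low-priority ones" behavior described above.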
Funding / Finance
Current State
Cannot reroute coding matrix issues to FP&A. Minor cosmetic corrections (dates, alignment) require full denial and restart.
The financial form shows 2024 in 2026. It is locked and the password is unknown (the owner left the organization), so requesters download it, complete it offline, and upload it to a shared folder.
Vendor Affinity products require no organizational investment, but the process still requires funding justification. Creates unnecessary friction.
Hard to navigate. Business case justification required multiple times across different forms.
RACI: R A Finance | C Governance, Procurement | I Oversight
Recommendations
Finance Rework Loop
Add correction pathway for coding matrix issues. Minor fixes route directly to FP&A without full denial and restart.
Enable Pathway Bypass
Vendor Affinity requests skip funding validation entirely. DMN routing detects the "no org investment" case automatically.
Modernized Financial Form
Replace locked, outdated form with dynamic digital version. Pre-populate from intake data. Conditional fields ask only what applies.
Consolidated Business Case
One business case captured at intake, enriched progressively. Eliminates repeated justification across different forms.
Sourcing
Current State
RAE completion target: 14 days. Actual: 28-29 days (2x target). The RAE assigns the inherent risk tier and determines the DD level.
Vendor questionnaire: skip logic enabled. Vendor completion: 30 days (ahead of the 42-day target). Internal review: 75 days (down from 144).
The sourcing team manages the contract lifecycle; true sourcing activity falls on requesters.
Business partners bypass competitive sourcing, arriving with pre-selected vendors. Weakens pricing leverage.
RACI: R Procurement | A Governance | C Compliance | I Oversight
Recommendations
NDA-First Gate
Enforce NDA execution before detailed vendor engagement. A single discussion is allowed before an NDA is required.
Shift-Left: Self-Service Vendor Questionnaire
Empower requesters with structured RFP tools before formal onboarding. Codify sourcing knowledge for self-service use.
Tiered Due Diligence
Risk tier determines DD depth. Low-risk: automated checks only. Medium: abbreviated review. High/critical: full due diligence.
Vendor-Level Aggregation
Single vendor spanning 10+ business units shares vendor-level compliance artifacts. Per-request assessment covers only request-specific scope.
Cybersecurity
Current State
Understaffed security architecture team. The primary reason ARB SLA takes 2 weeks. EA could "get our SLA way down" without security constraints.
Request volume increasing continuously, specifically for AI reviews. Current process cannot scale with existing staffing.
Teams don't know minimum security controls. Enforcement is "fairly loose." Only a fraction of systems covered by identity management.
Technology risk management, cybersecurity, and third-party risk management each contact the vendor independently. Creates redundancy and vendor frustration.
RACI: R A Technical Assessment | C Governance, Compliance
Recommendations
Tiered Security Assessment
DMN-driven. Inputs: risk tier + data classification + AI component. Baseline: automated checks only.
Consolidated Vendor Contact
Single coordinated vendor engagement for security, replacing 3 independent contact points. One questionnaire per vendor.
Security Baseline Definition
Define minimum control requirements (the "secure by design" standard). Publish baseline, elevated, and major tiers.
Parallel Evaluation (not Sequential)
Security runs concurrently with Architecture, Compliance, and Financial reviews. Architecture Lead validated this model.
Enterprise Architecture
Current State
Dedicated facilitator pre-screens all designs, removes incomplete artifacts from agenda, manages follow-ups. Runs both ARB and SDRB.
Requesters present, the workflow starts, and reviews run with security, EA leaders, and distinguished architects. SDRB: same-day if no issues.
Every design has a JIRA ticket. Facilitator documents notes and action items. Approvers ask questions in tickets.
Cursor AI for diagram generation. Custom AI agent scans design documents for pattern conformance.
RACI: R A Technical Assessment | C Governance
Recommendations
Scale the Governance Facilitator Model
The Architecture Facilitator role is the blueprint for the broader end-to-end "quarterback." Expand it across all governance domains.
Define Clear Committee Scope
Architecture questions asked by architects only. TBC covers business case. AI Governance covers AI risk.
Simultaneous Engagement Model
"When a request comes in, simultaneous engagement of all major players that have a vote." Replaces sequential DART formation.
Domain-Based Auto-Assignment
Automated routing based on requesting domain. EA leader to architect by bandwidth. Cross-enterprise requests route to distinguished architects.
Compliance
Current State
PII found in uncontrolled systems. Compliance enforcement is "fairly loose." No consistent standard for escalation triggers.
Risk, Legal, Privacy, and Compliance each conduct their own review without shared findings or coordinated assessment.
TBC business case overlaps with financial review: architecture teams assess financials (outside their expertise) while financial teams assess architecture.
No structured format for tracking contract deviations. Unknown compliance status for older contracts.
RACI: R A Compliance | C Governance, Oversight
Recommendations
Consolidated Compliance Stream
Single compliance review stream combining Risk, Legal, Privacy, and Compliance assessments. Shared findings, coordinated assessment.
Regulatory Annotation Framework
Every process task mapped to applicable regulations: OCC 2023-17, DORA, GDPR/CCPA, SOX, and the EU AI Act.
Contract Deviation Tracking
Structured format for recording and reporting contract deviations in OneTrust. Automated alerting on expiring deviations.
Compliance Quality Gates
Phase-boundary compliance checks at each transition. Automated validation that required artifacts are complete before proceeding.
AI Governance
Current State
Multiple tools submitted for the same function with no alignment to AI strategy. "Crawl, walk, run" messaging rejected by stakeholders.
AI Risk Working Group, AI Cyber Review, AI Risk Review, AI Governance Committee. Working Committee "somewhat redundant with TBC process." Sequential processing adds months.
Causes extended vendor negotiations. External firms frequently push back on AI terms. Different teams have varying risk acceptance thresholds. EU AI Act landscape "ever-changing, different every other day."
"I have no idea why they're there... I'm really annoyed that they're even in existence." Working to merge into single dataset.
RACI: R A AI Review | C Technical Assessment, Compliance | I Governance
Recommendations
Consolidate 3 AI Committees to 1 Stream
Single AI governance review stream replacing Working Group, Cyber Review, Risk Review, and Governance Committee reviews.
AI Fast-Track Pathway
Pre-defined risk posture for common AI scenarios. AI tools run AI Governance + Security in parallel.
Unified AI Questionnaire
Merge 3 "snuck up" questionnaires into single dataset. One vendor engagement, one questionnaire, one dataset.
AI No-Go List
Communicate non-starter models/vendors early. Enterprise Risk Management decision matrix for immediate no-go calls.
Privacy
Current State
Privacy SME participates in vendor risk reviews, AI governance, and compliance. No dedicated privacy review stream in the current process.
End-of-year compliance failures include PII discovery outside managed environments. Data classification not consistently enforced at onboarding.
Different teams have varying risk acceptance thresholds for AI-related privacy concerns. No unified privacy risk classification.
Many RAE questions about data hosting, storage, and transmission could be resolved through proper sourcing events before formal assessment.
RACI: R A Compliance (Privacy) | C Governance, Business
Recommendations
Data Classification at Intake
Mandatory data classification fields in the unified intake form. Determines privacy review depth.
Privacy Review as Parallel Stream
Dedicated privacy evaluation branch running concurrently with security and compliance. Not embedded in other teams' reviews.
Unified Privacy Risk Classification
Single privacy risk framework replacing team-by-team thresholds. DMN-driven: inputs include data classification and AI involvement.
Early Data Residency Screening
Data hosting, storage, and transmission requirements validated at sourcing stage. Prevents late-stage rework.
Commercial Counsel
Current State
Manual review process sustained for 4 years at unsustainable levels. Contract lifecycle management system requested for 5-6 years without funding.
Security exhibits drive the longest negotiations. No reportable format for contract deviations. Unknown compliance status for older contracts.
Legal bottleneck forces sourcing team into areas beyond their expertise. Creates risk and potential liability.
Legal capacity hasn't scaled with organizational growth. Solutions often underused after purchase.
RACI: R A Contracting | C Finance, Governance, Compliance | I Oversight
Recommendations
Contract Review Automation
AI-assisted contract review for standard clauses, redlining, and deviation detection. Human reviewers focus on the clauses requiring judgment.
Standardized Contract Templates
Pre-approved MSA/SOW templates by vendor tier and risk classification. Reduces negotiation scope to genuine deviations.
Parallel Contracting
Begin contract negotiation in parallel with due diligence (not sequentially). Architecture Lead also recommended this.
Contract Deviation Register
Structured tracking of all contract deviations with risk classification, expiry dates, and automated alerts.
Third-Party Risk Management
Current State
ServiceNow (intake), Ariba (contracts), OneTrust (risk), Oracle (AP). Data lives in 4+ systems with manual PDF exports between them.
"Somebody needs to be empowered to say I own this... we will not allow you to do it any other way." START assigned "half a person."
No clear ownership for incident communication. Same questions asked multiple times. "We're all doing the same thing to be helpful."
Business Owner identified. Vendor Owner tracked by TPRM team of 6. Product/Tech Owner weakly tracked, often different from requesting team.
RACI: R A Governance | C Procurement, Contracting, Technical Assessment, Compliance | I Oversight
Recommendations
Empowered Process Owner
Dedicated strategic owner with authority and resources: 1 workflow manager, 2-3 project managers.
Mandatory Ownership Assignment
Business Owner, Technical Owner, and Support Owner assigned at onboarding completion. Annual ownership validation keeps assignments current.
Integrated System of Record
Connect ServiceNow, Ariba, OneTrust, and Oracle into unified workflow. Eliminate manual PDF exports.
Incident Response Coordination
Single vendor contact point for security incidents and data breach notifications. Clear ownership for incident communication.
SR 11-7
Model Risk Management
Federal Reserve SR 11-7 is the foundational supervisory guidance for Model Risk Management in U.S. banking. Our governance framework provides ~95% coverage across all four pillars through executable BPMN processes, DMN decision tables, and structured Camunda forms.
Model Inventory & Risk Classification — SR 11-7 §II
Model Inventory Register
Camunda 8 JSON form capturing all SR 11-7 §II required fields per model.
Identification
Model ID (AIMDL-YYYY-NNN), name, purpose, version
Classification
9 model types, 5 development types, 6 lifecycle states
Ownership
Named model owner, developer/vendor, business unit, validator
Risk & Validation
DMN-9 risk tier, EU AI Act class, validation dates, baselines
DMN-9: AI Risk Tier Classification
FIRST hit policy, 15 rules, 5 input dimensions scored 1–10.
Input dimensions: Materiality, Capital, Complexity, Sensitivity, Level.
Tier 1
Any score ≥9 or high combos. Challenger model required. Monthly + continuous monitoring.
Tier 2
Any score ≥5 or moderate combos. Standard validation. Quarterly + semi-annual.
Tier 3
All scores 1–4. Self-assessment with documentation. Annual review.
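A minimal Python sketch of the tier assignment just described, assuming only the headline "any score" rules; the actual DMN-9 table has 15 rules under a FIRST hit policy, including combination conditions not reproduced here:

```python
def dmn9_tier(scores: dict[str, int]) -> str:
    """FIRST hit policy: rules are evaluated top-down; the first match wins.
    Inputs: five dimensions (materiality, capital, complexity, sensitivity,
    level), each scored 1-10."""
    values = scores.values()
    if any(v >= 9 for v in values):   # Tier 1: challenger model, monthly + continuous monitoring
        return "Tier 1"
    if any(v >= 5 for v in values):   # Tier 2: standard validation, quarterly + semi-annual
        return "Tier 2"
    return "Tier 3"                   # all scores 1-4: self-assessment, annual review

print(dmn9_tier({"materiality": 3, "capital": 9, "complexity": 2,
                 "sensitivity": 4, "level": 1}))  # Tier 1
```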
model-inventory-register.form • DMN-9-ai-risk-tier-classification.dmn • legacy-model-inventory-backfill.bpmn
Independent Model Validation — SR 11-7 §IV
2-lane BPMN process (AI Review + Governance) enforcing validator independence from model developers.
Validation Process Flow
Three Validation Components
A. Conceptual Soundness
Model theory, design assumptions, limitations. Explainability assessment for Tier 1. Findings rated Critical / Major / Minor / Observation.
B. Process Verification
Internal models: data quality, implementation testing, integration, process controls.
Vendor models: SOC 2 Type II, model cards, vendor validation reports, API test results.
C. Outcomes Analysis
Backtesting, benchmarking, sensitivity analysis. Establishes accuracy and fairness baselines for Phase 8 drift monitoring.
sr11-7-independent-validation.bpmn
AI Model Monitoring Loop — SR 11-7 §V
4-lane BPMN (Automation, AI Review, Governance, Oversight) with DMN-8 driven cadence and event-based triggers.
Monitoring cycle: automated drift metrics (KS, PSI, JS), fairness metrics, and human analysis.
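As an illustration of the automated drift metrics named above, a minimal Population Stability Index (PSI) sketch over binned score distributions; the binning scheme and the example numbers are assumptions, not values taken from DMN-8:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions.
    Each list holds per-bin proportions summing to 1.
    PSI = sum((actual_i - expected_i) * ln(actual_i / expected_i))."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]      # score distribution at validation time
current  = [0.30, 0.27, 0.23, 0.20]      # distribution observed this cycle
print(round(psi(baseline, current), 4))  # 0.0235: a small shift
```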
DMN-8 Drift Thresholds
| Risk Tier | Watch | Warning | Action | Fairness |
|---|---|---|---|---|
| High + AI | 5% | 10% | 15% | 10% |
| Limited + AI | 8% | 15% | 20% | 10% |
| Minimal + AI | 15% | 25% | 30% | 15% |
| Unacceptable | 3% | 5% | 8% | 5% |
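A minimal sketch of how the DMN-8 thresholds above could map an observed drift percentage to a monitoring status for a given risk tier; the tier keys and status names mirror the table, but the function itself is illustrative, not the shipped decision table:

```python
# (tier) -> (watch, warning, action) drift thresholds in percent, from DMN-8 above
DRIFT_THRESHOLDS = {
    "High + AI":    (5, 10, 15),
    "Limited + AI": (8, 15, 20),
    "Minimal + AI": (15, 25, 30),
    "Unacceptable": (3, 5, 8),
}

def drift_status(risk_tier: str, drift_pct: float) -> str:
    """Map observed drift to the escalation level for the model's risk tier.
    Three or more consecutive ActionRequired cycles feed the retire decision
    described below."""
    watch, warning, action = DRIFT_THRESHOLDS[risk_tier]
    if drift_pct >= action:
        return "ActionRequired"
    if drift_pct >= warning:
        return "Warning"
    if drift_pct >= watch:
        return "Watch"
    return "Continue"

print(drift_status("High + AI", 12.0))  # Warning
```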
Model Status Decision
Continue Monitoring
Loop-back to DMN-8 cadence for next cycle
Retire Model
3+ ActionRequired cycles, regulatory prohibition, or Board decision
Suspend Model
Immediate cessation for critical accuracy or fairness failures
ai-model-monitoring-loop.bpmn • DMN-8-monitoring-cadence-assignment.dmn (17 rules, 7 output columns)
SR 11-7 Artifact Inventory & Coverage
| SR 11-7 Section | Artifact(s) | What It Covers | Coverage |
|---|---|---|---|
| §II Model Inventory | model-inventory-register.form, legacy-model-inventory-backfill.bpmn, DMN-9-ai-risk-tier-classification.dmn | 30+ field registration form, one-time backfill for pre-governance models, 5-dimension risk classification (15 rules) | 95% |
| §III Model Use | All governance BPMNs, 8 DMN decision tables | DMN-first design: business logic externalized in versioned decision tables, not embedded in gateway conditions. Audit-ready. | 90% |
| §IV Validation | sr11-7-independent-validation.bpmn, ai-governance-review.bpmn | 3-part validation (A/B/C), vendor vs. internal branching, Tier 1 challenger evaluation, quality gate, governance approval | 95% |
| §V Monitoring | ai-model-monitoring-loop.bpmn, DMN-8-monitoring-cadence-assignment.dmn | Cadence-driven loop, automated drift/bias detection, 4-level escalation, champion-challenger, board MRM reporting, retirement | 95% |
End-to-End Process Orchestrator
Hierarchical model with 9 collapsed sub-processes, decision gateways, vendor pool with message flows, NDA gate, governance committee call activities, and 3 request type routing (Defined Need, Forced Update, Speculative).
Request and Triage
The front door to onboarding. Requesters describe their need, existing solutions are checked for reuse, documentation is gathered, and requests are triaged, classified, and routed.
Key Decisions
- Bypass Gate: Can an existing solution fulfill the need?
- Completeness Gate: Automated validation before classification
- Request Type: Defined Need → NDA → Planning | Forced Update → Fast-track | Speculative → Idea Funnel
Forms & Data Collected
- Review Existing — catalog search, reuse decision, cost avoidance
- Gather Documentation — business case, data classification, budget authorization
- Completeness Gate — 6 automated validation checkpoints
- Initial Triage — strategic alignment score, risk indicators, duplicate detection
- Classify Request — type determination driving downstream routing
Planning and Routing
Validated requests are analyzed for strategic fit, scored for priority, and routed to Buy, Build, or Enable pathways via DMN decision tables.
Key Decisions
- Needs Backlog? Does the request need full backlog prioritization or can it proceed directly?
- Pathway Routing (DMN): Automated Buy / Build / Enable pathway assignment based on risk tier, complexity, and strategic alignment
- Priority Scoring (DMN): Composite score driving resource allocation
Forms & Data Collected
- Preliminary Analysis — business impact, risk appetite alignment, DPIA screening, capacity impact score, vendor affinity check
- Backlog Prioritization — strategic value, urgency, resource availability, priority tier assignment
Risk and SME Assessments
The most complex phase. Five parallel assessment streams evaluate simultaneously — eliminating the sequential bottleneck that currently takes 75 days.
Tech Architecture
Scalability, integration, enterprise standards
Security
Encryption, MFA, pen testing, SOC 2, incident response
Risk & Compliance
GDPR, OCC 2023-17, DORA, data residency
Financial
TCO, ROI, budget authorization, funding source
Vendor Landscape
Market research, shortlist, viability scoring
Evaluation: Forms and Data Architecture
The heaviest data collection phase: 8 structured forms capture 147+ fields across security, risk, financial, vendor, and AI governance dimensions.
Security Assessment
25 fields across 6 groups
- Security tier classification (6 levels)
- Encryption at rest/transit, key management
- Vulnerability scan results, pen test dates
- Breach history, incident response SLA
- SOC 2 Type II, ISO 27001 certifications
Tech Architecture Review
18 fields
- Architecture pattern, deployment model
- Integration points, API compatibility
- Scalability and performance ratings
- Tech debt and maintainability assessment
Risk, Compliance & Legal
15 fields
- Regulatory exposure assessment
- Data residency, cross-border transfer
- Consent management, DPIA results
- OCC 2023-17 / DORA alignment
Financial Analysis
16 fields
- Total cost of ownership model
- ROI projection, payback period
- Budget authorization chain
- Funding source identification
AI Governance Review
32 fields — largest single form
- EU AI Act risk classification
- Model transparency, explainability
- Bias testing, fairness metrics
- SR 11-7 model risk alignment
Vendor Due Diligence
18 fields + vendor landscape (11 fields)
- Financial stability, operational resilience
- Fourth-party risk, subcontractor disclosure
- Market positioning, viability scoring
- Weighted evaluation matrix
Governance Committee Voting
Structured deliberation replaces ad-hoc email voting. A reusable call activity invoked from any governance phase provides full audit trail of who voted, why, and what conditions were set.
7-Step Deliberation Lifecycle
Brief → Q&A Period → Votes → Conditions → Remediation Loop ↻ → Summary
Deliberation-Aware Voting
- Risk context banner on every ballot
- Q&A summary table with peer questions and answers
- Conditional fields: conditions visible only for "Approve with Conditions," veto justification only for "Veto"
- Structured 3-part rationale capture
Transparent Condition Consolidation
- Per-voter attribution table
- Structured list with category, owner, and deadline per condition
- Explicit mapping rationale from individual to consolidated conditions
Audit-Ready Narrative
- Per-voter rationale table (who voted what and why)
- Color-coded outcome banner, decision timeline
- All variables extractable via Tasklist API
Remediation Loop
- Previous round vote table shows each voter's position
- Addressed-concerns checklist maps concerns to actions
- Evidence attachments for updated documents
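A minimal sketch of how the five voting methods (Unanimous, Super-Majority, Majority, Veto, Single Reviewer, detailed in the platform comparison later) could resolve a ballot. The two-thirds super-majority threshold, the always-blocking veto, and the quorum handling are assumptions here; the shipped rules come from the DMN committee-configuration table:

```python
def resolve_ballot(method: str, votes: list[str]) -> str:
    """Votes: Approve / Conditional / Reject / Abstain / Recuse / Veto.
    Methods: Unanimous, Super-Majority, Majority, Veto, Single Reviewer.
    Thresholds and the veto rule are illustrative assumptions."""
    if "Veto" in votes:                        # veto-capable roles block outright
        return "Rejected"
    counted = [v for v in votes if v not in ("Abstain", "Recuse")]
    approvals = sum(v in ("Approve", "Conditional") for v in counted)
    if method == "Unanimous":
        ok = counted and approvals == len(counted)
    elif method == "Super-Majority":
        ok = approvals >= (2 * len(counted) + 2) // 3   # ceil(2n/3)
    elif method in ("Majority", "Veto"):       # Veto method: majority vote + veto gate above
        ok = approvals > len(counted) / 2
    else:                                      # Single Reviewer: first counted vote decides
        ok = counted[:1] in (["Approve"], ["Conditional"])
    if ok and "Conditional" in counted:
        return "Approved with Conditions"      # feeds condition consolidation
    return "Approved" if ok else "Rejected"

print(resolve_ballot("Super-Majority",
                     ["Approve", "Approve", "Conditional", "Reject", "Abstain"]))
# Approved with Conditions: 3 of 4 counted votes meets ceil(2*4/3) = 3
```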
Contracting
Two pathways converge here. Buy: refine requirements, proof of concept, contract negotiation with AI-powered Contract Inherent Risk Score (CIRS). Build: define requirements, then the full Product Development Life Cycle (PDLC).
Buy Path Forms
- Refine Requirements — final spec from evaluation insights
- Proof of Concept — structured PoC with pass/fail criteria
- Negotiate Contract — OCC 2023-17, DORA Art. 30, GDPR Art. 28 provisions
- Review Contract Risk — 8-dimension CIRS score, 07-Deal Killer Screen routing, regulatory gap analysis
- Finalize Contract — internal approvals, term verification
Build Path Forms
- Define Build Requirements — functional specs, architecture constraints
- Architecture Review — enterprise standards validation
- Development — secure coding, CI/CD pipeline tracking
- Testing & Integration — test coverage, defect resolution, deployment
UAT and Go-Live
The finish line. User acceptance testing validates the solution, final approval confirms governance compliance, software is onboarded, ownership is assigned, and the request is formally closed.
Key Activities
- Pilot / UAT: Structured testing with pass/fail criteria, defect tracking, user satisfaction scoring
- Final Approval: Third line of defense governance sign-off based on complete evidence package
- Condition Verification: For conditional approvals, verify all conditions met before onboarding
- Dual Ownership: Business Owner + Vendor Owner assigned at completion
Forms & Data Collected
- Perform UAT — test scenarios, pass/fail rates, critical defects, user satisfaction (1-10)
- Final Approval — governance sign-off, risk acceptance, audit evidence
- Onboard Software — catalog entry, access provisioning, monitoring setup
- Assign Ownership — Business Owner + Vendor Owner designation
- Close Request — lessons learned, satisfaction survey, archive
Execute NDA(s)
Legal prerequisite gate. Non-disclosure agreements are executed before sensitive vendor information is exchanged. Tracks NDA status, routes to legal for execution, and verifies completion before proceeding.
NDA Processing Flow
Detailed sub-process for NDA lifecycle management. Handles NDA generation, internal review routing, vendor signature collection, and fully-executed document storage. Supports multiple NDA types (mutual, one-way) with configurable review paths.
Vendor Sourcing
Market research and vendor shortlisting sub-process. Evaluates the vendor landscape, scores candidates against requirements, and produces a shortlist for detailed due diligence evaluation.
Per-Vendor Evaluation
Individual vendor assessment flow within the due diligence phase. Each shortlisted vendor undergoes parallel evaluation across technical, security, compliance, and financial dimensions with structured scoring and consolidated recommendation output.
Product Development Life Cycle (PDLC)
Nested sub-process within Risk Assessment and Contracting. Covers the full Build pathway: architecture review, development, testing with retry loops, and integration verification before UAT.
Vendor Pool: The External Partner Journey
Running in parallel with the enterprise process, vendors follow their own structured journey. 10 tasks span intake through deployment support, with message flows synchronizing handoffs at key milestones.
Vendor Intake → Proposal → Tech Demo → Security & Compliance → Contract → Deploy & Close
- Vendor Intake — legal entity, tax ID, ownership, sanctions screening, insurance
- Security Questionnaire — SOC 2, pen testing, encryption, incident response
- Compliance Documentation — regulatory certifications, DPA, sub-processor disclosures
- Contract Execution — MSA, schedules, authorized signatory verification
Message Flow Handoffs
→ Due Diligence Request
Enterprise sends RFP/assessment requirements to vendor
← Vendor Response
Vendor submits proposal and completed questionnaires
→ Contract Draft
Enterprise sends negotiated contract for vendor review
← Signed Contract
Vendor returns executed contract with authorized signature
Vendor Questionnaire: Self-Service Vendor Pre-Screening (SP0)
Before formal intake, requesters complete a lightweight 9-step wizard that identifies deal-killers early and pre-populates downstream forms. This eliminates wasted reviewer capacity on non-starters.
9-Step Pre-Screening Wizard
Deal-Killer Detection (07-Deal Killer Screen)
Real-time alerts flag non-starters before any reviewer time is consumed:
- ● Active regulatory sanctions against the vendor
- ● Unresolvable data residency conflicts
- ● Prohibited AI use case categories (EU AI Act)
- ● Missing mandatory certifications for data tier
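A minimal sketch of the deal-killer screen logic; all field names and the prohibited-category set are hypothetical placeholders for the wizard's answers, and the live check is the 07-Deal Killer Screen DMN table, not this function:

```python
# Illustrative prohibited categories in the spirit of EU AI Act Art. 5 (not an official list)
PROHIBITED_AI_USE_CASES = {"social-scoring", "realtime-biometric-id"}

def deal_killer_screen(req: dict) -> list[str]:
    """Return the list of non-starter reasons; an empty list means proceed to intake.
    Every field name below is a hypothetical placeholder."""
    reasons = []
    if req.get("vendor_sanctioned"):
        reasons.append("Active regulatory sanctions against the vendor")
    if req.get("residency_conflict_unresolvable"):
        reasons.append("Unresolvable data residency conflict")
    if req.get("ai_use_case") in PROHIBITED_AI_USE_CASES:
        reasons.append("Prohibited AI use case category (EU AI Act)")
    if req.get("data_tier") == "Confidential" and not req.get("soc2_type2"):
        reasons.append("Missing mandatory certification for data tier")
    return reasons

print(deal_killer_screen({"ai_use_case": "social-scoring"}))
# ['Prohibited AI use case category (EU AI Act)']
```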
Pre-Screening to Formal Intake Flow: wizard answers pre-populate the formal intake variables, and non-starters surface before any reviewer time is spent.
RACI Matrix by Governance Topic
| Topic | Business | Governance | Finance | Procurement | Contracting | Technical | AI Review | Compliance | Oversight | Automation | Vendor |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Intake | R/A | C | C | - | - | - | - | C | - | - | - |
| Prioritization | C | R/A | C | - | - | C | - | - | - | - | - |
| Funding | - | C | R/A | C | - | - | - | - | I | - | - |
| Sourcing | - | A | - | R | - | C | - | C | I | - | R |
| Cyber | - | C | - | - | - | R/A | - | C | - | - | C |
| EA | - | C | - | - | - | R/A | - | - | - | - | C |
| Compliance | - | C | - | - | - | - | - | R/A | C | - | C |
| AI Governance | - | I | - | - | - | C | R/A | C | - | - | - |
| Privacy | C | C | - | - | - | - | - | R/A | - | - | - |
| Comm. Counsel | - | C | C | - | R/A | - | - | C | I | - | R |
| TPRM | - | R/A | - | C | C | C | - | C | C | C | C |
R = Responsible | A = Accountable | C = Consulted | I = Informed | Finance = new functional area (split from Governance per discovery findings)
Governance Topic Coverage Across Onboarding Phases
| Governance Topic | Request and Triage | Planning and Routing | Evaluation and Due Diligence | Contracting and Build | UAT and Go-Live | Vendor Pool |
|---|---|---|---|---|---|---|
| Intake | ✓ Review Existing, Leverage Existing, Gather Docs, Submit Request, Initial Triage | - | - | - | ✓ Close Request | - |
| Prioritization | - | ✓ Preliminary Analysis, Backlog Prioritization, Pathway Routing | - | - | - | - |
| Funding | ✓ Gather Documentation | ✓ Preliminary Analysis | ✓ Financial Analysis | ✓ Negotiate Contract | - | - |
| Sourcing | - | - | ✓ Assess Vendor Landscape, Vendor DD, Evaluate Response | ✓ Refine Requirements, Perform PoC | - | ✓ Vendor Intake, Proposal Submission, Contract Execution |
| Cyber | - | - | ✓ Security Assessment, Tech Architecture Review | ✓ Tech Risk Evaluation, Perform PoC | ✓ Integration Testing | ✓ Vendor Security Questionnaire |
| EA | - | - | ✓ Tech Architecture Review, Perform PoC | ✓ Tech Risk Evaluation, PDLC Sub-Process | ✓ Onboard Software | ✓ Vendor Demo |
| Compliance | ✓ Initial Triage | - | ✓ Risk Compliance Legal, Vendor DD | ✓ Finalize Contract | ✓ Final Approval, UAT | ✓ Vendor Compliance Review |
| AI Governance | - | ✓ Preliminary Analysis (AI flag) | ✓ AI Governance Review, Risk Compliance Legal | - | - | - |
| Privacy | ✓ Gather Documentation | ✓ Preliminary Analysis | ✓ Risk Compliance Legal | - | - | - |
| Comm. Counsel | - | - | - | ✓ Negotiate Contract, Finalize Contract | - | ✓ Vendor Contract Review, Contract Execution |
| TPRM | ✓ Initial Triage | - | ✓ Risk Compliance Legal, Vendor Landscape, Vendor DD, Evaluate Response | ✓ Tech Risk Evaluation, Negotiate Contract, Finalize Contract | ✓ Final Approval, Onboard Software | ✓ Vendor Intake, Vendor Onboarding, Close Request |
DMN Decision Tables: Risk Tier & SLA Breach Escalation
01-Risk Classification: Risk Tier Classification
UNIQUE • Phase 2 • 5 risk dimensions (1-10) assign one of 4 tiers. Unacceptable terminates immediately.
| Data Sensitivity | Regulatory Exposure | Operational Criticality | AI Complexity | 3rd-Party Dependency | Risk Tier | Eligible? |
|---|---|---|---|---|---|---|
| ≥ 9 | ≥ 9 | - | - | - | Unacceptable | No |
| ≥ 9 | - | ≥ 9 | - | - | Unacceptable | No |
| - | ≥ 9 | ≥ 9 | - | - | Unacceptable | No |
| ≥ 9 | - | - | ≥ 8 | - | Unacceptable | No |
| - | - | - | ≥ 9 | ≥ 9 | Unacceptable | No |
| ≥ 7 | ≥ 7 | - | - | - | High | No |
| ≥ 7 | - | ≥ 7 | - | - | High | No |
| - | ≥ 7 | - | ≥ 6 | - | High | No |
| - | - | ≥ 7 | - | ≥ 7 | High | No |
| [4..7) | [4..7) | - | - | - | Limited | Yes |
| [4..7) | - | [4..7) | - | - | Limited | Yes |
| - | [4..7) | [4..7) | - | - | Limited | Yes |
| - | - | [4..7) | [3..6) | [4..7) | Limited | Yes |
| < 4 | < 4 | < 4 | < 3 | - | Minimal | Yes |
| < 4 | < 4 | < 4 | - | < 4 | Minimal | Yes |
04-SLA Escalation: SLA Breach Escalation
FIRST • Cross-Cutting • 4-level escalation ladder triggered by SLA timer boundary events.
| Breached Phase | Days Beyond SLA | Risk Tier | Escalation Action | Notify | Auto-Action |
|---|---|---|---|---|---|
| Phase3, Phase4 | ≥ 15 | High | Auto-Reject | Board + Sponsor | Terminate process |
| Phase3, Phase4 | ≥ 10 | High | Board-Alert | Advisory Board | Escalation meeting |
| Any | ≥ 10 | Limited | Escalate-Governance | Governance Lead | Priority reassignment |
| Phase1, Phase2 | ≥ 5 | High | Escalate-Governance | Governance Lead | Priority reassignment |
| Any | ≥ 5 | Limited | Notify-Sponsor | Business Sponsor | Status update |
| Phase5 | ≥ 5 | - | Notify-Sponsor | Business Sponsor | Status update |
| Phase1, Phase2 | ≥ 3 | Minimal | Notify-Sponsor | Business Sponsor | Email reminder |
| Phase3 | ≥ 7 | - | Escalate-Governance | Governance Lead | Vendor follow-up |
| Phase4 | ≥ 5 | Minimal | Notify-Sponsor | Business Sponsor | Committee scheduling |
| Any | ≥ 2 | - | Monitor | Process Owner | Dashboard alert |
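Because 04-SLA Escalation uses a FIRST hit policy, rule order carries meaning: the most severe actions sit at the top, so a 15-day High-tier breach hits Auto-Reject before the broad catch-all rules lower down can match. A minimal sketch of that evaluation order, mirroring a few of the rows above (the full ladder has all ten):

```python
# (phases or None for Any, min days beyond SLA, tier or None for Any, action)
ESCALATION_RULES = [
    ({"Phase3", "Phase4"}, 15, "High",    "Auto-Reject"),
    ({"Phase3", "Phase4"}, 10, "High",    "Board-Alert"),
    (None,                 10, "Limited", "Escalate-Governance"),
    (None,                  2, None,      "Monitor"),   # catch-all, last on purpose
]

def escalate(phase: str, days_over: int, tier: str) -> str:
    """FIRST hit policy: scan top-down and return the first matching rule."""
    for phases, min_days, rule_tier, action in ESCALATION_RULES:
        if phases is not None and phase not in phases:
            continue
        if days_over < min_days:
            continue
        if rule_tier is not None and tier != rule_tier:
            continue
        return action
    return "None"

print(escalate("Phase3", 16, "High"))  # Auto-Reject, not Monitor, though both rules match
```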
DMN Decision Tables: Pathway, Governance, Prioritization, Security, Contract Risk
02-Pathway Routing: Pathway Routing
UNIQUE • Phase 1 • Routes to Fast-Track, Standard-Buy, Standard-Build, or Hybrid based on 6 inputs.
| Reuse | Build/Buy | Capacity | Strategic | TtV | Vendor? | Pathway |
|---|---|---|---|---|---|---|
| ≥8 | "Buy" | - | - | ≥7 | true | Fast-Track |
| ≥7 | "Buy" | - | ≥7 | ≥7 | true | Fast-Track |
| - | "Buy" | - | - | - | - | Standard-Buy |
| - | "Build" | ≥6 | - | - | - | Standard-Build |
| - | "Hybrid" | - | - | - | - | Hybrid |
| - | "Build" | <6 | - | - | - | Hybrid |
03-Governance Gate: Governance Routing
UNIQUE • Phase 4 • Routes governance review to Advisory Board, Committee, Fast Path, or Auto-Approve based on risk tier and pathway.
| Risk Tier | Pathway | Review Body | SLA |
|---|---|---|---|
| Unacceptable | - | Rejected | - |
| High | - | Advisory Board | 7 days |
| Limited | Standard-* | Committee | 5 days |
| Limited | Fast-Track | Fast Path | 2 days |
| Minimal | Fast-Track | Auto-Approve | 1 day |
| Minimal | Standard-* | Fast Path | 2 days |
05-Priority Scoring: WSJF Prioritization Scoring
FIRST • Phase 2 • 7 inputs produce a priority tier (P1/P2/P3) and a continuous WSJF score for true queue ranking. Cost of Delay ≥ $100K/week auto-escalates.
| Impact | Urgency | Risk | CoD $/wk | Job Size | WSJF | Tier |
|---|---|---|---|---|---|---|
| - | - | - | ≥100K | - | 99 | P1 |
| - | ≥9 | - | - | - | 95 | P1 |
| ≥8 | - | High | - | - | 90 | P1 |
| ≥8 | ≥7 | - | - | - | 88 | P1 |
| - | ≥5 | - | - | ≤5 | 70 | P2 |
| ≥6 | - | High | - | [1..4] | 65 | P2 |
| - | - | - | - | - | 40 | P3 |
06-Security Routing: Security Assessment Routing
UNIQUE • Phase 3 • Routes security review depth based on risk tier, data classification, and AI component presence.
| Risk Tier | Data Class. | AI Component? | Assessment | SLA |
|---|---|---|---|---|
| High | Confidential | true | Major | 10 days |
| High | Confidential | false | Major | 10 days |
| High | Internal | true | Elevated | 5 days |
| Limited | Confidential | - | Elevated | 5 days |
| Limited | Internal | true | Elevated | 5 days |
| Limited | Internal | false | Baseline | Same-day |
| Minimal | - | false | Baseline | Same-day |
| Minimal | - | true | Elevated | 5 days |
8-dimension Contract Inherent Risk Score (CIRS) drives 4-way routing: AutoApprove (≤3.0) | EnhancedReview (3.1-5.5) | SeniorGovernance (5.6-7.5) | RejectRenegotiate (>7.5 or immutable violations). Inputs: CIRS score, immutable violations, regulatory gaps, risk tier.
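A minimal sketch of the four-way CIRS routing just described. The thresholds come from the text; how regulatory gaps affect routing is an assumption, since the text lists them as an input without stating their effect:

```python
def cirs_route(score: float, immutable_violations: bool,
               regulatory_gaps: int = 0) -> str:
    """Route a contract by its 8-dimension Contract Inherent Risk Score.
    Immutable-clause violations force renegotiation regardless of score."""
    if immutable_violations or score > 7.5:
        return "RejectRenegotiate"
    if score > 5.5:
        return "SeniorGovernance"
    if score > 3.0 or regulatory_gaps > 0:   # assumption: any gap blocks auto-approval
        return "EnhancedReview"
    return "AutoApprove"

print(cirs_route(2.4, False))   # AutoApprove
print(cirs_route(4.2, False))   # EnhancedReview
print(cirs_route(6.0, True))    # RejectRenegotiate (immutable violation)
```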
Onboarding Bottleneck Analysis: Before vs. After
Key delay drivers identified during stakeholder interviews mapped against projected improvements from the proposed governance model.
75 Days → 20 Days
Total E2E cycle reduction through parallel processing and DMN-driven routing
18 → 4 Committees
Consolidated governance committees with tiered review paths
335 Assessments/Year
Current volume processed through the new automated intake and triage pipeline
60+ AI Queue → 0
Backlog eliminated with dedicated AI governance lane and SR 11-7 framework
Measurement Dashboard: Day 120 Targets
Key performance indicators with current baselines and Day 120 targets. These metrics enable data-driven proof of improvement.
| Metric | Baseline (Current) | Day 120 Target |
|---|---|---|
| E2E cycle time (standard) | 6-9 months | 60-90 days |
| E2E cycle time (Enable) | Same as standard | 30 days |
| E2E cycle time (AI fast-track) | Same as standard | 14 days |
| RAE completion | 28-29 days | 14 days |
| DD internal review | 75 days | 30 days |
| Security review (Baseline tier) | 2 weeks | Same-day (automated) |
| Form completion rate | Unknown | 90%+ first-pass |
| Intake rejection (deal-killer) | 0% (no pre-screen) | Measured |
| Contract cycle time | Varies (up to 1.5yr) | 90 days standard |
| Business Council decision | Monthly + email | 48-hour async SLA |
| Requester satisfaction | Unmeasured | NPS baseline established |
| Queue transparency | None | Real-time dashboard |
SLA Governance on Camunda 8
vs. ServiceNow
The customer's governance lifecycle spans 8+ specialized systems. No single platform can replace the ecosystem. SLA on Camunda 8 is the orchestration layer that connects them all — with auditable BPMN/DMN models regulators can inspect directly.
SLA on Camunda 8
Process models, 87 forms, dashboard, Jira integration, Vendor Questionnaire, Vendor Portal — in production on Camunda 8 Cloud.
ServiceNow Equivalent
No BPMN. No DMN. No multi-pool. Decompose into dozens of disconnected Flow Designer workflows. Rebuild 14 DMN tables in proprietary format.
E2E Scenarios Passed
Full gateway coverage — every pathway (fast-track, buy, build, hybrid) exercised against Camunda 8 Cloud.
One Governed Lifecycle
Jira, OneTrust, Ariba, Oracle, AppFox, iManage, Box, ServiceNow — orchestrated by a single auditable BPMN process.
This is an orchestration problem, not a platform problem.
A typical software onboarding request touches Jira, OneTrust, Ariba, Oracle, Confluence/AppFox, iManage, Box, and ServiceNow. No single platform owns this end-to-end. The question is not "which system?" but "how do we coordinate work across 8+ systems with SLA enforcement, regulatory traceability, and decision transparency?" That is what Camunda 8 does.
Customer Technology Landscape
Each team has invested in tooling optimized for their domain. None of these are going to change.
The Virtual Application Layer
SLA is the application that sits above all stovepipe systems — it doesn’t replace any of them.
People continue working in the systems they know. SLA provides what no individual system can: end-to-end visibility, SLA enforcement, and regulatory compliance evidence spanning all 8+ systems.
| Capability | How It Works | Why No Single System Can Do This |
|---|---|---|
| End-to-End Visibility | Single BPMN process orchestrates work across Jira, OneTrust, Ariba, Oracle, iManage, Box, Confluence, ServiceNow | Each system only sees its own slice of the lifecycle |
| SLA Enforcement | 18 phase-level SLA timers with 3 severity tiers (baseline/elevated/major), FEEL-configurable durations, Admin UI for live configuration | SLA timers spanning systems require an orchestrator outside any single system |
| Regulatory Evidence | BPMN model + Camunda Operate = proof that controls are in place and operating | Regulators need the "evidence package" across the full lifecycle |
| Decision Transparency | DMN tables document every routing decision — versioned, auditable, portable | Embedded rules in ServiceNow, OneTrust, or Ariba are platform-locked and opaque |
| Audit Trail | Every system interaction, decision, and approval timestamped in one process history | ServiceNow logs cover ServiceNow; Jira logs cover Jira; neither covers the hand-off |
What’s Built
Tangible deliverables proving the platform isn’t theoretical
Head-to-Head Comparison
Process modeling, cross-system integration, and governance capability
Process Modeling & Execution
BPMN 2.0 and DMN 1.3 vs. proprietary designers and embedded rules.
| Capability | SLA (Camunda 8) | ServiceNow | Edge |
|---|---|---|---|
| Process notation | BPMN 2.0 (ISO standard) | Proprietary Flow Designer / Playbooks | SLA |
| Decision tables | DMN 1.3 with FEEL, hit policies, DRDs | Proprietary Decision Builder (no FEEL, no hit policies) | SLA |
| Multi-lane modeling | 9+1 swim lanes with cross-lane routing | No lanes/pools concept in any designer | SLA |
| Multi-pool (vendor) | Enterprise + Vendor pool with message flows | No equivalent — manual API integration | SLA |
| Parallel gateways | Full AND/OR/XOR with token semantics | Parallel tasks exist but no formal token model | SLA |
| Long-running state | Zeebe maintains state across months | Transient — workarounds for long processes | SLA |
| Process versioning | Multiple versions running, in-flight migration | Changes affect all in-flight work | SLA |
| Portability | BPMN XML runs on any compliant engine | Locked to ServiceNow platform | SLA |
| Low-code citizen dev | Camunda Modeler requires BPMN knowledge | Flow Designer accessible to non-technical users | SN |
Cross-System Integration
The decisive factor — connecting 8+ specialized systems into one governed lifecycle.
| System | SLA (Camunda 8) | ServiceNow | Edge |
|---|---|---|---|
| Jira | Built — bi-directional task sync, in production | IntegrationHub spoke (separate license) | SLA |
| OneTrust | Service task creates assessment, retrieves via API | No native spoke; custom REST | SLA |
| Ariba | Service task triggers NDA/RFP, polls for completion | SAP Ariba spoke limited to basic PO/invoice | SLA |
| Oracle | Bridges Ariba-Oracle gap (currently manual) | Oracle spoke doesn't solve the gap | SLA |
| AppFox | Triggers AppFox approval, waits for outcome | Confluence spoke is read-only; no AppFox | SLA |
| iManage | Creates matter, tracks redlining status | No native integration | SLA |
| Box | Stores contract, updates metadata | Box spoke (basic CRUD) | Neutral |
| ServiceNow | Camunda+ServiceNow connector (GA — shipped November 2025) | Native — it IS ServiceNow | SN |
AI Opportunity Map
Every task categorized: Deterministic (D), AI-Assisted (AI), or Human Judgment (H)
Only 10% of tasks genuinely require human judgment. 90% automation potential (deterministic + AI-assisted) — this represents the target state, not current implementation.
Estimated 40–60% reduction in manual effort across evaluation, contracting, and documentation phases.
SP1 — Refine Request & Triage
SP2 — Planning & Routing
SP3 — Evaluation & Due Diligence (9 Parallel Tracks)
SP4 — Contracting & Build
SP5 — UAT & Go-Live
Contract AI — The Marquee Use Case
Highest-ROI AI opportunity: reduce contract review from 5–15 days to 1–3 days
Contract negotiation involves unstructured documents, domain expertise, high stakes, and significant manual effort. Legal focuses on the 20% requiring judgment; AI handles the other 80%.
Vendor Returns Redlined Contract
Received from iManage. Camunda service task triggers AI analysis pipeline.
AI: Clause Extraction & Classification
Immutable clauses (regulatory: audit rights, data protection, termination for breach). Flexible clauses (commercial: liability caps, indemnification). Standard clauses (force majeure, governing law).
AI: Deviation Analysis Against Playbook
Flag each redlined clause: accept / reject / counter-propose. Risk score per deviation with regulatory cross-reference (OCC 2023-17 §60, DORA Article 30).
AI: Negotiation Intelligence
Historical success rates for similar clause negotiations. Vendor-specific patterns. Market benchmarks vs. industry standard.
Human Review: Legal Team Decides
AI-generated redline summary: "Vendor proposes X change to liability cap — deviates from standard by Y, risk: HIGH, recommendation: reject/counter." Legal accepts, modifies, or overrides each recommendation.
AI: Generate Counter-Proposal Draft
Produces redlined response with legal team's decisions applied. Maintains clause-level audit trail in Box (system of record).
Camunda: Track, Enforce, Escalate
D3/D7/D11 reminder boundary timers on vendor response. SLA breach end event if no response within timeframe. Full traceability.
AI Model Flexibility
Open orchestration vs. walled garden
| Dimension | SLA on Camunda 8 | ServiceNow | Edge |
|---|---|---|---|
| Default AI model | None — customer chooses | Proprietary Now LLM | SLA |
| External model support | Any OpenAI-compatible API + native Anthropic/Bedrock/Azure | Azure OpenAI, Vertex AI, Bedrock + custom model import (Zurich) — still no self-hosted | SLA |
| Self-hosted models | Supported via custom Zeebe worker | Not supported | SLA |
| Cost model | Direct to provider (transparent, market-rate) | Consumption "assists" (30–45% licensing add-on) | SLA |
| AI portability | Change connector config; BPMN unchanged | AI investment locked to platform | SLA |
| MCP support | Published September 2025 | Unconfirmed as of March 2026 | SLA |
| Multi-model orchestration | Different models for different tasks in one process | Single provider per instance | SLA |
When the customer's AI strategy evolves — new models, new providers, new regulatory requirements — which platform makes that evolution a configuration change vs. a re-implementation?
Committee Governance Voting: Platform Comparison
The committee voting subprocess is a core differentiator — DMN-driven configuration, multi-instance parallel voting, quorum enforcement, and tiered SLA escalation vs. flat approval primitives.
| Capability | SLA on Camunda 8 | ServiceNow |
|---|---|---|
| Voting methods | 5 methods (Unanimous, Super-Majority, Majority, Veto, Single Reviewer) — DMN-driven | Binary approve/reject only |
| Committee configuration | DMN table selects method, quorum, roles, timers by risk tier × phase × contract value (8 rules) | Approval Configurator — conditional routing to groups, flat pool only |
| Per-group quorum | Native — one vote from each of N governance bodies via multi-instance + locked roster | Not supported — requires custom Business Rule scripting |
| Weighted voting | Configurable via DMN (veto-capable roles can block) | Not supported — all votes equal weight |
| Parallel deliberation | Multi-instance user tasks — all members vote simultaneously with shared brief | Group approval — anyone or all, no structured deliberation |
| Q&A phase | Optional multi-round Q&A sub-process, timer-bounded, DMN-configured per risk tier | No native equivalent — requires Teams/Slack integration |
| Vote options | 6 options: Approve, Conditional, Reject, Abstain, Recuse, Veto — with structured rationale | 2 options: Approved or Rejected |
| Conditions reconciliation | Automated conflict detection + governance officer mediation when conditional votes conflict | Not applicable — no conditional vote concept |
| Remediation loops | Configurable max iterations (DMN), auto-escalation to leadership after max reached | Manual — resubmit record and restart approval chain |
| SLA escalation timers | 3-tier boundary timers: Reminder → Escalation → Deadline with auto-default ruling | Requires separate SLA records + notification rules + scheduled job |
| Decision routing | DMN 1.3 with FEEL expressions, FIRST/UNIQUE hit policies, chained decisions | Decision Builder — FIRST-match only, no FEEL, no chaining |
| Reusable subprocess | BPMN call activity — same voting process called from 2+ governance checkpoints | Subflow — functional but without event subprocess or boundary timer patterns |
| Audit trail | Per-voter rationale, dissenting opinions, conditions history, remediation feedback | sysapproval_approver records — basic approve/reject with optional comments |
| Standards compliance | BPMN 2.0 / DMN 1.3 — portable, vendor-neutral, auditable XML | Proprietary Flow Designer — locked to ServiceNow platform |
4–8 Weeks Custom Dev to Replicate
ServiceNow would require custom GlideScript, Business Rules, and UI Policies to replicate quorum enforcement, weighted voting, and tiered SLA escalation.
8 DMN Rules vs. ~500 Lines GlideScript
Committee voting configuration lives in a single DMN table — business analysts can modify voting rules without developer involvement.
OCC 2023-17 Audit-Ready
Every vote, rationale, dissenting opinion, and remediation loop is captured as process variables — the regulatory evidence trail OCC 2023-17 and SR 11-7 require.
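As an illustration of that evidence trail, one committee vote captured as a process-variable payload might look like the record below; field names are assumptions that mirror the vote options, rationale, and remediation capabilities in the comparison table.

```python
# Sketch: the shape of one vote as process variables (field names assumed).
vote_record = {
    "committee": "AI Governance Committee",
    "voter_role": "Model Risk Officer",
    "vote": "Conditional",            # one of the 6 structured vote options
    "rationale": "Approve pending SOC 2 Type II evidence",
    "conditions": ["Vendor supplies SOC 2 Type II report before go-live"],
    "dissent": False,                 # dissenting opinions are captured, not discarded
    "remediation_round": 1,           # which remediation loop iteration this belongs to
    "cast_at": "2026-03-02T15:04:00Z",
}
```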
Governance & Regulatory Compliance
BPMN models ARE the audit artifacts
| Requirement | SLA Approach | ServiceNow Approach | Edge |
|---|---|---|---|
| OCC 2023-17 | 8-phase BPMN maps directly to guidance; DMN risk tiering; OneTrust integration | Manual TPRM module config to map guidance | SLA |
| SR 11-7 | AI governance lane + DMN tables as versioned audit artifacts | No native framework mapping; custom config | SLA |
| DORA (EU) | ICT register, resilience testing, incident response as BPMN sub-processes | DORM app (separately purchased, complex setup) | SLA |
| SOX | Phase transition pattern (completion, quality gate, approval, event) | OOTB control testing and attestation | SN |
| GDPR/CCPA | Privacy assessment as parallel eval track + DMN | Privacy Management module (add-on) | Neutral |
| Audit trail | Full token-level process history across all 8+ systems | Audit logs within ServiceNow only | SLA |
| Regulatory annotations | Embedded in BPMN (text annotations + camunda:properties) | Manual tagging on records | SLA |
| Compliance evidence graph | Neo4j knowledge graph (7 node types, 8 relationships) + PostgreSQL evidence schema (5 tables, 2 views) — cross-system compliance traceability | GRC evidence within ServiceNow only | SLA |
| AI-powered GRC | AI service tasks for risk assessment, document analysis — model-agnostic | Now Assist for IRM: AI issue resolution, smart assessment templates (Zurich) — within-ServiceNow only | SLA |
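To sketch what cross-system compliance traceability looks like in practice, the query below walks one vendor's decision, control, and regulation chain in the evidence graph, assuming the standard Neo4j Python driver. Node labels, relationship types, and properties are illustrative assumptions, not the actual 7-node-type ontology.

```python
# Sketch: evidence-chain traversal in the compliance knowledge graph.
# Labels, relationship types, and properties are assumptions for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

EVIDENCE_CHAIN = """
MATCH (v:Vendor {name: $vendor})<-[:EVALUATES]-(d:Decision)
      -[:SATISFIES]->(c:Control)-[:MAPS_TO]->(r:Regulation)
RETURN v.name AS vendor, d.outcome AS decision,
       c.id AS control, r.citation AS regulation
ORDER BY r.citation
"""

def evidence_chain(vendor: str) -> list[dict]:
    """Return the decision -> control -> regulation trail for one vendor."""
    with driver.session() as session:
        return [dict(record) for record in session.run(EVIDENCE_CHAIN, vendor=vendor)]
```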
Effort to Production
What’s done, what remains, and what ServiceNow would need to replicate
SLA on Camunda 8 (Current State)
Core process, forms, dashboard, and Jira integration already in production.
What ServiceNow Would Need to Replicate
The scope of work required to build equivalent governance orchestration in ServiceNow.
| Workstream | Challenge |
|---|---|
| GRC + TPRM licensing | Contract negotiation, module selection — TPRM module is redundant since the customer already uses OneTrust |
| Custom workflow | Recreating 6-phase process without BPMN; no swim lanes; no token semantics; custom state machine in Flow Designer |
| Decision logic | 14 DMN tables into proprietary Decision Builder — loss of FEEL expressions, hit policies, and DRDs |
| 87 forms + dashboard | Rebuild in ServiceNow UI Builder + Performance Analytics |
| 8+ system integrations | OneTrust, Jira, Ariba, Oracle, AppFox, iManage, Box — most require custom IntegrationHub spokes or REST |
| Regulatory mapping | Manual control-to-regulation mapping (no BPMN annotations, no embedded compliance properties) |
| Cross-system SLA enforcement | No equivalent to boundary timer events spanning multiple external systems |
| UAT & validation | Financial services testing requirements across all integrated systems |
Honest Assessment: Where Each Platform Wins
Acknowledging ServiceNow strengths while highlighting SLA’s decisive advantages
Where ServiceNow Wins
Mature in areas adjacent to — but not core to — the orchestration problem.
Performance Analytics
More mature reporting and dashboards than custom-built alternatives
Mobile App
Native mobile experience vs. responsive web
ITSM Integration
For incidents, changes, and CIs post-deployment — ServiceNow IS the system of record
Organizational Familiarity
"We already use ServiceNow" reduces adoption friction for IT Operations teams
Process Mining & Task Mining
Zurich release adds descriptive visibility into work patterns — descriptive (what happened), as opposed to SLA’s prescriptive orchestration (what should happen)
Where SLA Wins (15 Decisive Advantages)
The fundamental question: who orchestrates the governance lifecycle spanning all 8+ systems?
Cross-System Orchestration
Single governed process across Jira, OneTrust, Ariba, Oracle, Confluence, iManage, Box, ServiceNow
Regulatory Auditability
BPMN model IS the audit artifact — regulators inspect the process directly, across all systems
DMN Decision Transparency
21 versioned, portable, FEEL-based decision tables vs. proprietary rules
Ariba-Oracle Automation
Bridges the current manual hand-off between procurement and finance — net-new value
Process Complexity
9 parallel eval tracks, multi-pool vendor flows, event sub-processes — impossible in ServiceNow
Already in Production
Process models, forms, dashboard, Jira sync, E2E tests, SLA timers — running on Camunda 8 Cloud today
Incremental Investment
Camunda 8 platform is already running; remaining work is integration bridges, not platform buildout
No Vendor Lock-in
BPMN/DMN are ISO standards — process models survive platform changes
Long-Running State
Zeebe handles 30–180 day onboarding lifecycles natively
Deal-Killer Pre-Screening
DMN-driven early rejection has no ServiceNow equivalent
Single Pane of Glass
One dashboard showing work status across Jira, OneTrust, Ariba, iManage, and Oracle
AI Model Freedom
Any LLM provider is just another service task — no platform markup or walled garden
AI + Deterministic in One Process
Single BPMN process invokes DMN tables and LLMs in consecutive steps, with full traceability
Contract AI as Orchestration
Highest-ROI AI use case requires orchestrating iManage + LLM + legal review + Box
Self-Service Pre-Screening
Vendor Questionnaire 9-step wizard with deal-killer alerts, multi-vendor parallel evaluation, and a vendor portal with a secure response interface
Strategic Recommendation
Don’t position this as SLA vs. ServiceNow — or SLA vs. any single system. Position Software Lifecycle Automation as the orchestration layer that connects the customer’s existing investments — and as the AI-ready foundation that enables intelligent automation without platform lock-in.
The customer has made deliberate, funded decisions about their technology landscape. None of these are changing. The question is: who orchestrates the governance lifecycle that spans all of them?
Knowledge-Driven Governance: The Flywheel
Every governance decision is captured, compared to prior decisions, and used to improve future decisions. This feedback loop does not exist in any GRC platform today.
Knowledge drives decisions
Cycle time reports, vendor concentration, and SLA trends inform DMN routing rules and committee review scope
Decisions generate learning
Every DMN override is captured — automated output vs. human output, with rationale. Override patterns surface rule improvements
Learning improves future decisions
DMN override rate decreases as rules are tuned. Target: 50% reduction within 200 instances
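To show how that target would be measured, here is a sketch assuming the dmn_overrides capture table from the analytics section that follows: override rate per rolling 50-instance window, the curve the 50%-within-200-instances target is read from. Column names are assumptions.

```python
# Sketch: DMN override rate in 50-instance windows (column names assumed).
import psycopg2

TREND_SQL = """
SELECT window_50,
       AVG(overridden::int) AS override_rate
FROM (
    SELECT (row_number() OVER (ORDER BY decided_at) - 1) / 50 AS window_50,
           (human_output IS DISTINCT FROM automated_output)   AS overridden
    FROM dmn_overrides
) windows
GROUP BY window_50
ORDER BY window_50;
"""

def override_rate_trend(dsn: str) -> list[tuple]:
    """Override rate per 50 decisions; a falling curve means the rules are learning."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(TREND_SQL)
        return cur.fetchall()
```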
Analytics and Decision Intelligence
Six analytics APIs deliver immediate demo value with zero ML, zero embeddings — structured queries on existing governance execution data.
| API | Question it Answers | Data Source | Compliance Value |
|---|---|---|---|
| Cycle Time Report | Why did this onboarding take 6 months? | PostgreSQL task_executions | OCC 2023-17 timeliness evidence |
| Concentration Risk | How dependent are we on a single vendor? | Neo4j Vendor graph | OCC 2023-17 concentration limits |
| SLA Breach Trends | Which lanes are chronically bottlenecked? | PostgreSQL sla_events | DORA operational resilience |
| DMN Override Capture | Where do humans disagree with automation? | PostgreSQL dmn_overrides | SR 11-7 model validation |
| Audit Trail Export | Can you show me the full evidence chain? | Template-driven HTML | SOX / OCC examination-ready |
| Board Health Report | What is our composite governance score? | Aggregate across all tables | Board risk committee reporting |
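As a sketch of the Cycle Time Report's core query (task_executions is the table named in the row above; its columns here are assumptions), the per-lane wait-time breakdown is what turns "why did this take 6 months?" into a ranked list of bottlenecks:

```python
# Sketch: per-lane cycle-time breakdown for one onboarding instance.
# task_executions comes from the table above; column names are assumed.
import psycopg2

CYCLE_TIME_SQL = """
SELECT lane,
       COUNT(*)                       AS tasks,
       SUM(completed_at - started_at) AS busy_time,   -- time spent working
       SUM(started_at - created_at)   AS wait_time    -- time spent queued
FROM task_executions
WHERE process_instance_id = %s
GROUP BY lane
ORDER BY wait_time DESC;  -- chronically bottlenecked lanes surface first
"""

def cycle_time_report(dsn: str, instance_id: str) -> list[tuple]:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(CYCLE_TIME_SQL, (instance_id,))
        return cur.fetchall()
```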
ServiceNow GRC Cannot Do This
ServiceNow captures approvals but cannot correlate decisions across cycles, track DMN override drift, or generate comparative governance analytics.
Audit Trail as Byproduct
The audit trail is not assembled manually — it is a deterministic projection of process execution data. Template-driven, same output every time.
Claude as Copilot (MCP)
Claude with MCP access to PostgreSQL and Neo4j — ask any question about governance history. Retrieval-only: it cites stored evidence or declines; it never generates advisory content.
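A minimal sketch of that retrieval-only contract, assuming the official Python MCP SDK's FastMCP server and the sla_events table from the analytics slide: each tool runs a fixed, parameterized query and returns stored rows verbatim, so the copilot can cite evidence or decline but has no path for generating advice.

```python
# Sketch: a retrieval-only MCP tool over governance history.
# Assumes the official `mcp` Python SDK (FastMCP); DSN and columns are assumed.
import psycopg2
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("governance-history")
DSN = "dbname=governance"  # assumed connection string

@mcp.tool()
def sla_breaches(lane: str) -> list[dict]:
    """Return recorded SLA breach events for one lane. Retrieval only."""
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "SELECT breached_at, task_name, overdue_days "
            "FROM sla_events WHERE lane = %s ORDER BY breached_at DESC",
            (lane,),
        )
        cols = [c.name for c in cur.description]
        return [dict(zip(cols, row)) for row in cur.fetchall()]

if __name__ == "__main__":
    mcp.run()  # Claude connects over stdio and can only call the declared tools
```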
Interactive Governance Topic Explorer
Topic Journeys: Intake, Prioritization, Funding, Sourcing
Intake
- Review Existing
- Leverage Existing
- Gather Docs
- Submit Request
- Initial Triage
- Close Request
Prioritization
- Preliminary Analysis
- Backlog Prioritization
- Pathway Routing
Funding
- Gather Docs
- Prelim Analysis
- Financial Analysis
- Negotiate Contract
Sourcing
- Vendor Landscape
- Vendor DD
- Evaluate Response
- Refine Reqs
- Perform PoC
Topic Journeys: Cyber, EA, Compliance, AI Gov, Privacy
Cyber
- Security Assessment
- Tech Arch Review
- Tech Risk Eval
- Perform PoC
- Integration Test
Enterprise Architecture
- Tech Arch Review
- Perform PoC
- Tech Risk Eval
- PDLC Sub-Process
- Onboard Software
Compliance
- Initial Triage
- Risk Compliance Legal
- Vendor DD
- Finalize Contract
- Final Approval
- UAT
AI Governance
- Prelim Analysis (AI flag)
- AI Governance Review
- Risk Compliance Legal
Privacy
- Gather Docs
- Prelim Analysis
- Risk Compliance Legal
Topic Journeys: Commercial Counsel and TPRM
Commercial Counsel
- Negotiate Contract
- Finalize Contract
- Contract Review
- Execution
TPRM (Third-Party Risk Management)
- Initial Triage
- Risk Compliance Legal
- Vendor Landscape
- Vendor DD
- Evaluate Response
- Tech Risk Eval
- Negotiate
- Finalize
- Final Approval
- Onboard Software
- Intake
- Onboarding
- Close
Knowledge Base Architecture
A dual-database knowledge platform that captures every governance decision, learns from every negotiation, and makes every future decision faster and better.
SLA Governance Ontology
Neo4j-powered knowledge graph connecting processes, contracts, regulations, and decisions. Drag nodes to explore relationships.
Persona-Based Intake Wizard (SP0)
Intelligent pre-process that identifies the requester, adapts questions to their context, classifies AI involvement, and sets governance journey expectations. Replaces the flat SP1 intake form.
8 Requester Archetypes
| Persona | Typical Pathway | Key Signal |
|---|---|---|
| The Visionary | Unsure | Idea only, no vendor |
| The Informed Buyer | Buy | Shortlisted vendors |
| The Mandated Upgrader | Fast-Track | Regulatory deadline |
| The Builder | Build | Technical specs, team |
| The Renewal/Replacer | Buy | Contract expiring |
| The Innovation Scout | Hybrid | AI/ML use case, PoC |
| The Executive Sponsor | Any (delegates) | Strategic mandate |
| The Compliance Responder | Buy | Audit finding |
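For illustration, the archetype mapping above can be sketched as a first-match classifier over intake signals; the signal field names are assumptions, and the real wizard adapts its questions step by step rather than reading a flat dict.

```python
# Sketch: first-match persona classification from intake signals.
# Signal names are illustrative assumptions mirroring the table above.
def classify_persona(signals: dict) -> str:
    rules = [
        ("The Mandated Upgrader",    signals.get("regulatory_deadline")),
        ("The Compliance Responder", signals.get("audit_finding")),
        ("The Renewal/Replacer",     signals.get("contract_expiring")),
        ("The Builder",              signals.get("has_tech_specs") and signals.get("has_team")),
        ("The Innovation Scout",     signals.get("ai_ml_use_case")),
        ("The Informed Buyer",       bool(signals.get("shortlisted_vendors"))),
        ("The Executive Sponsor",    signals.get("strategic_mandate")),
    ]
    for persona, matched in rules:
        if matched:
            return persona
    return "The Visionary"  # idea only, no vendor: the default archetype
```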
7-Step Adaptive Flow
AI Classification Taxonomy (4 Dimensions)
Continuous Learning & Feedback System
Closed-loop feedback at every user touchpoint, routed through a BPMN process with AI sentiment analysis and DMN priority routing. The system learns from every interaction.
Feedback Collection Points
| Touchpoint | Type | Mechanism |
|---|---|---|
| Wizard Step | Micro-rating | Thumbs up/down per step |
| Solution Suggestion | Decision | Accept/reject with reason |
| Journey Explainer | Expectation | Timeline reasonable? Y/N |
| Task Completion | Experience | 5-star + text |
| Stage Gate | Friction | Was ask reasonable? |
| Process End | NPS | 0-10 + open text |
| Any Page | General | Floating action button on all 8 pages |
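To make the DMN priority routing concrete, here is a sketch under assumed thresholds: the sentiment score produced by the AI analysis step and the touchpoint type map to a triage priority for the feedback BPMN process. Priority names and cutoffs are illustrative, not the production table.

```python
# Sketch: feedback triage routing (thresholds and priority names assumed).
from typing import Optional

def route_feedback(touchpoint: str, sentiment: float, nps: Optional[int] = None) -> str:
    """sentiment in [-1, 1]; returns the triage priority for the BPMN process."""
    if nps is not None and nps <= 6:  # NPS detractor always gets fast review
        return "P1-review-within-24h"
    if sentiment < -0.5 and touchpoint in ("Stage Gate", "Task Completion"):
        return "P1-review-within-24h"  # strong friction on a core journey step
    if sentiment < 0:
        return "P2-weekly-triage"
    return "P3-aggregate-only"  # positive signal feeds the dashboards
```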