
Software Onboarding
Process Transformation

Discovery Findings, Recommendations, and Implementation Roadmap

Based on 14 stakeholder sessions and 35+ interviews across Architecture, Product, Security, Risk Management, Vendor Management, Finance, Legal, and Compliance teams. Covering 11 governance domains with actionable 30/60/90/120-day implementation plans.

14 Stakeholder Sessions | 35+ Interviews | 11 Governance Domains | 24 Gap Findings | 120-Day Roadmap

Agenda

Executive Summary

Current State Understanding

The software onboarding process spans 6 to 9 months end-to-end, driven by 18 sequential committees, 5+ disconnected intake channels, and critical resource bottlenecks in Security and Legal. The START initiative (9 months old) created centralized awareness but did not integrate underlying team processes. Competitors with less mature processes achieve 60-90 day cycles.

6-9 mo
Current E2E Cycle
18
Committees / Reviews
335
Assessments / Year
28-29 d
RAE (2x 14-Day Target)
75 days
DD Internal Review
60-90 d
Competitor Benchmark

Biggest Challenges

Sequential Reviews

Requesters present to ARB, TBC, AI Governance, and DART sequentially. Each committee 5+ weeks apart with overlapping scope.

Requester Burden

DART formation falls entirely on the requester, who must independently contact 5-6 teams, submit separate intake forms, and manage scheduling.

Resource Crisis

2 people negotiate 30+ contracts/month. Security is the primary SLA bottleneck. Architecture headcount was recently reduced. "Half a person" owns the START process.

Highest-ROI Investment Areas

Parallel Evaluation

Replace 18 sequential committees with 5 parallel evaluation streams. Architecture Lead validated this model.

Unified Intake + Deal-Killer Gate

Consolidate 5+ intake channels. Block non-starters at day 1 before consuming reviewer capacity.

Contract Automation

Automate contract review for the 2-person team handling 30+/month. Identified as "Dumpster Fire #1" by TPRM Lead.

End-to-End Workflow: Current Pain Points

1. Intake: 5+ channels
2. Prioritization: No formula
3. Funding: Locked forms
4. Sourcing/DD: 75-day review
5. Cyber: #1 bottleneck
6. EA Review: 2-week ARB SLA
7. AI Gov: 60+ queue
8. Legal: 2 people
9. Contract: Up to 1.5 yr

Critical Bottlenecks (by Impact)

Bottleneck | Impact | Source
Contract Negotiation | Critical | 2 people / 30+ contracts monthly
Security Review Capacity | Critical | "Biggest bottleneck... reason our SLA takes 2 weeks"
Sequential Committees (18) | Critical | Same presentation repeated 3-4 times
DART Formation | Critical | Requester manages 5-6 teams independently
AI Governance Queue | High | 60+ items, 3 separate AI committees
Business Council Quorum | High | 2-3 of 8-10 members; now email voting

What the Organization Does Well

Architecture Governance

Dedicated Governance Facilitator pre-screens artifacts, manages follow-ups, and runs ARB/SDRB. The most disciplined team interviewed.

TPRM Due Diligence

Reduced DD from 144 days to 75. Output doubled to 335 assessments/year. Vendor questionnaire completion 12 days ahead of target.

Acquisition 2.0

Brings all teams together at the start. Enables collective go/no-go at first tollgate. Acknowledged as time-consuming but necessary.

Staffing Gaps

Function | Staff | Workload | Gap
Legal / Contracts | 2 | 30+/mo | Critical
Security Arch (AI) | ~1 | All AI reviews | Critical
Risk / DD | 8 | 335/yr | At capacity
Architecture | 2-3 | ARB + SDRB | Reduced
Vendor Mgmt | 6 (2 at 50%) | Full facilitation | Insufficient

"If we want this to really click... I can't have the architect review group being a critical portion with only two people."

Risk Management Lead

"I need three of me right now."

Security Architect, on AI review capacity

System Landscape and Integration Gaps

The governance lifecycle spans 8+ specialized systems. Each team owns tooling optimized for their domain — none of these are changing.

System | Owner | Function | Integration Status
Jira | Product / EA / Cyber | Technical SME task management, intake coordination | Bi-directional sync (built)
OneTrust | Risk / Compliance | Vendor risk assessments, TPRM questionnaires, monitoring | Standalone, manual PDF exports
Ariba | Sourcing / Procurement | NDAs, RFPs, vendor contracts, supplier management | API to Oracle only
Oracle | Finance | Financial analysis, budget approval, vendor payments | Manual hand-off from Ariba
AppFox / Confluence | Enterprise Architecture | Architecture review approvals, technical content | Plugin-based, no workflow integration
iManage | Legal | Contract drafting, redlining, version management | No downstream integration
Box | Legal | Executed contract storage (system of record) | Manual upload
ServiceNow | IT Operations | ITSM: incidents, changes, configuration items | No upstream integration
SLA (Camunda 8) | Governance | E2E process orchestration across all systems | Target platform (Jira live)

Current: Manual Handoffs

PDF exports between OneTrust and ServiceNow. Manual Ariba-to-Oracle financial hand-off. No single audit trail spanning the full lifecycle across all 8+ systems.

Target: SLA Orchestration Layer

Camunda 8 coordinates across all systems via API. Single process instance tracks a request from intake through assessment, contracting, and go-live — with regulatory compliance evidence assembled automatically.

Concierge / Quarterback Model

Architecture's Governance Facilitator role provides the blueprint for end-to-end process orchestration across all domains.

What Works Today (Architecture)

Governance Facilitator

  • Pre-screens all design artifacts
  • Removes incomplete items from agenda
  • Captures action items, manages follow-ups
  • Runs ARB and SDRB
  • JIRA integration for tracking

The most disciplined governance function identified across all 14 sessions.

Proposed E2E Extension

Process Quarterback

  • Single point of contact for requesters (eliminates 5-6 team self-navigation)
  • Automated DART formation (replaces requester burden)
  • Quality gates at each phase boundary
  • Status visibility and proactive notifications
  • Cross-functional escalation authority

Eliminates the requester burden: "It's completely on the onus of the requester."

"Exactly like what we're talking about for the quarterback from a broader end-to-end."Consulting Team

Simultaneous Engagement Model

Replace sequential DART formation with parallel engagement of all review streams. Architecture Lead's explicit recommendation.

Current State (Sequential)

TBC Approval
Requester contacts Architecture (wait)
Requester contacts Security (wait)
Requester contacts Compliance (wait)
Weeks to months. Dependent on requester initiative.

Target State (Simultaneous)

TBC Approval
Concierge triggers parallel streams:
Architecture
Security
Compliance
AI Gov
Bounded by slowest stream SLA. No requester burden.
"I disagree with that because that makes us a bottleneck... there should be simultaneous engagement of all the major players that have a vote."Architecture Lead

3 Request Types and Distributed Pod Model

Request Type Routing

The current process treats all requests identically. v3 routes them through distinct paths based on request type.

Type | Description | Process Path | Frequency
Defined Need | Business owner knows requirements, has vendor selected | Standard 6-phase | Most common
Forced Update | Existing vendor, product changes (on-prem to SaaS, EOL, new AI) | Re-evaluation path (skip intake, start at SP3) | Growing
Speculative / Exploratory | Advisory support, no sponsorship, generating interest | Idea funnel (pre-SP1), not standard process | Frequent, clogs pipeline

Distributed Pod Model

Domain-specific pods controlling their own prioritization, meeting cadence, and workflow speed. Central team provides consistency.

Pod | Controls | Central Team Provides
Cybersecurity | Prioritization, meeting frequency, review speed | Consistent SLA framework
Architecture | Technical review cadence, domain assignment | Artifact standards
Legal / Contracts | Contract template usage, negotiation approach | Risk appetite alignment
AI Governance | AI risk posture, review depth | Regulatory compliance
TPRM | Assessment methodology, vendor scoring | Cross-pod visibility

Implementation Roadmap: 30 / 60 / 90 / 120 Days

The first 30 days focus on process consolidation and quick wins. Subsequent phases build automation, governance refinement, and organizational change.

Days 1-30: Consolidate

  • Unified intake form replacing 5+ channels
  • Deal-killer pre-screen gate and AI no-go list
  • Completeness quality gate at submission
  • 3 request types: Defined Need / Forced Update / Speculative
  • 3-pathway routing: Buy / Build / Enable
  • Quarterback role definition (Architecture Facilitator model)
  • Simultaneous engagement rules defined
  • NDA timing decision (Security + Legal alignment)
  • Prioritization scoring with capacity impact (05-Priority Scoring)

Days 31-60: Automate

  • Parallel evaluation replacing sequential committees
  • Automated DART team formation
  • Progressive forms (ask only what's needed per stage)
  • Contract review automation pilot
  • Security tiered assessment (baseline / elevated / major)
  • AI governance consolidation (3 committees to 1 stream)
  • Finance rework loop (avoid full restart)
  • Workload visibility dashboard (pilot)

Days 61-90: Optimize

  • AI fast-track pathway (2-week target vs 6-9 months)
  • Enable pathway live (Vendor Affinity, skip funding validation)
  • Time-bound conditional approvals
  • Mandatory ownership assignment at onboarding
  • NDA-first gate enforcement
  • Automated security baseline checks
  • Shift-left: self-service Vendor Questionnaire tools
  • SLA enforcement with escalation ladder

Days 91-120: Scale

  • Pre-onboarding idea funnel (feedback platform integration)
  • Full workload dashboard across all teams
  • Exception routing for rapid risk assessments
  • Post-onboarding utilization tracking
  • Distributed pod model pilot
  • Annual ownership validation process
  • Process mining and continuous improvement
  • Executive KPI reporting
60-70%
Target Cycle Time Reduction
18 to 5
Sequential to Parallel Streams
3 Pathways
Buy / Build / Enable Routing
Day 1
Non-Starter Identification

Intake

Current State

5+ disconnected entry points
ServiceNow START, AI Use Case form, AI Governance form, Rapid Risk Assessment (Power Apps), email/chat. No unified routing.
Requester confusion
First-time requesters struggle with complexity. "Nothing directly points folks to intake forms... discovered ad hoc" (Security Architect)
Forms require multi-disciplinary expertise
The RAE form has 80 questions. Business partners frequently cannot complete it accurately because questions are asked from the writer's perspective, not the user's.
Process bypasses are common
Technology teams especially prone to bypassing standard processes. Business partners arrive with pre-selected vendors, skipping sourcing.

RACI: R Business (submits request) | A Business (owns request) | C Governance, Compliance

Recommendations

Unified Intake Gateway

Single entry point absorbing all channels. Dynamic routing based on request type: Standard, AI, E...

Impact: Eliminates 4 redundant channels, reduces requester confusion

Deal-Killer Pre-Screen

DMN-driven no-go check at submission. Inputs: vendor name, AI model, data residency. Blocked requ...

Impact: Saves 100% of downstream effort on non-starters

Completeness Quality Gate

AI-assisted pre-screening validates minimum viable fields before routing to SME review teams. Inc...

Impact: Eliminates rework from late-surfacing requirements

Request Classification

Automated classification: Defined Need, Forced Update, or Speculative. Each type routes to the ap...

Impact: Right-sizes governance effort to request complexity
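
A minimal sketch of how this request-type routing could look in code. The three routes come from the request-type table earlier in the deck; the classification heuristics (simple checks on intake fields) are illustrative assumptions, not the documented DMN logic.

```python
# Illustrative request-type classifier and router; the intake field names are assumptions.
ROUTES = {
    "Defined Need":  "Standard 6-phase process (SP1 intake onward)",
    "Forced Update": "Re-evaluation path (skip intake, start at SP3)",
    "Speculative":   "Idea funnel (pre-SP1, outside the standard process)",
}

def classify(request: dict) -> str:
    if request.get("existing_vendor") and request.get("product_change"):
        return "Forced Update"      # e.g. on-prem to SaaS, EOL, new AI feature
    if request.get("sponsor") and request.get("requirements_defined"):
        return "Defined Need"
    return "Speculative"            # advisory / exploratory, no sponsorship

req = {"existing_vendor": True, "product_change": "on-prem to SaaS"}
rtype = classify(req)
print(f"{rtype} -> {ROUTES[rtype]}")
```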

Prioritization

Current State

No formal prioritization formula
Teams "horse trade" internally. Each requestor views their request as most important. No force-ranking mechanism across the enterprise.
"Whoever screams loudest" gets priority
EVP support pushes other reviews down. No SLA enforcement possible without fundamental process fixes.
Business Council quorum failures
Monthly meetings draw 2-3 of 8-10 members. Evolved to email voting with manual facilitation by an executive.
All requests follow same heavyweight process
No auto-approval for low-risk items. No de-prioritization guidelines for exception cases.

RACI: R A Governance | C Business, Finance, Technical Assessment

Recommendations

WSJF Scoring Formula

Weighted Shortest Job First: WSJF = Cost of Delay / Job Size. DMN-driven with 7 inputs (business impact, alignment, urgency, risk tier, capacity, cost of delay, job size). Produces continuous score for true queue ranking plus P1/P2/P3 tiers. Small urgent requests surface above large low-priority ones.

Impact: Replaces subjective "horse trading" with economically-optimized, auditable queue ranking

Tiered Governance Fast-Track

4-tier classification at intake: Express (renewals, low-risk, 2-3 days), Standard (known category, 2-3 weeks), Enhanced (new vendor/high-risk, 4-8 weeks), Board-Level (AI/ML, regulatory, 8-12 weeks). DMN-11 evaluates fast-track eligibility automatically.

Impact: 40-60% of requests clear in days, not months — committees focus on complex cases

Cost of Delay Quantification

Capture estimated $/week financial burn rate per request at intake. Regulatory deadline cliff detection auto-escalates time-critical items. Backlog dashboard shows accumulating cost in real-time.

Impact: Makes waiting costs visible — $100K+/week requests auto-escalate to P1

Dynamic Re-Prioritization

Weekly automated queue re-evaluation via timer events. Triggered by capacity changes, new requests, deadline proximity, or strategy shifts. Quarterback reviews and confirms priority changes with delta indicators.

Impact: Priority stays current — no more stale backlog rankings from months-old scoring

AI-Augmented Triage (Future)

Auto-classification at intake using historical approval patterns. Similar request matching suggests fast-track eligibility. Predictive cycle time estimation sets expectations at submission. Full explainability for audit (EU AI Act, SR 11-7).

Impact: 25-40% reduction in intake-to-routing time with auditable AI decisions

WSJF Prioritization Deep-Dive

Weighted Shortest Job First
WSJF = (Impact + Urgency + Risk) × 10 / Job Size
Cost of Delay ≥ $100K/week → auto-escalates to P1 regardless of other factors
Why WSJF? SAFe-proven method that optimizes economic throughput, not just importance. Small urgent requests surface above large low-priority ones — maximizing value delivered per unit of capacity.
DMN-driven: 05-Priority Scoring evaluates 7 inputs → continuous WSJF score + P1/P2/P3 tier. Fully auditable, no subjective horse-trading.
Before: Static Priority Tiers
5 inputs, 7 rules, 3 static tiers (P1/P2/P3). No queue ranking within tiers — two P2 requests are indistinguishable. Whoever escalates loudest wins.
After: WSJF Queue Ranking
7 inputs, 10 rules, continuous WSJF score plus P1/P2/P3 tiers. Economically-optimized queue order. Cost of Delay dimension captures true business urgency.
Worked Example: Two Competing Requests
Request | Impact | Urgency | Risk | Job Size | WSJF | Tier
A: Urgent patch | 6 | 9 | 3 | 3 | 60 | P1
B: Large platform | 8 | 5 | 4 | 13 | 13 | P2
Result: Small urgent patch surfaces 4.6× higher than the large platform build — WSJF optimizes throughput, not just importance.
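
A minimal sketch of the WSJF calculation and the Cost-of-Delay auto-escalation described above, reproducing the worked example. Field names are illustrative; in the process itself the score and the P1/P2/P3 tier come from the 05-Priority Scoring DMN table.

```python
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    impact: int        # business impact, 1-10
    urgency: int       # time criticality, 1-10
    risk: int          # risk reduction / exposure, 1-10
    job_size: int      # relative effort (e.g. 1-13)
    cod_per_week: int  # estimated cost of delay, $/week

def wsjf(r: Request) -> float:
    # WSJF = (Impact + Urgency + Risk) x 10 / Job Size, per the formula above
    return (r.impact + r.urgency + r.risk) * 10 / r.job_size

def auto_escalate_p1(r: Request) -> bool:
    # Cost of Delay >= $100K/week escalates to P1 regardless of other factors
    return r.cod_per_week >= 100_000

queue = [
    Request("A: Urgent patch",   impact=6, urgency=9, risk=3, job_size=3,  cod_per_week=20_000),
    Request("B: Large platform", impact=8, urgency=5, risk=4, job_size=13, cod_per_week=20_000),
]
for r in sorted(queue, key=wsjf, reverse=True):
    flag = " (auto-P1: Cost of Delay)" if auto_escalate_p1(r) else ""
    print(f"{r.name}: WSJF = {wsjf(r):.1f}{flag}")
# A scores 60.0 and B scores ~13.1, so the small urgent patch ranks first;
# the P1/P2/P3 tier assignment comes from the 05-Priority Scoring DMN table.
```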

Funding / Finance

Current State

Formal denial restarts entire process
Cannot reroute coding matrix issues to FP&A. Minor cosmetic corrections (dates, alignment) require full denial and restart.
Financial business case form outdated
The form still shows 2024 in 2026. It is locked and the password is unknown (the owner left the organization), so it must be downloaded, completed offline, and uploaded to a shared folder.
Enable pathway forced through funding validation
Vendor Affinity products require no organizational investment, but the process still requires funding justification. Creates unnecessary friction.
FP&A process cumbersome
Hard to navigate. Business case justification required multiple times across different forms.

RACI: R A Finance | C Governance, Procurement | I Oversight

Recommendations

Finance Rework Loop

Add correction pathway for coding matrix issues. Minor fixes route directly to FP&A without full ...

Impact: Eliminates weeks of rework for cosmetic corrections

Enable Pathway Bypass

Vendor Affinity requests skip funding validation entirely. DMN routing detects "no org investment...

Impact: Removes unnecessary friction for zero-cost tools

Modernized Financial Form

Replace locked, outdated form with dynamic digital version. Pre-populate from intake data. Condit...

Impact: Single source of truth for financial data

Consolidated Business Case

One business case captured at intake, enriched progressively. Eliminates repeated justification a...

Impact: Reduces form burden by 60%+ for requesters

Sourcing

Current State

RAE: 80-question internal questionnaire
Target: 14 days. Actual: 28-29 days (2x target). Assigns inherent risk tier and determines DD level.
Due Diligence: 830-question vendor questionnaire
Skip logic enabled. Vendor completion: 30 days (ahead of 42-day target). Internal review: 75 days (down from 144).
"Sourcing department doesn't source"
They manage contract lifecycle. True sourcing activity falls on requesters.
Single-vendor selection common
Business partners bypass competitive sourcing, arriving with pre-selected vendors. Weakens pricing leverage.

RACI: R Procurement | A Governance | C Compliance | I Oversight

Recommendations

NDA-First Gate

Enforce NDA execution before detailed vendor engagement. Single discussion allowed before NDA req...

Impact: Protects IP, aligns Legal and Security requirements

Shift-Left: Self-Service Vendor Questionnaire

Empower requesters with structured RFP tools before formal onboarding. Codify sourcing knowledge ...

Impact: Better vendor selection, reduced single-vendor bias

Tiered Due Diligence

Risk tier determines DD depth. Low-risk: automated checks only. Medium: abbreviated review. High/...

Impact: 60% reduction in DD cycle time for low-risk items

Vendor-Level Aggregation

Single vendor spanning 10+ business units shares vendor-level compliance artifacts. Per-request a...

Impact: Eliminates redundant vendor-level assessments

Cybersecurity

Current State

"Security is our biggest bottleneck"
Understaffed security architecture team. The primary reason ARB SLA takes 2 weeks. EA could "get our SLA way down" without security constraints.
"I need three of me right now"
Request volume increasing continuously, specifically for AI reviews. Current process cannot scale with existing staffing.
No "secure by design" standard
Teams don't know the minimum security controls. Enforcement is "fairly loose." Only a fraction of systems are covered by identity management.
3x vendor contact for AI questions
Technology risk management, cybersecurity, and third-party risk management each contact the vendor independently. Creates redundancy and vendor frustration.

RACI: R A Technical Assessment | C Governance, Compliance

Recommendations

Tiered Security Assessment

DMN-driven. Inputs: risk tier + data classification + AI component. Baseline: automated checks on...

Impact: 70% of requests handled by automated baseline checks

Consolidated Vendor Contact

Single coordinated vendor engagement for security, replacing 3 independent contact points. One qu...

Impact: Eliminates 2/3 of vendor security touchpoints

Security Baseline Definition

Define minimum control requirements (the "secure by design" standard). Publish baseline, elevated...

Impact: Clear expectations for teams and vendors

Parallel Evaluation (not Sequential)

Security runs concurrently with Architecture, Compliance, and Financial reviews. Architecture Lea...

Impact: Eliminates 4-8 weeks of sequential waiting

Enterprise Architecture

Current State

Governance Facilitator role
Dedicated facilitator pre-screens all designs, removes incomplete artifacts from agenda, manages follow-ups. Runs both ARB and SDRB.
ARB: 2-week SLA commitment
Designs are presented, the workflow is started, and reviews are performed by security, EA leaders, and distinguished architects. SDRB: same-day if no issues.
JIRA integration
Every design has a JIRA ticket. Facilitator documents notes and action items. Approvers ask questions in tickets.
AI tooling adoption
Cursor AI for diagram generation. Custom AI agent scans design documents for pattern conformance.

RACI: R A Technical Assessment | C Governance

Recommendations

Scale the Governance Facilitator Model

The Architecture Facilitator role is the blueprint for the broader end-to-end "quarterback." Expa...

Impact: Proven model reduces requester burden organization-wide

Define Clear Committee Scope

Architecture questions asked by architects only. TBC covers business case. AI Governance covers A...

Impact: Eliminates redundant presentations and "hands slapping"

Simultaneous Engagement Model

"When a request comes in, simultaneous engagement of all major players that have a vote." Replace...

Impact: Architecture Lead's own proposal, validated by team

Domain-Based Auto-Assignment

Automated routing based on requesting domain. EA leader to architect by bandwidth. Cross-enterpri...

Impact: Reduces manual triage, balances workload

Compliance

Current State

End-of-year compliance failures common
PII found in uncontrolled systems. Compliance enforcement is "fairly loose." No consistent standard for escalation triggers.
Multiple teams assess compliance independently
Risk, Legal, Privacy, and Compliance each conduct their own review without shared findings or coordinated assessment.
No standardized business case definition
TBC business case overlaps with financial review. Architecture teams assess financials (outside their expertise), and financial teams assess architecture.
Contract deviations not reportable
No structured format for tracking contract deviations. Unknown compliance status for older contracts.

RACI: R A Compliance | C Governance, Oversight

Recommendations

Consolidated Compliance Stream

Single compliance review stream combining Risk, Legal, Privacy, and Compliance assessments. Share...

Impact: One review instead of four independent assessments

Regulatory Annotation Framework

Every process task mapped to applicable regulations. OCC 2023-17, DORA, GDPR/CCPA, SOX, EU AI Act...

Impact: Audit-ready compliance documentation from day one

Contract Deviation Tracking

Structured format for recording and reporting contract deviations in OneTrust. Automated alerting...

Impact: Visibility into compliance status across all contracts

Compliance Quality Gates

Phase-boundary compliance checks at each transition. Automated validation that required artifacts...

Impact: Prevents downstream compliance failures

AI Governance

Current State

60+ items in AI governance queue
Multiple tools submitted for the same function with no alignment to AI strategy. "Crawl, walk, run" messaging rejected by stakeholders.
3 separate AI committees + redundancy with TBC
AI Risk Working Group, AI Cyber Review, AI Risk Review, AI Governance Committee. Working Committee "somewhat redundant with TBC process." Sequential processing adds months.
Overly restrictive AI addendum
Causes extended vendor negotiations. External firms frequently push back on AI terms. Different teams have varying risk acceptance thresholds. EU AI Act landscape "ever-changing, different every other day."
3 additional AI questionnaires "snuck up"
"I have no idea why they're there... I'm really annoyed that they're even in existence." Working to merge into single dataset.

RACI: R A AI Review | C Technical Assessment, Compliance | I Governance

Recommendations

Consolidate 3 AI Committees to 1 Stream

Single AI governance review stream replacing Working Group, Cyber Review, Risk Review, and Govern...

Impact: Eliminates months of sequential committee queuing

AI Fast-Track Pathway

Pre-defined risk posture for common AI scenarios. AI tools run AI Governance + Security in parall...

Impact: 2-week target vs current 6-9 months

Unified AI Questionnaire

Merge 3 "snuck up" questionnaires into single dataset. One vendor engagement, one questionnaire, ...

Impact: 1 vendor contact instead of 3

AI No-Go List

Communicate non-starter models/vendors early. Enterprise Risk Management decision matrix for imme...

Impact: Instant reject for known non-starters, zero wasted effort

Privacy

Current State

Privacy review embedded across multiple teams
Privacy SME participates in vendor risk reviews, AI governance, and compliance. No dedicated privacy review stream in the current process.
PII found in uncontrolled systems
End-of-year compliance failures include PII discovery outside managed environments. Data classification not consistently enforced at onboarding.
Different risk thresholds per team
Different teams have varying risk acceptance thresholds for AI-related privacy concerns. No unified privacy risk classification.
Data hosting/storage questions could be answered earlier
Many RAE questions about data hosting, storage, and transmission could be resolved through proper sourcing events before formal assessment.

RACI: R A Compliance (Privacy) | C Governance, Business

Recommendations

Data Classification at Intake

Mandatory data classification fields in the unified intake form. Determines privacy review depth:...

Impact: Right-sizes privacy effort from day one

Privacy Review as Parallel Stream

Dedicated privacy evaluation branch running concurrently with security and compliance. Not embedd...

Impact: Consistent, predictable privacy assessment timeline

Unified Privacy Risk Classification

Single privacy risk framework replacing team-by-team thresholds. DMN-driven: inputs include data ...

Impact: Eliminates inconsistent risk acceptance decisions

Early Data Residency Screening

Data hosting, storage, and transmission requirements validated at sourcing stage. Prevents late-s...

Impact: Eliminates rework from late privacy findings

Commercial Counsel

Current State

"Dumpster Fire #1": 2 people, 30+ contracts/month
Manual review process sustained for 4 years at unsustainable levels. Contract lifecycle management system requested for 5-6 years without funding.
Contract negotiation: up to 1.5 years
Security exhibits drive the longest negotiations. No reportable format for contract deviations. Unknown compliance status for older contracts.
Sourcing makes quasi-legal decisions
Legal bottleneck forces sourcing team into areas beyond their expertise. Creates risk and potential liability.
Only 2 legal partners for all vendor contracts
Legal capacity hasn't scaled with organizational growth. Solutions often underused after purchase.

RACI: R A Contracting | C Finance, Governance, Compliance | I Oversight

Recommendations

Contract Review Automation

AI-assisted contract review for standard clauses, redlining, and deviation detection. Human revie...

Impact: 50%+ reduction in manual review time for standard contracts

Standardized Contract Templates

Pre-approved MSA/SOW templates by vendor tier and risk classification. Reduces negotiation scope ...

Impact: Days instead of months for template-based contracts

Parallel Contracting

Begin contract negotiation in parallel with due diligence (not sequentially). Architecture Lead a...

Impact: Compresses 2-3 month contracting phase significantly

Contract Deviation Register

Structured tracking of all contract deviations with risk classification, expiry dates, and automa...

Impact: Full visibility into organizational contract risk posture

Third-Party Risk Management

Current State

System landscape fragmented
ServiceNow (intake), Ariba (contracts), OneTrust (risk), Oracle (AP). Data lives in 4+ systems with manual PDF exports between them.
Process ownership vacuum
"Somebody needs to be empowered to say I own this... we will not allow you to do it any other way." START assigned "half a person."
4 people contacted same vendor on data breach
No clear ownership for incident communication. Same questions asked multiple times. "We're all doing the same thing to be helpful."
Tripartite ownership model weak on tech owner
Business Owner identified. Vendor Owner tracked by TPRM team of 6. Product/Tech Owner weakly tracked, often different from requesting team.

RACI: R A Governance | C Procurement, Contracting, Technical Assessment, Compliance | I Oversight

Recommendations

Empowered Process Owner

Dedicated strategic owner with authority and resources: 1 workflow manager, 2-3 project managers....

Impact: Single point of accountability for E2E process

Mandatory Ownership Assignment

Business Owner, Technical Owner, and Support Owner assigned at onboarding completion. Annual owne...

Impact: Resolves the #1 post-onboarding governance gap

Integrated System of Record

Connect ServiceNow, Ariba, OneTrust, and Oracle into unified workflow. Eliminate manual PDF expor...

Impact: Single source of truth across 4+ systems

Incident Response Coordination

Single vendor contact point for security incidents and data breach notifications. Clear ownership...

Impact: Prevents 4 people contacting same vendor on breach

SR 11-7
Model Risk Management

Federal Reserve SR 11-7 is the foundational supervisory guidance for Model Risk Management in U.S. banking. Our governance framework provides ~95% coverage across all four pillars through executable BPMN processes, DMN decision tables, and structured Camunda forms.

8 Compliance Artifacts | 32 DMN Rules | 4 BPMN Processes | ~95% Coverage

Model Inventory & Risk Classification — SR 11-7 §II

Model Inventory Register

Camunda 8 JSON form capturing all SR 11-7 §II required fields per model.

Identification

Model ID (AIMDL-YYYY-NNN), name, purpose, version

Classification

9 model types, 5 development types, 6 lifecycle states

Ownership

Named model owner, developer/vendor, business unit, validator

Risk & Validation

DMN-9 risk tier, EU AI Act class, validation dates, baselines

8 Regulatory Frameworks: SR 11-7, EU AI Act, NIST AI RMF, OCC 2023-17, GDPR, SOX, DORA, FS AI RMF

DMN-9: AI Risk Tier Classification

FIRST hit policy, 15 rules, 5 input dimensions scored 1–10.

Input dimensions: Decision Materiality, Credit/Capital, Model Complexity, Data Sensitivity, Autonomy Level

Tier 1

Any score ≥9 or high combos. Challenger model required. Monthly + continuous monitoring.

Tier 2

Any score ≥5 or moderate combos. Standard validation. Quarterly + semi-annual.

Tier 3

All scores 1–4. Self-assessment with documentation. Annual review.

Legacy Backfill: One-time BPMN process discovers pre-governance models, registers via form, classifies via DMN-9, assesses validation gaps, produces prioritized remediation plan with governance attestation.
Artifacts: model-inventory-register.form, DMN-9-ai-risk-tier-classification.dmn, legacy-model-inventory-backfill.bpmn
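
A minimal sketch of the DMN-9 threshold logic described above (any dimension scored 9 or higher gives Tier 1, any 5 or higher gives Tier 2, all 1-4 gives Tier 3). The real table has 15 rules, including combination rules ("high combos") that are not reproduced here.

```python
from typing import NamedTuple

class ModelScores(NamedTuple):
    decision_materiality: int
    credit_capital: int
    model_complexity: int
    data_sensitivity: int
    autonomy_level: int

def ai_risk_tier(s: ModelScores) -> str:
    if any(v >= 9 for v in s):
        return "Tier 1"   # challenger model required, monthly + continuous monitoring
    if any(v >= 5 for v in s):
        return "Tier 2"   # standard validation, quarterly + semi-annual
    return "Tier 3"       # self-assessment with documentation, annual review

print(ai_risk_tier(ModelScores(9, 4, 6, 3, 2)))  # Tier 1
print(ai_risk_tier(ModelScores(4, 4, 6, 3, 2)))  # Tier 2
print(ai_risk_tier(ModelScores(2, 1, 3, 4, 2)))  # Tier 3
```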

Independent Model Validation — SR 11-7 §IV

2-lane BPMN process (AI Review + Governance) enforcing validator independence from model developers.

Validation Process Flow

Start → Assign Validator → A. Conceptual Soundness → Vendor Model?
  • No (Internal) → B. Process Verification
  • Yes (Vendor) → B. Vendor Doc Review
Merge → C. Outcomes Analysis → Tier 1? → Challenger Evaluation → Consolidate
Governance Lane: Quality Gate → Approval → Approved / Conditional / Failed

Three Validation Components

A. Conceptual Soundness

Model theory, design assumptions, limitations. Explainability assessment for Tier 1. Findings rated Critical / Major / Minor / Observation.

B. Process Verification

Internal models: data quality, implementation testing, integration, process controls.
Vendor models: SOC 2 Type II, model cards, vendor validation reports, API test results.

C. Outcomes Analysis

Backtesting, benchmarking, sensitivity analysis. Establishes accuracy and fairness baselines for Phase 8 drift monitoring.

Tier 1 Requirement: Challenger model evaluation — alternative methodology comparison and benchmark analysis proving primary model superiority.
Artifact: sr11-7-independent-validation.bpmn

AI Model Monitoring Loop — SR 11-7 §V

4-lane BPMN (Automation, AI Review, Governance, Oversight) with DMN-8 driven cadence and event-based triggers.

Monitoring Cycle

DMN-8 Cadence → Event Gateway (Timer Cycle | Material Change | Regulatory Change)
Drift Detection (KS, PSI, JS) → Bias Check (fairness metrics) → Performance Review (human analysis)
Status: Stable / Watch / Warning / Action
Champion-Challenger → Trigger Revalidation → MRM Report → Board Report

DMN-8 Drift Thresholds

Risk Tier | Watch | Warning | Action | Fairness
High + AI | 5% | 10% | 15% | 10%
Limited + AI | 8% | 15% | 20% | 10%
Minimal + AI | 15% | 25% | 30% | 15%
Unacceptable | 3% | 5% | 8% | 5%
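
A minimal sketch of the drift-band classification from the DMN-8 thresholds above, plus a Population Stability Index helper since PSI is one of the named drift statistics. How a raw PSI value is calibrated onto the percentage bands is an assumption left to the monitoring design, not something the deck specifies.

```python
import math

# Thresholds from the DMN-8 table above, expressed as fractions per risk tier.
THRESHOLDS = {               # (watch, warning, action)
    "High + AI":    (0.05, 0.10, 0.15),
    "Limited + AI": (0.08, 0.15, 0.20),
    "Minimal + AI": (0.15, 0.25, 0.30),
    "Unacceptable": (0.03, 0.05, 0.08),
}

def drift_band(risk_tier: str, drift: float) -> str:
    watch, warning, action = THRESHOLDS[risk_tier]
    if drift >= action:  return "ActionRequired"
    if drift >= warning: return "Warning"
    if drift >= watch:   return "Watch"
    return "Stable"

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index over pre-binned population shares."""
    eps = 1e-6
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.25, 0.25, 0.25]          # score distribution at validation
current  = [0.20, 0.22, 0.28, 0.30]          # distribution this monitoring cycle
print(f"PSI = {psi(baseline, current):.3f}")
print(drift_band("High + AI", drift=0.12))   # -> Warning for a High + AI model
```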

Model Status Decision

Continue Monitoring

Loop-back to DMN-8 cadence for next cycle

Retire Model

3+ ActionRequired cycles, regulatory prohibition, or Board decision

Suspend Model

Immediate cessation for critical accuracy or fairness failures

Artifacts: ai-model-monitoring-loop.bpmn, DMN-8-monitoring-cadence-assignment.dmn (17 rules, 7 output columns)

SR 11-7 Artifact Inventory & Coverage

SR 11-7 Section | Artifact(s) | What It Covers | Coverage
§II Model Inventory | model-inventory-register.form, legacy-model-inventory-backfill.bpmn, DMN-9-ai-risk-tier-classification.dmn | 30+ field registration form, one-time backfill for pre-governance models, 5-dimension risk classification (15 rules) | 95%
§III Model Use | All governance BPMNs, 8 DMN decision tables | DMN-first design: business logic externalized in versioned decision tables, not embedded in gateway conditions. Audit-ready. | 90%
§IV Validation | sr11-7-independent-validation.bpmn, ai-governance-review.bpmn | 3-part validation (A/B/C), vendor vs. internal branching, Tier 1 challenger evaluation, quality gate, governance approval | 95%
§V Monitoring | ai-model-monitoring-loop.bpmn, DMN-8-monitoring-cadence-assignment.dmn | Cadence-driven loop, automated drift/bias detection, 4-level escalation, champion-challenger, board MRM reporting, retirement | 95%
8
SR 11-7 Artifacts
~95%
Overall Coverage
32
DMN Rules
DMN-8 (17) + DMN-9 (15)
4
BPMN Processes
Validation + Monitoring + Backfill + Review
Remaining ~5%: Inherent examination-risk ambiguity in vendor model compensating controls (§IV.B) and model retirement trigger precision (§V) — these represent regulatory interpretation boundaries, not identifiable process gaps.

End-to-End Process Orchestrator

Hierarchical model with 9 collapsed sub-processes, decision gateways, vendor pool with message flows, NDA gate, governance committee call activities, and 3 request type routing (Defined Need, Forced Update, Speculative).

End-to-End Orchestrator
87 Forms across 5 phases + vendor pool capture every data point, decision, and approval
14 DMN Tables automate routing: risk tier, pathway, governance, prioritization, security, SLA escalation, contract risk
20-Day Target end-to-end cycle time (down from 75 days) through parallel evaluation and automated gates

Request and Triage

The front door to onboarding. Requesters describe their need, existing solutions are checked for reuse, documentation is gathered, and requests are triaged, classified, and routed.

Refine Request
2-Day SLA DMN: Deal-Killer Pre-Screen

Key Decisions

  • Bypass Gate: Can an existing solution fulfill the need?
  • Completeness Gate: Automated validation before classification
  • Request Type: Defined Need → NDA → Planning | Forced Update → Fast-track | Speculative → Idea Funnel

Forms & Data Collected

8 Forms
  • Review Existing — catalog search, reuse decision, cost avoidance
  • Gather Documentation — business case, data classification, budget authorization
  • Completeness Gate — 6 automated validation checkpoints
  • Initial Triage — strategic alignment score, risk indicators, duplicate detection
  • Classify Request — type determination driving downstream routing

Planning and Routing

Validated requests are analyzed for strategic fit, scored for priority, and routed to Buy, Build, or Enable pathways via DMN decision tables.

Planning and Routing
3-Day SLA DMN: Prioritization Scoring DMN: Pathway Routing

Key Decisions

  • Needs Backlog? Does the request need full backlog prioritization or can it proceed directly?
  • Pathway Routing (DMN): Automated Buy / Build / Enable pathway assignment based on risk tier, complexity, and strategic alignment
  • Priority Scoring (DMN): Composite score driving resource allocation

Forms & Data Collected

2 Forms
  • Preliminary Analysis — business impact, risk appetite alignment, DPIA screening, capacity impact score, vendor affinity check
  • Backlog Prioritization — strategic value, urgency, resource availability, priority tier assignment

Risk and SME Assessments

The most complex phase. Five parallel assessment streams evaluate simultaneously — eliminating the sequential bottleneck that currently takes 75 days.

Risk and SME Assessments
5-Day SLA 5 Parallel Streams Shift-Left: All assessments simultaneous

Tech Architecture

Scalability, integration, enterprise standards

Security

Encryption, MFA, pen testing, SOC 2, incident response

Risk & Compliance

GDPR, OCC 2023-17, DORA, data residency

Financial

TCO, ROI, budget authorization, funding source

Vendor Landscape

Market research, shortlist, viability scoring

Evaluation: Forms and Data Architecture

The heaviest data collection phase: 8 structured forms capture 147+ fields across security, risk, financial, vendor, and AI governance dimensions.

Security Assessment

25 fields across 6 groups

  • Security tier classification (6 levels)
  • Encryption at rest/transit, key management
  • Vulnerability scan results, pen test dates
  • Breach history, incident response SLA
  • SOC 2 Type II, ISO 27001 certifications

Tech Architecture Review

18 fields

  • Architecture pattern, deployment model
  • Integration points, API compatibility
  • Scalability and performance ratings
  • Tech debt and maintainability assessment

Risk, Compliance & Legal

15 fields

  • Regulatory exposure assessment
  • Data residency, cross-border transfer
  • Consent management, DPIA results
  • OCC 2023-17 / DORA alignment

Financial Analysis

16 fields

  • Total cost of ownership model
  • ROI projection, payback period
  • Budget authorization chain
  • Funding source identification

AI Governance Review

32 fields — largest single form

  • EU AI Act risk classification
  • Model transparency, explainability
  • Bias testing, fairness metrics
  • SR 11-7 model risk alignment

Vendor Due Diligence

18 fields + vendor landscape (11 fields)

  • Financial stability, operational resilience
  • Fourth-party risk, subcontractor disclosure
  • Market positioning, viability scoring
  • Weighted evaluation matrix

Governance Committee Voting

Structured deliberation replaces ad-hoc email voting. A reusable call activity invoked from any governance phase provides full audit trail of who voted, why, and what conditions were set.

8
Forms
6
Vote Types
3
Outcomes
Per-Voter
Audit Trail

7-Step Deliberation Lifecycle

Review Brief → Q&A Period → Cast Votes → Tally → Reconcile Conditions → Remediation Loop ↻ → Vote Summary

Reusable Call Activity: invoked from any governance phase
Vote types: Approve, Approve with Conditions, Reject, Abstain, Recuse, Veto
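
A minimal sketch of a tally for the voting call activity. The vote types and the three outcomes come from this slide; the tally rules themselves (any veto fails the request, a simple majority of countable votes, conditions attaching when any approver sets them) are illustrative assumptions rather than the documented policy.

```python
from collections import Counter

def tally(votes: dict[str, str]) -> str:
    counts = Counter(votes.values())
    if counts["Veto"]:
        return "Failed"                       # illustrative: any veto fails the request
    countable = sum(n for vote, n in counts.items()
                    if vote not in ("Abstain", "Recuse"))
    approvals = counts["Approve"] + counts["Approve with Conditions"]
    if countable and approvals > countable / 2:
        return "Approved with Conditions" if counts["Approve with Conditions"] else "Approved"
    return "Failed"

ballots = {"CISO": "Approve", "EA Lead": "Approve with Conditions",
           "Legal": "Approve", "Risk": "Abstain", "Finance": "Reject"}
print(tally(ballots))   # Approved with Conditions (3 of 4 countable votes in favour)
```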

Deliberation-Aware Voting

  • Risk context banner on every ballot
  • Q&A summary table with peer questions and answers
  • Conditional fields: conditions visible only for "Approve with Conditions," veto justification only for "Veto"
  • Structured 3-part rationale capture

Transparent Condition Consolidation

  • Per-voter attribution table
  • Structured list with category, owner, and deadline per condition
  • Explicit mapping rationale from individual to consolidated conditions

Audit-Ready Narrative

  • Per-voter rationale table (who voted what and why)
  • Color-coded outcome banner, decision timeline
  • All variables extractable via Tasklist API

Remediation Loop

  • Previous round vote table shows each voter's position
  • Addressed-concerns checklist maps concerns to actions
  • Evidence attachments for updated documents

Contracting

Two pathways converge here. Buy: refine requirements, proof of concept, contract negotiation with AI-powered Contract Inherent Risk Score (CIRS). Build: define requirements, then the full Product Development Life Cycle (PDLC).

Contracting
7-Day SLA Buy vs Build Pathways

Buy Path Forms

6 Forms
  • Refine Requirements — final spec from evaluation insights
  • Proof of Concept — structured PoC with pass/fail criteria
  • Negotiate Contract — OCC 2023-17, DORA Art. 30, GDPR Art. 28 provisions
  • Review Contract Risk — 8-dimension CIRS score, 07-Deal Killer Screen routing, regulatory gap analysis
  • Finalize Contract — internal approvals, term verification

Build Path Forms

5 Forms (PDLC)
  • Define Build Requirements — functional specs, architecture constraints
  • Architecture Review — enterprise standards validation
  • Development — secure coding, CI/CD pipeline tracking
  • Testing & Integration — test coverage, defect resolution, deployment

UAT and Go-Live

The finish line. User acceptance testing validates the solution, final approval confirms governance compliance, software is onboarded, ownership is assigned, and the request is formally closed.

UAT and Go-Live
3-Day SLA Final Governance Gate

Key Activities

  • Pilot / UAT: Structured testing with pass/fail criteria, defect tracking, user satisfaction scoring
  • Final Approval: Third line of defense governance sign-off based on complete evidence package
  • Condition Verification: For conditional approvals, verify all conditions met before onboarding
  • Dual Ownership: Business Owner + Vendor Owner assigned at completion

Forms & Data Collected

6 Forms
  • Perform UAT — test scenarios, pass/fail rates, critical defects, user satisfaction (1-10)
  • Final Approval — governance sign-off, risk acceptance, audit evidence
  • Onboard Software — catalog entry, access provisioning, monitoring setup
  • Assign Ownership — Business Owner + Vendor Owner designation
  • Close Request — lessons learned, satisfaction survey, archive

Execute NDA(s)

Legal prerequisite gate. Non-disclosure agreements are executed before sensitive vendor information is exchanged. Tracks NDA status, routes to legal for execution, and verifies completion before proceeding.

Execute NDA(s)
Legal Gate Contracting Lane

NDA Processing Flow

Detailed sub-process for NDA lifecycle management. Handles NDA generation, internal review routing, vendor signature collection, and fully-executed document storage. Supports multiple NDA types (mutual, one-way) with configurable review paths.

NDA Processing Flow
Phase 1 Contracting Lane Vendor Interaction

Vendor Sourcing

Market research and vendor shortlisting sub-process. Evaluates the vendor landscape, scores candidates against requirements, and produces a shortlist for detailed due diligence evaluation.

Vendor Sourcing
Phase 3 Market Research Vendor Shortlisting

Per-Vendor Evaluation

Individual vendor assessment flow within the due diligence phase. Each shortlisted vendor undergoes parallel evaluation across technical, security, compliance, and financial dimensions with structured scoring and consolidated recommendation output.

Per-Vendor Evaluation
Phase 3 Parallel Assessment Scoring & Recommendation

Product Development Life Cycle (PDLC)

Nested sub-process within Risk Assessment and Contracting. Covers the full Build pathway: architecture review, development, testing with retry loops, and integration verification before UAT.

Product Development Life Cycle
Build Path Nested Sub-Process Test Retry Loop

Vendor Pool: The External Partner Journey

Running in parallel with the enterprise process, vendors follow their own structured journey. 10 tasks span intake through deployment support, with message flows synchronizing handoffs at key milestones.

Vendor Intake
Qualification, sanctions screening
Proposal
Commercial & technical response
Tech Demo
Structured evaluation
Security & Compliance
Questionnaire + certifications
Contract
Review, negotiate, sign
Deploy & Close
Onboarding, support, close
10 Vendor Forms
  • Vendor Intake — legal entity, tax ID, ownership, sanctions screening, insurance
  • Security Questionnaire — SOC 2, pen testing, encryption, incident response
  • Compliance Documentation — regulatory certifications, DPA, sub-processor disclosures
  • Contract Execution — MSA, schedules, authorized signatory verification

Message Flow Handoffs

→ Due Diligence Request

Enterprise sends RFP/assessment requirements to vendor

← Vendor Response

Vendor submits proposal and completed questionnaires

→ Contract Draft

Enterprise sends negotiated contract for vendor review

← Signed Contract

Vendor returns executed contract with authorized signature

Vendor Questionnaire: Self-Service Vendor Pre-Screening (SP0)

Before formal intake, requesters complete a lightweight 9-step wizard that identifies deal-killers early and pre-populates downstream forms. This eliminates wasted reviewer capacity on non-starters.

9-Step Pre-Screening Wizard

1. Vendor Info
2. Business Case
3. Risk Indicators
4. Data Classification
5. Compliance Screening
6. Financial Overview
7. Vendor Context
8. AI Classification
9. Review and Submit

Deal-Killer Detection (07-Deal Killer Screen)

Real-time alerts flag non-starters before any reviewer time is consumed (a minimal rule sketch follows the list):

  • Active regulatory sanctions against the vendor
  • Unresolvable data residency conflicts
  • Prohibited AI use case categories (EU AI Act)
  • Missing mandatory certifications for data tier

Pre-Screening to Formal Intake Flow

SP0: Requester Vendor Questionnaire Wizard → 07-Deal Killer Screen (Deal-Killer Gate)
  • Pass → SP1: Formal Intake (variables pre-populated)
  • Fail → Request Rejected (before reviewer time is spent)
Vendor Questionnaire is embedded as a collapsed sub-process in the v20 onboarding model (Activity_0j7ifzh in SP1)

RACI Matrix by Governance Topic

Topic | Business | Governance | Finance | Procurement | Contracting | Technical | AI Review | Compliance | Oversight | Automation | Vendor
Intake | R/A | C | C | - | - | - | - | C | - | - | -
Prioritization | C | R/A | C | - | - | C | - | - | - | - | -
Funding | - | C | R/A | C | - | - | - | - | I | - | -
Sourcing | - | A | - | R | - | C | - | C | I | - | R
Cyber | - | C | - | - | - | R/A | - | C | - | - | C
EA | - | C | - | - | - | R/A | - | - | - | - | C
Compliance | - | C | - | - | - | - | - | R/A | C | - | C
AI Governance | - | I | - | - | - | C | R/A | C | - | - | -
Privacy | C | C | - | - | - | - | - | R/A | - | - | -
Comm. Counsel | - | C | C | - | R/A | - | - | C | I | - | R
TPRM | - | R/A | - | C | C | C | - | C | C | C | C

R = Responsible | A = Accountable | C = Consulted | I = Informed  |  Finance = new functional area (split from Governance per discovery findings)

Governance Topic Coverage Across Onboarding Phases

Each cell below lists the tasks that address the governance topic in that phase.

Governance Topic | Request and Triage | Planning and Routing | Evaluation and Due Diligence | Contracting and Build | UAT and Go-Live | Vendor Pool
Intake
Review Existing, Leverage Existing, Gather Docs, Submit Request, Initial Triage
Close Request
Prioritization
Preliminary Analysis, Backlog Prioritization, Pathway Routing
Funding
Gather Documentation
Preliminary Analysis
Financial Analysis
Negotiate Contract
Sourcing
Assess Vendor Landscape, Vendor DD, Evaluate Response
Refine Requirements, Perform PoC
Vendor Intake, Proposal Submission, Contract Execution
Cyber
Security Assessment, Tech Architecture Review
Tech Risk Evaluation, Perform PoC
Integration Testing
Vendor Security Questionnaire
EA
Tech Architecture Review, Perform PoC
Tech Risk Evaluation, PDLC Sub-Process
Onboard Software
Vendor Demo
Compliance
Initial Triage
Risk Compliance Legal, Vendor DD
Finalize Contract
Final Approval, UAT
Vendor Compliance Review
AI Governance
Preliminary Analysis (AI flag)
AI Governance Review, Risk Compliance Legal
Privacy
Gather Documentation
Preliminary Analysis
Risk Compliance Legal
Comm. Counsel
Negotiate Contract, Finalize Contract
Vendor Contract Review, Contract Execution
TPRM
Initial Triage
Risk Compliance Legal, Vendor Landscape, Vendor DD, Evaluate Response
Tech Risk Evaluation, Negotiate Contract, Finalize Contract
Final Approval, Onboard Software
Vendor Intake, Vendor Onboarding, Close Request
Legend: Primary (2+ tasks) | Secondary (1 task) | No coverage

DMN Decision Tables: Risk Tier & SLA Breach Escalation

01-Risk Classification: Risk Tier Classification

UNIQUE Phase 2

5 risk dimensions (1-10) assign one of 4 tiers. Unacceptable terminates immediately.

Data Sensitivity | Regulatory Exposure | Operational Criticality | AI Complexity | 3rd-Party Dependency | Risk Tier | Eligible?
≥ 9 | ≥ 9 | - | - | - | Unacceptable | No
≥ 9 | - | ≥ 9 | - | - | Unacceptable | No
- | ≥ 9 | ≥ 9 | - | - | Unacceptable | No
≥ 9 | - | - | ≥ 8 | - | Unacceptable | No
- | - | - | ≥ 9 | ≥ 9 | Unacceptable | No
≥ 7 | ≥ 7 | - | - | - | High | No
≥ 7 | - | ≥ 7 | - | - | High | No
- | ≥ 7 | - | ≥ 6 | - | High | No
- | - | ≥ 7 | - | ≥ 7 | High | No
[4..7) | [4..7) | - | - | - | Limited | Yes
[4..7) | - | [4..7) | - | - | Limited | Yes
- | [4..7) | [4..7) | - | - | Limited | Yes
- | - | [4..7) | [3..6) | [4..7) | Limited | Yes
< 4 | < 4 | < 4 | < 3 | - | Minimal | Yes
< 4 | < 4 | < 4 | - | < 4 | Minimal | Yes

04-SLA Escalation: SLA Breach Escalation

FIRST Cross-Cutting

4-level escalation ladder triggered by SLA timer boundary events.

Breached Phase | Days Beyond SLA | Risk Tier | Escalation Action | Notify | Auto-Action
Phase3, Phase4 | ≥ 15 | High | Auto-Reject | Board + Sponsor | Terminate process
Phase3, Phase4 | ≥ 10 | High | Board-Alert | Advisory Board | Escalation meeting
Any | ≥ 10 | Limited | Escalate-Governance | Governance Lead | Priority reassignment
Phase1, Phase2 | ≥ 5 | High | Escalate-Governance | Governance Lead | Priority reassignment
Any | ≥ 5 | Limited | Notify-Sponsor | Business Sponsor | Status update
Phase5 | ≥ 5 | - | Notify-Sponsor | Business Sponsor | Status update
Phase1, Phase2 | ≥ 3 | Minimal | Notify-Sponsor | Business Sponsor | Email reminder
Phase3 | ≥ 7 | - | Escalate-Governance | Governance Lead | Vendor follow-up
Phase4 | ≥ 5 | Minimal | Notify-Sponsor | Business Sponsor | Committee scheduling
Any | ≥ 2 | - | Monitor | Process Owner | Dashboard alert
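
A minimal sketch of how the FIRST hit policy evaluates this escalation ladder: rules are checked top to bottom and the first match wins. The rule data mirrors the table above; the tuple encoding is illustrative.

```python
# Each rule: (phases, min days beyond SLA, risk tier, action, notify, auto-action).
# None means "Any". Order matters: FIRST hit policy stops at the first match.
RULES = [
    ({"Phase3", "Phase4"}, 15, "High",    "Auto-Reject",         "Board + Sponsor",  "Terminate process"),
    ({"Phase3", "Phase4"}, 10, "High",    "Board-Alert",         "Advisory Board",   "Escalation meeting"),
    (None,                 10, "Limited", "Escalate-Governance", "Governance Lead",  "Priority reassignment"),
    ({"Phase1", "Phase2"},  5, "High",    "Escalate-Governance", "Governance Lead",  "Priority reassignment"),
    (None,                  5, "Limited", "Notify-Sponsor",      "Business Sponsor", "Status update"),
    ({"Phase5"},            5, None,      "Notify-Sponsor",      "Business Sponsor", "Status update"),
    ({"Phase1", "Phase2"},  3, "Minimal", "Notify-Sponsor",      "Business Sponsor", "Email reminder"),
    ({"Phase3"},            7, None,      "Escalate-Governance", "Governance Lead",  "Vendor follow-up"),
    ({"Phase4"},            5, "Minimal", "Notify-Sponsor",      "Business Sponsor", "Committee scheduling"),
    (None,                  2, None,      "Monitor",             "Process Owner",    "Dashboard alert"),
]

def escalate(phase: str, days_beyond_sla: int, risk_tier: str):
    for phases, min_days, tier, action, notify, auto in RULES:
        if phases is not None and phase not in phases:
            continue
        if days_beyond_sla < min_days:
            continue
        if tier is not None and tier != risk_tier:
            continue
        return action, notify, auto            # FIRST hit: stop at the first match
    return None                                # within SLA, no escalation

print(escalate("Phase3", 12, "High"))    # -> Board-Alert
print(escalate("Phase1", 4, "Minimal"))  # -> Notify-Sponsor (email reminder)
```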

DMN Decision Tables: Pathway, Governance, Prioritization, Security, Contract Risk

02-Pathway Routing: Pathway Routing

UNIQUE Phase 1

Routes to Fast-Track, Standard-Buy, Standard-Build, or Hybrid based on 6 inputs.

Reuse | Build/Buy | Capacity | Strategic | TtV | Vendor? | Pathway
≥ 8 | "Buy" | - | - | ≥ 7 | true | Fast-Track
≥ 7 | "Buy" | - | ≥ 7 | ≥ 7 | true | Fast-Track
- | "Buy" | - | - | - | - | Standard-Buy
- | "Build" | ≥ 6 | - | - | - | Standard-Build
- | "Hybrid" | - | - | - | - | Hybrid
- | "Build" | < 6 | - | - | - | Hybrid

03-Governance Gate: Governance Routing

UNIQUE Phase 4

Routes governance review to Advisory Board, Committee, Fast Path, or Auto-Approve based on risk tier and pathway.

Risk Tier | Pathway | Review Body | SLA
Unacceptable | - | Rejected | -
High | - | Advisory Board | 7 days
Limited | Standard-* | Committee | 5 days
Limited | Fast-Track | Fast Path | 2 days
Minimal | Fast-Track | Auto-Approve | 1 day
Minimal | Standard-* | Fast Path | 2 days

05-Priority Scoring: WSJF Prioritization Scoring

FIRST Phase 2

7 inputs produce priority tier (P1/P2/P3) AND continuous WSJF score for true queue ranking. Cost of Delay ≥ $100K/week auto-escalates.

Impact | Urgency | Risk | CoD $/wk | Job Size | WSJF | Tier
- | - | - | ≥ 100K | - | 99 | P1
- | ≥ 9 | - | - | - | 95 | P1
≥ 8 | - | High | - | - | 90 | P1
≥ 8 | ≥ 7 | - | - | - | 88 | P1
- | ≥ 5 | - | - | ≤ 5 | 70 | P2
≥ 6 | - | High | - | [1..4] | 65 | P2
- | - | - | - | - | 40 | P3

06-Security Routing: Security Assessment Routing

UNIQUE Phase 3

Routes security review depth based on risk tier, data classification, and AI component presence.

Risk Tier | Data Class. | AI Component? | Assessment | SLA
High | Confidential | true | Major | 10 days
High | Confidential | false | Major | 10 days
High | Internal | true | Elevated | 5 days
Limited | Confidential | - | Elevated | 5 days
Limited | Internal | true | Elevated | 5 days
Limited | Internal | false | Baseline | Same-day
Minimal | - | false | Baseline | Same-day
Minimal | - | true | Elevated | 5 days
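
A minimal sketch of the 06-Security Routing logic above. Combinations the table leaves undefined (for example High risk with Internal data and no AI component) are filled conservatively here, which is an assumption rather than documented behaviour.

```python
def security_assessment(risk_tier: str, data_class: str, has_ai: bool) -> tuple[str, str]:
    if risk_tier == "High":
        # Confidential data drives a Major review; other High cases get Elevated
        return ("Major", "10 days") if data_class == "Confidential" else ("Elevated", "5 days")
    if risk_tier == "Limited":
        if data_class == "Confidential" or has_ai:
            return ("Elevated", "5 days")
        return ("Baseline", "Same-day")
    # Minimal tier: automated baseline checks unless an AI component is present
    return ("Elevated", "5 days") if has_ai else ("Baseline", "Same-day")

print(security_assessment("High", "Confidential", True))   # ('Major', '10 days')
print(security_assessment("Limited", "Internal", False))   # ('Baseline', 'Same-day')
```
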
07-Deal Killer Screen: Contract Risk Routing

FIRST Phase 4

8-dimension Contract Inherent Risk Score (CIRS) drives 4-way routing: AutoApprove (≤3.0) | EnhancedReview (3.1-5.5) | SeniorGovernance (5.6-7.5) | RejectRenegotiate (>7.5 or immutable violations). Inputs: CIRS score, immutable violations, regulatory gaps, risk tier.
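
A minimal sketch of the 4-way CIRS routing bands described above; the band thresholds come from this slide, while the scoring of the eight dimensions themselves is out of scope here.

```python
def contract_routing(cirs: float, immutable_violations: bool) -> str:
    # Immutable contract violations force renegotiation regardless of the score
    if immutable_violations or cirs > 7.5:
        return "RejectRenegotiate"
    if cirs > 5.5:
        return "SeniorGovernance"
    if cirs > 3.0:
        return "EnhancedReview"
    return "AutoApprove"

print(contract_routing(2.4, False))  # AutoApprove
print(contract_routing(6.1, False))  # SeniorGovernance
print(contract_routing(4.0, True))   # RejectRenegotiate (violation overrides score)
```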

Onboarding Bottleneck Analysis: Before vs. After

Key delay drivers identified during stakeholder interviews mapped against projected improvements from the proposed governance model.

75 Days → 20 Days

Total E2E cycle reduction through parallel processing and DMN-driven routing

18 → 4 Committees

Consolidated governance committees with tiered review paths

335 Assessments/Year

Current volume processed through the new automated intake and triage pipeline

60+ AI Queue → 0

Backlog eliminated with dedicated AI governance lane and SR 11-7 framework

Measurement Dashboard: Day 120 Targets

Key performance indicators with current baselines and Day 120 targets. These metrics enable data-driven proof of improvement.

Metric | Baseline (Current) | Day 120 Target
E2E cycle time (standard) | 6-9 months | 60-90 days
E2E cycle time (Enable) | Same as standard | 30 days
E2E cycle time (AI fast-track) | Same as standard | 14 days
RAE completion | 28-29 days | 14 days
DD internal review | 75 days | 30 days
Security review (Baseline tier) | 2 weeks | Same-day (automated)
Form completion rate | Unknown | 90%+ first-pass
Intake rejection (deal-killer) | 0% (no pre-screen) | Measured
Contract cycle time | Varies (up to 1.5yr) | 90 days standard
Business Council decision | Monthly + email | 48-hour async SLA
Requester satisfaction | Unmeasured | NPS baseline established
Queue transparency | None | Real-time dashboard

Competitive Analysis

SLA Governance on Camunda 8
vs. ServiceNow

The customer's governance lifecycle spans 8+ specialized systems. No single platform can replace the ecosystem. SLA on Camunda 8 is the orchestration layer that connects them all — with auditable BPMN/DMN models regulators can inspect directly.

8+ Systems Orchestrated | 160 Releases Shipped | 18 Phase-Level SLA Timers | 24/24 E2E Scenarios Passed
Already Running

SLA on Camunda 8

Process models, 87 forms, dashboard, Jira integration, Vendor Questionnaire, Vendor Portal — in production on Camunda 8 Cloud.

Start from Zero

ServiceNow Equivalent

No BPMN. No DMN. No multi-pool. Decompose into dozens of disconnected Flow Designer workflows. Rebuild 14 DMN tables in proprietary format.

24/24

E2E Scenarios Passed

Full gateway coverage — every pathway (fast-track, buy, build, hybrid) exercised against Camunda 8 Cloud.

8+ Systems

One Governed Lifecycle

Jira, OneTrust, Ariba, Oracle, AppFox, iManage, Box, ServiceNow — orchestrated by a single auditable BPMN process.

This is an orchestration problem, not a platform problem.

A typical software onboarding request touches Jira, OneTrust, Ariba, Oracle, Confluence/AppFox, iManage, Box, and ServiceNow. No single platform owns this end-to-end. The question is not "which system?" but "how do we coordinate work across 8+ systems with SLA enforcement, regulatory traceability, and decision transparency?" That is what Camunda 8 does.

Customer Technology Landscape

Each team has invested in tooling optimized for their domain. None of these are going to change.

TPRM & Risk
OneTrust
Vendor risk assessments, questionnaires, continuous monitoring
Technical SME Tasks
Jira
Task assignment, tracking, completion for all technical evaluation tracks
Product Management
Jira
Intake coordination, task assignment, progress tracking
Sourcing & Procurement
Ariba
NDAs, questionnaires, RFPs, vendor contracts, supplier management
EA Approval Workflows
AppFox (Confluence)
Architecture review approvals, decision documentation
Legal Contracting (WIP)
iManage
Contract drafting, redlining, version management
Legal Contracts (SOR)
Box
Fully executed contract storage, system of record
Finance
Oracle
Financial analysis, budget approval, vendor payments
ITSM
ServiceNow
Incidents, changes, configuration items
Key pain point: The Ariba-to-Oracle integration is currently manual — financial approvals and vendor payment workflows require hand-offs between systems with no automated bridge.

The Virtual Application Layer

SLA is the application that sits above all stovepipe systems — it doesn’t replace any of them.

People continue working in the systems they know. SLA provides what no individual system can: end-to-end visibility, SLA enforcement, and regulatory compliance evidence spanning all 8+ systems.

Capability | How It Works | Why No Single System Can Do This
End-to-End Visibility | Single BPMN process orchestrates work across Jira, OneTrust, Ariba, Oracle, iManage, Box, Confluence, ServiceNow | Each system only sees its own slice of the lifecycle
SLA Enforcement | 18 phase-level SLA timers with 3 severity tiers (baseline/elevated/major), FEEL-configurable durations, Admin UI for live configuration | SLA timers spanning systems require an orchestrator outside any single system
Regulatory Evidence | BPMN model + Camunda Operate = proof that controls are in place and operating | Regulators need the "evidence package" across the full lifecycle
Decision Transparency | DMN tables document every routing decision — versioned, auditable, portable | Embedded rules in ServiceNow, OneTrust, or Ariba are platform-locked and opaque
Audit Trail | Every system interaction, decision, and approval timestamped in one process history | ServiceNow logs cover ServiceNow; Jira logs cover Jira; neither covers the hand-off
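
A minimal sketch of what one orchestration hand-off looks like from the SLA side: a job worker that Camunda 8 invokes for a "create OneTrust assessment" service task, calls the vendor-risk system, and returns process variables that join the end-to-end audit trail. The endpoint, payload, and variable names are hypothetical, and the Zeebe client wiring (for example via a community client library) is omitted.

```python
import requests  # generic HTTP call; the real OneTrust endpoint and auth differ

def create_onetrust_assessment(job_variables: dict) -> dict:
    """Handler for a hypothetical 'onetrust-create-assessment' service task."""
    response = requests.post(
        "https://onetrust.example.internal/api/assessments",   # hypothetical URL
        json={
            "vendor": job_variables["vendorName"],
            "template": "TPRM-DueDiligence",
            "requestId": job_variables["requestId"],
        },
        timeout=30,
    )
    response.raise_for_status()
    # Variables returned here become part of the single end-to-end audit trail
    return {"onetrustAssessmentId": response.json()["id"],
            "assessmentStatus": "IN_PROGRESS"}
```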

What’s Built

Tangible deliverables proving the platform isn’t theoretical

10
Web Applications
Task Worker, Dashboard, Vendor Questionnaire, Admin UI, Vendor Portal, Concierge, Quarterback Toolkit, Current/Future State Explainers, Jira Sync Status
24/24
E2E Scenarios Passed
Full gateway coverage against Camunda 8 Cloud — every pathway exercised
87
Camunda 8 Forms
Dynamic JSON forms with auto-fill defaults and conditional visibility
160
Releases
CalVer high-velocity iteration — multiple releases per day
18
Phase-Level SLA Timers
3 tiers (baseline/elevated/major), FEEL-configurable, Admin UI for live tuning
21
DMN Decision Tables
FEEL-based, versioned, portable — risk tiering, pathway routing, governance review

Head-to-Head Comparison

Process modeling, cross-system integration, and governance capability

Process Modeling & Execution

BPMN 2.0 and DMN 1.3 vs. proprietary designers and embedded rules.

Capability | SLA (Camunda 8) | ServiceNow | Edge
Process notation | BPMN 2.0 (ISO standard) | Proprietary Flow Designer / Playbooks | SLA
Decision tables | DMN 1.3 with FEEL, hit policies, DRDs | Proprietary Decision Builder (no FEEL, no hit policies) | SLA
Multi-lane modeling | 9+1 swim lanes with cross-lane routing | No lanes/pools concept in any designer | SLA
Multi-pool (vendor) | Enterprise + Vendor pool with message flows | No equivalent — manual API integration | SLA
Parallel gateways | Full AND/OR/XOR with token semantics | Parallel tasks exist but no formal token model | SLA
Long-running state | Zeebe maintains state across months | Transient — workarounds for long processes | SLA
Process versioning | Multiple versions running, in-flight migration | Changes affect all in-flight work | SLA
Portability | BPMN XML runs on any compliant engine | Locked to ServiceNow platform | SLA
Low-code citizen dev | Camunda Modeler requires BPMN knowledge | Flow Designer accessible to non-technical users | SN

Cross-System Integration

The decisive factor — connecting 8+ specialized systems into one governed lifecycle.

System | SLA (Camunda 8) | ServiceNow | Edge
Jira | Built — bi-directional task sync, in production | IntegrationHub spoke (separate license) | SLA
OneTrust | Service task creates assessment, retrieves via API | No native spoke; custom REST | SLA
Ariba | Service task triggers NDA/RFP, polls for completion | SAP Ariba spoke limited to basic PO/invoice | SLA
Oracle | Bridges Ariba-Oracle gap (currently manual) | Oracle spoke doesn't solve the gap | SLA
AppFox | Triggers AppFox approval, waits for outcome | Confluence spoke is read-only; no AppFox | SLA
iManage | Creates matter, tracks redlining status | No native integration | SLA
Box | Stores contract, updates metadata | Box spoke (basic CRUD) | Neutral
ServiceNow | Camunda+ServiceNow connector (GA — shipped November 2025) | Native — it IS ServiceNow | SN
Critical insight: ServiceNow can integrate with some of these systems via IntegrationHub spokes, but it cannot orchestrate them within a single auditable process model. Each spoke is a point-to-point integration. Camunda models the entire 6-phase lifecycle as one BPMN process.

AI Opportunity Map

Every task categorized: Deterministic (D), AI-Assisted (AI), or Human Judgment (H)

Only about 10% of tasks (5 of 44) genuinely require human judgment; roughly 90% have automation potential (deterministic + AI-assisted). This represents the target state, not the current implementation.

44 Tasks
15 Deterministic (34%)
24 AI-Assisted (55%)
5 Human (11%)

Estimated 40–60% reduction in manual effort across evaluation, contracting, and documentation phases.

SP1 — Refine Request & Triage

3 Deterministic | 6 AI-Assisted | 1 Human | ~60% effort reduction
[AI] Portfolio matching — LLM finds existing software matches
[AI] Document extraction — LLM extracts structured metadata
[AI] Triage recommendation — LLM pre-scores against criteria
[AI] Auto-classification — risk tier, data sensitivity, regulatory
[D] Deal Killer Pre-Screen — DMN table, auditable
[AI] Quarterback assistance — conversational guidance
[AI] Vendor response analysis — scores proposals, flags gaps
[D] NDA status check — automated against Ariba
[D] SLA timer events — deterministic escalation
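
For illustration, the Deal Killer Pre-Screen above can be read as a FIRST-hit decision table. The sketch below shows the shape of that logic in Python; the rule conditions and rejection reasons are assumed examples, not the contents of the production DMN table.

```python
# Illustrative sketch of a deal-killer pre-screen expressed as a FIRST-hit
# decision table in Python. The rule conditions and messages below are
# assumptions for illustration; the production DMN table defines the real rules.

def deal_killer_prescreen(intake: dict) -> dict:
    """Return the first matching deal-killer rule, or a pass result."""
    rules = [
        # (rule id, predicate over the intake form, rejection reason)
        ("DK-01", lambda i: i.get("vendor_sanctioned") is True,
         "Vendor appears on a sanctions / restricted-party list"),
        ("DK-02", lambda i: i.get("data_residency") not in {"US", "EU"},
         "Data residency outside approved regions"),
        ("DK-03", lambda i: i.get("handles_pii") and not i.get("soc2_report_available"),
         "Processes PII with no SOC 2 report available"),
        ("DK-04", lambda i: i.get("requires_production_network_access")
                            and not i.get("supports_sso"),
         "Production network access without SSO support"),
    ]
    for rule_id, predicate, reason in rules:  # FIRST hit policy
        if predicate(intake):
            return {"outcome": "BLOCK", "rule": rule_id, "reason": reason}
    return {"outcome": "PROCEED", "rule": None, "reason": None}


# Example: a request blocked on day 1, before any reviewer capacity is consumed.
print(deal_killer_prescreen({"handles_pii": True, "soc2_report_available": False,
                             "data_residency": "US"}))
```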

SP2 — Planning & Routing

3 Deterministic | 1 AI-Assisted | 1 Human | ~40% effort reduction
[AI] Business case drafting from intake data + market intelligence
[H] Backlog prioritization — human decision (AI suggests ranking)
[D] DMN Pathway Routing — deterministic track assignment
[D] Prioritization Scoring — DMN-driven

SP3 — Evaluation & Due Diligence (9 Parallel Tracks)

3 Deterministic | 10 AI-Assisted | 1 Human | ~55% effort reduction
[AI] Security assessment from SOC 2 reports, gap identification
[D] Security Routing DMN — Baseline vs. Elevated/Major
[AI] Architecture fit analysis against enterprise standards catalog
[AI] Risk narrative from OneTrust data + vendor questionnaires
[AI] Regulatory mapping (OCC, DORA, GDPR) gap identification
[AI] DPIA generation from data flow descriptions
[AI] Vendor intelligence — aggregated public data, risk profile
[AI] Legal risk assessment from litigation/regulatory history
[AI] TCO modeling from vendor proposals + Oracle historical data
[AI] Market analysis — competitive landscape, pricing benchmarks
[AI] AI risk classification (EU AI Act, SR 11-7)
[D] Parallel gateway synchronization of all 9 tracks

SP4 — Contracting & Build

3 Deterministic | 5 AI-Assisted | 1 Human | ~50% effort reduction
[AI] Contract redline analysis — the marquee AI use case
[AI] Deviation risk scoring against OCC/DORA requirements
[AI] Contract completeness check (audit rights, termination, DPA)
[AI] AI-generated test cases and coverage analysis (PDLC)
[H] Development (AI assists, human owns implementation)
[D] D3/D7/D11 vendor response reminders
[D] Receive task with SLA boundary timer

SP5 — UAT & Go-Live

3 Deterministic | 2 AI-Assisted | 1 Human | ~45% effort reduction
[AI] Test result analysis — summarize UAT, generate go/no-go rec
[H] Final Governance Approval (AI provides decision package)
[D] Automated provisioning and configuration
[AI] Ownership recommendation based on org structure
[D] Automated notification and record closure

Contract AI — The Marquee Use Case

Highest-ROI AI opportunity: reduce contract review from 5–15 days to 1–3 days

Contract negotiation involves unstructured documents, domain expertise, high stakes, and significant manual effort. Legal focuses on the 20% requiring judgment; AI handles the other 80%.

1

Vendor Returns Redlined Contract

Received from iManage. Camunda service task triggers AI analysis pipeline.

2

AI: Clause Extraction & Classification

Immutable clauses (regulatory: audit rights, data protection, termination for breach). Flexible clauses (commercial: liability caps, indemnification). Standard clauses (force majeure, governing law).

3

AI: Deviation Analysis Against Playbook

Flag each redlined clause: accept / reject / counter-propose. Risk score per deviation with regulatory cross-reference (OCC 2023-17 §60, DORA Article 30).

4

AI: Negotiation Intelligence

Historical success rates for similar clause negotiations. Vendor-specific patterns. Market benchmarks vs. industry standard.

5

Human Review: Legal Team Decides

AI-generated redline summary: "Vendor proposes X change to liability cap — deviates from standard by Y, risk: HIGH, recommendation: reject/counter." Legal accepts, modifies, or overrides each recommendation.

6

AI: Generate Counter-Proposal Draft

Produces redlined response with legal team's decisions applied. Maintains clause-level audit trail in Box (system of record).

7

Camunda: Track, Enforce, Escalate

D3/D7/D11 reminder boundary timers on vendor response. SLA breach end event if no response within timeframe. Full traceability.

System integration flow: iManage (source) → LLM service task (analysis) → Camunda user task (human review) → LLM service task (counter-proposal) → Box (storage) → Camunda (audit trail, SLA).
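
A compressed sketch of the deviation-analysis step (steps 2-3): each redlined clause arrives already categorized and scored by the upstream LLM steps and is routed to an accept / counter-propose / reject recommendation for the Legal review task. The clause categories follow the pipeline above; the risk threshold and the regulatory citation strings in the code are illustrative assumptions.

```python
# Minimal sketch of the deviation-analysis routing. The specific risk
# threshold and regulatory references here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class RedlinedClause:
    clause_id: str
    category: str            # "immutable" | "flexible" | "standard"
    deviation_summary: str   # produced upstream by the LLM extraction step
    risk_score: float        # 0.0-1.0, produced by the LLM deviation scorer


def recommend(clause: RedlinedClause) -> dict:
    """Map a scored deviation to accept / counter-propose / reject."""
    if clause.category == "immutable":
        # Regulatory clauses (audit rights, data protection, termination for
        # breach) are never negotiated away; always rejected with a citation.
        return {"clause": clause.clause_id, "action": "reject",
                "rationale": clause.deviation_summary,
                "reference": "OCC 2023-17 / DORA Art. 30"}
    if clause.risk_score >= 0.7:
        return {"clause": clause.clause_id, "action": "counter-propose",
                "rationale": clause.deviation_summary, "reference": None}
    return {"clause": clause.clause_id, "action": "accept",
            "rationale": clause.deviation_summary, "reference": None}


redlines = [
    RedlinedClause("audit-rights", "immutable", "Vendor strikes on-site audit right", 0.95),
    RedlinedClause("liability-cap", "flexible", "Cap lowered from 24 to 12 months of fees", 0.80),
    RedlinedClause("governing-law", "standard", "Delaware instead of New York", 0.20),
]
for r in redlines:
    print(recommend(r))  # feeds the Camunda user task where Legal accepts or overrides
```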

AI Model Flexibility

Open orchestration vs. walled garden

Dimension | SLA on Camunda 8 | ServiceNow | Edge
Default AI model | None — customer chooses | Proprietary Now LLM | SLA
External model support | Any OpenAI-compatible API + native Anthropic/Bedrock/Azure | Azure OpenAI, Vertex AI, Bedrock + custom model import (Zurich) — still no self-hosted | SLA
Self-hosted models | Supported via custom Zeebe worker | Not supported | SLA
Cost model | Direct to provider (transparent, market-rate) | Consumption "assists" (30–45% licensing add-on) | SLA
AI portability | Change connector config; BPMN unchanged | AI investment locked to platform | SLA
MCP support | Published September 2025 | Unconfirmed as of March 2026 | SLA
Multi-model orchestration | Different models for different tasks in one process | Single provider per instance | SLA

When the customer's AI strategy evolves — new models, new providers, new regulatory requirements — which platform makes that evolution a configuration change vs. a re-implementation?
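
As a sketch of what "configuration change" means in practice, the snippet below routes different task types to different providers through one OpenAI-compatible interface; swapping a model is an edit to the routing table, not a rebuild. The provider routing, model identifiers, and environment variable names are illustrative assumptions.

```python
# Sketch of multi-model orchestration: different process steps call different
# providers through one OpenAI-compatible interface. Routing table, model
# names, and environment variables are illustrative assumptions.
import os
import requests

MODEL_ROUTING = {
    # task type           -> (base URL env var,          model name)
    "contract-redline":      ("ANTHROPIC_COMPAT_BASE_URL", "claude-sonnet"),
    "vendor-intelligence":   ("OPENAI_BASE_URL",           "gpt-4o"),
    "dpia-draft":            ("SELF_HOSTED_BASE_URL",      "llama-3-70b"),
}


def run_ai_task(task_type: str, prompt: str) -> str:
    """Call whichever provider the routing table assigns to this task type."""
    base_env, model = MODEL_ROUTING[task_type]
    base_url = os.environ[base_env]  # e.g. https://api.openai.com/v1
    resp = requests.post(
        f"{base_url}/chat/completions",  # OpenAI-compatible endpoint
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```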

Committee Governance Voting: Platform Comparison

The committee voting subprocess is a core differentiator — DMN-driven configuration, multi-instance parallel voting, quorum enforcement, and tiered SLA escalation vs. flat approval primitives.

Capability | SLA on Camunda 8 | ServiceNow
Voting methods | 5 methods (Unanimous, Super-Majority, Majority, Veto, Single Reviewer) — DMN-driven | Binary approve/reject only
Committee configuration | DMN table selects method, quorum, roles, timers by risk tier × phase × contract value (8 rules) | Approval Configurator — conditional routing to groups, flat pool only
Per-group quorum | Native — one vote from each of N governance bodies via multi-instance + locked roster | Not supported — requires custom Business Rule scripting
Weighted voting | Configurable via DMN (veto-capable roles can block) | Not supported — all votes equal weight
Parallel deliberation | Multi-instance user tasks — all members vote simultaneously with shared brief | Group approval — anyone or all, no structured deliberation
Q&A phase | Optional multi-round Q&A sub-process, timer-bounded, DMN-configured per risk tier | No native equivalent — requires Teams/Slack integration
Vote options | 6 options: Approve, Conditional, Reject, Abstain, Recuse, Veto — with structured rationale | 2 options: Approved or Rejected
Conditions reconciliation | Automated conflict detection + governance officer mediation when conditional votes conflict | Not applicable — no conditional vote concept
Remediation loops | Configurable max iterations (DMN), auto-escalation to leadership after max reached | Manual — resubmit record and restart approval chain
SLA escalation timers | 3-tier boundary timers: Reminder → Escalation → Deadline with auto-default ruling | Requires separate SLA records + notification rules + scheduled job
Decision routing | DMN 1.3 with FEEL expressions, FIRST/UNIQUE hit policies, chained decisions | Decision Builder — FIRST-match only, no FEEL, no chaining
Reusable subprocess | BPMN call activity — same voting process called from 2+ governance checkpoints | Subflow — functional but without event subprocess or boundary timer patterns
Audit trail | Per-voter rationale, dissenting opinions, conditions history, remediation feedback | sysapproval_approver records — basic approve/reject with optional comments
Standards compliance | BPMN 2.0 / DMN 1.3 — portable, vendor-neutral, auditable XML | Proprietary Flow Designer — locked to ServiceNow platform

4-8 Weeks Custom Dev to Replicate

ServiceNow would require custom GlideScript, Business Rules, and UI Policies to replicate quorum enforcement, weighted voting, and tiered SLA escalation.

8 DMN Rules vs. ~500 Lines GlideScript

Committee voting configuration lives in a single DMN table — business analysts can modify voting rules without developer involvement.

OCC 2023-17 Audit-Ready

Every vote, rationale, dissenting opinion, and remediation loop is captured as process variables — the regulatory evidence trail OCC 2023-17 and SR 11-7 require.
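
To make the "8 DMN rules" claim concrete, the sketch below shows the shape of the committee-configuration decision: risk tier, phase, and contract value in; voting method, quorum, and SLA out, evaluated with a FIRST hit policy. The thresholds and outputs are assumed examples, not the production rules.

```python
# Illustrative sketch of what one row of the committee-voting DMN table
# encodes. The thresholds and outputs below are assumptions, not the
# production rules; the real table holds 8 such rules.

def select_voting_config(risk_tier: str, phase: str, contract_value: float) -> dict:
    """FIRST-hit evaluation over a small rule table."""
    rules = [
        # (tier,      phase ("*" = any),  value predicate,          output)
        ("major",     "contracting",      lambda v: v >= 1_000_000,
         {"method": "Unanimous", "quorum": "all", "sla_hours": 72, "veto_roles": ["CISO"]}),
        ("elevated",  "evaluation",       lambda v: v >= 250_000,
         {"method": "Super-Majority", "quorum": "2/3", "sla_hours": 48, "veto_roles": []}),
        ("baseline",  "*",                lambda v: True,
         {"method": "Single Reviewer", "quorum": "1", "sla_hours": 24, "veto_roles": []}),
    ]
    for tier, ph, value_ok, output in rules:
        if tier == risk_tier and ph in ("*", phase) and value_ok(contract_value):
            return output
    # Default rule when nothing above matches.
    return {"method": "Majority", "quorum": "1/2", "sla_hours": 48, "veto_roles": []}
```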

Governance & Regulatory Compliance

BPMN models ARE the audit artifacts

Requirement | SLA Approach | ServiceNow Approach | Edge
OCC 2023-17 | 8-phase BPMN maps directly to guidance; DMN risk tiering; OneTrust integration | Manual TPRM module config to map guidance | SLA
SR 11-7 | AI governance lane + DMN tables as versioned audit artifacts | No native framework mapping; custom config | SLA
DORA (EU) | ICT register, resilience testing, incident response as BPMN sub-processes | DORM app (separately purchased, complex setup) | SLA
SOX | Phase transition pattern (completion, quality gate, approval, event) | OOTB control testing and attestation | SN
GDPR/CCPA | Privacy assessment as parallel eval track + DMN | Privacy Management module (add-on) | Neutral
Audit trail | Full token-level process history across all 8+ systems | Audit logs within ServiceNow only | SLA
Regulatory annotations | Embedded in BPMN (text annotations + camunda:properties) | Manual tagging on records | SLA
Compliance evidence graph | Neo4j knowledge graph (7 node types, 8 relationships) + PostgreSQL evidence schema (5 tables, 2 views) — cross-system compliance traceability | GRC evidence within ServiceNow only | SLA
AI-powered GRC | AI service tasks for risk assessment, document analysis — model-agnostic | Now Assist for IRM: AI issue resolution, smart assessment templates (Zurich) — within-ServiceNow only | SLA
Key regulatory insight: Both OCC 2023-17 and SR 11-7 require that the governance processes themselves be documented, version-controlled, and auditable. A BPMN model satisfies this structurally — and because it orchestrates across all 8+ systems, it provides the single source of truth that no individual system can offer alone.

Effort to Production

What’s done, what remains, and what ServiceNow would need to replicate

SLA on Camunda 8 (Current State)

Core process, forms, dashboard, and Jira integration already in production.

Process Models
Done
87 Forms
Done
Showcase Apps
Done
Jira Sync
Done
24/24 E2E
Done
SLA Timers
Done
OneTrust
Discovery
Ariba
Planned
Oracle Bridge
Planned
AppFox
Planned
iManage/Box
Planned
Hardening
Planned

What ServiceNow Would Need to Replicate

The scope of work required to build equivalent governance orchestration in ServiceNow.

Workstream | Challenge
GRC + TPRM licensing | Contract negotiation, module selection — TPRM is redundant since customer already uses OneTrust
Custom workflow | Recreating 6-phase process without BPMN; no swim lanes; no token semantics; custom state machine in Flow Designer
Decision logic | 14 DMN tables into proprietary Decision Builder — loss of FEEL expressions, hit policies, and DRDs
87 forms + dashboard | Rebuild in ServiceNow UI Builder + Performance Analytics
8+ system integrations | OneTrust, Jira, Ariba, Oracle, AppFox, iManage, Box — most require custom IntegrationHub spokes or REST
Regulatory mapping | Manual control-to-regulation mapping (no BPMN annotations, no embedded compliance properties)
Cross-system SLA enforcement | No equivalent to boundary timer events spanning multiple external systems
UAT & validation | Financial services testing requirements across all integrated systems

Honest Assessment: Where Each Platform Wins

Acknowledging ServiceNow strengths while highlighting SLA’s decisive advantages

Where ServiceNow Wins

Mature in areas adjacent to — but not core to — the orchestration problem.

1

Performance Analytics

More mature reporting and dashboards than custom-built alternatives

2

Mobile App

Native mobile experience vs. responsive web

3

ITSM Integration

For incidents, changes, and CIs post-deployment — ServiceNow IS the system of record

4

Organizational Familiarity

"We already use ServiceNow" reduces adoption friction for IT Operations teams

5

Process Mining & Task Mining

Zurich release adds descriptive visibility into work patterns — note: descriptive (what happened) vs. SLA’s prescriptive orchestration (what should happen)

Note: Several traditional ServiceNow advantages are neutralized by existing tooling: OOTB TPRM (OneTrust), Vendor portal (Ariba), Continuous monitoring (OneTrust), Contract management (iManage+Box).

Where SLA Wins (15 Decisive Advantages)

The fundamental question: who orchestrates the governance lifecycle spanning all 8+ systems?

1

Cross-System Orchestration

Single governed process across Jira, OneTrust, Ariba, Oracle, Confluence, iManage, Box, ServiceNow

2

Regulatory Auditability

BPMN model IS the audit artifact — regulators inspect the process directly, across all systems

3

DMN Decision Transparency

21 versioned, portable, FEEL-based decision tables vs. proprietary rules

4

Ariba-Oracle Automation

Bridges the current manual hand-off between procurement and finance — net-new value

5

Process Complexity

9 parallel eval tracks, multi-pool vendor flows, event sub-processes — constructs with no equivalent in ServiceNow's process designers

6

Already in Production

Process models, forms, dashboard, Jira sync, E2E tests, SLA timers — running on Camunda 8 Cloud today

7

Incremental Investment

Camunda 8 platform is already running; remaining work is integration bridges, not platform buildout

8

No Vendor Lock-in

BPMN/DMN are ISO standards — process models survive platform changes

9

Long-Running State

Zeebe handles 30–180 day onboarding lifecycles natively

10

Deal-Killer Pre-Screening

DMN-driven early rejection has no ServiceNow equivalent

11

Single Pane of Glass

One dashboard showing work status across Jira, OneTrust, Ariba, iManage, and Oracle

12

AI Model Freedom

Any LLM provider is just another service task — no platform markup or walled garden

13

AI + Deterministic in One Process

Single BPMN process invokes DMN tables and LLMs in consecutive steps, with full traceability

14

Contract AI as Orchestration

Highest-ROI AI use case requires orchestrating iManage + LLM + legal review + Box

15

Self-Service Pre-Screening

Vendor Questionnaire 9-step wizard with deal-killer alerts, multi-vendor parallel evaluation, vendor portal with secure response interface

Strategic Recommendation

Don’t position this as SLA vs. ServiceNow — or SLA vs. any single system. Position Software Lifecycle Automation as the orchestration layer that connects the customer’s existing investments — and as the AI-ready foundation that enables intelligent automation without platform lock-in.

The customer has made deliberate, funded decisions about their technology landscape. None of these are changing. The question is: who orchestrates the governance lifecycle that spans all of them?

SLA (Camunda 8)
Governance process logic, DMN routing, SLA timers, regulatory compliance
OneTrust
Vendor risk assessment, TPRM questionnaires
Jira
Technical SME task management (bi-directional, built)
Ariba
Sourcing, procurement, NDAs, RFPs, vendor contracts
Oracle
Financial analysis, budget approvals (Camunda bridges gap)
AppFox / Confluence
EA approvals, technical content
iManage
Contract drafting, redlining
Box
Executed contract storage (SOR)
ServiceNow
ITSM — incidents, changes, CIs (via connector, GA — shipped November 2025)

Knowledge-Driven Governance: The Flywheel

Every governance decision is captured, compared to prior decisions, and used to improve future decisions. This feedback loop does not exist in any GRC platform today.

Knowledge (Graph + Analytics) → Decisions (DMN + Governance) → Learning (Override + Outcome) → Execution (BPMN + Camunda 8) → back to Knowledge

Knowledge drives decisions

Cycle time reports, vendor concentration, and SLA trends inform DMN routing rules and committee review scope

Decisions generate learning

Every DMN override is captured — automated output vs. human output, with rationale. Override patterns surface rule improvements

Learning improves future decisions

DMN override rate decreases as rules are tuned. Target: 50% reduction within 200 instances

The 10x claim: Every governance decision your organization has ever made is queryable, comparable, and improvable. That has never been true before.
0
New Services Added
6
Analytics APIs
v2.0
Ontology (Vendor + DMN)
24
Automated Tests

Analytics and Decision Intelligence

Six analytics APIs deliver immediate demo value with zero ML, zero embeddings — structured queries on existing governance execution data.

API | Question it Answers | Data Source | Compliance Value
Cycle Time Report | Why did this onboarding take 6 months? | PostgreSQL task_executions | OCC 2023-17 timeliness evidence
Concentration Risk | How dependent are we on a single vendor? | Neo4j Vendor graph | OCC 2023-17 concentration limits
SLA Breach Trends | Which lanes are chronically bottlenecked? | PostgreSQL sla_events | DORA operational resilience
DMN Override Capture | Where do humans disagree with automation? | PostgreSQL dmn_overrides | SR 11-7 model validation
Audit Trail Export | Can you show me the full evidence chain? | Template-driven HTML | SOX / OCC examination-ready
Board Health Report | What is our composite governance score? | Aggregate across all tables | Board risk committee reporting
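
As a sketch of how thin these APIs are, the Cycle Time Report reduces to a grouped query over the task_executions table named above. The column names (process_instance_id, phase, started_at, completed_at) are assumptions for illustration, not the production schema.

```python
# Sketch of the Cycle Time Report: phase-level durations for one onboarding
# instance from the task_executions table. Column names are assumptions.
import psycopg2

CYCLE_TIME_SQL = """
    SELECT phase,
           MIN(started_at)   AS phase_start,
           MAX(completed_at) AS phase_end,
           EXTRACT(EPOCH FROM (MAX(completed_at) - MIN(started_at))) / 86400.0 AS days
      FROM task_executions
     WHERE process_instance_id = %s
  GROUP BY phase
  ORDER BY phase_start;
"""


def cycle_time_report(dsn: str, process_instance_id: str) -> list[dict]:
    """Answer 'why did this onboarding take 6 months?' phase by phase."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(CYCLE_TIME_SQL, (process_instance_id,))
        return [
            {"phase": phase, "start": start, "end": end, "days": round(days, 1)}
            for phase, start, end, days in cur.fetchall()
        ]
```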

ServiceNow GRC Cannot Do This

ServiceNow captures approvals but cannot correlate decisions across cycles, track DMN override drift, or generate comparative governance analytics.

Audit Trail as Byproduct

The audit trail is not assembled manually — it is a deterministic projection of process execution data. Template-driven, same output every time.

Claude as Copilot (MCP)

Claude with MCP access to PostgreSQL and Neo4j — ask any question about governance history. Retrieval-only: it cites stored evidence or declines; it never generates advisory content.

Interactive Governance Topic Explorer

Each governance topic maps to its process steps, RACI assignments, and regulatory references.

Topic Journeys: Intake, Prioritization, Funding, Sourcing

Intake

Prioritization

Funding

Sourcing

Topic Journeys: Cyber, EA, Compliance, AI Gov, Privacy

Cyber

Enterprise Architecture

Compliance

AI Governance

Privacy

Topic Journeys: Commercial Counsel and TPRM

Commercial Counsel

TPRM (Third-Party Risk Management)

Knowledge Base Architecture

A dual-database knowledge platform that captures every governance decision, learns from every negotiation, and makes every future decision faster and better.

Capture (Camunda 8): process instances + tasks, DMN decisions (routing + overrides), SLA events (warnings + breaches), contract templates (101 clauses, 4 types), vendor contracts (redlines + vendor paper), regulations (11 frameworks)

Store: Neo4j knowledge graph for relationships + traceability (node types include Process, Task, Clause, Vendor, Regulation, Outcome, Analysis; 13 node types, 16 relationships); pgvector embeddings for semantic similarity + KPIs (clause_embeddings and vendor_embeddings, 1536-dim vectors, cosine similarity; 4 tables, 3 aggregation views, IVFFlat index)

Analyze: NLP pipeline with 6 stages: 1 Document Ingestion, 2 Clause Extraction, 3 Embedding (OpenAI), 4 Semantic Matching, 5 Delta Analysis, 6 Classify and Route. Similarity thresholds: Auto-Match >= 0.92, Potential 0.78-0.92, Novel < 0.78

Leverage: Attorney Workbench (pre-analyzed, pre-classified contract issues; auto-approve | suggest fallback | escalate); Process Analytics (cycle time, SLA trends, concentration risk; 6 analytics APIs, audit trail export); Continuous Learning (negotiation outcomes refine fallback rankings; DMN override capture, pattern detection); Regulatory Traceability (every clause linked to OCC, GDPR, DORA, SOX; examination-ready evidence chains)

Feedback loop: outcomes refine the knowledge base.
13
Graph Node Types
101
Standard Clauses
120
Fallback Options
1536d
Embedding Vectors
11
Regulatory Frameworks
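
The classify-and-route stage of the pipeline follows directly from the similarity thresholds above. The sketch below applies them with a plain cosine comparison; in the platform the nearest-neighbor search runs against the pgvector IVFFlat index rather than in Python, and the input vectors come from the embedding stage.

```python
# Sketch of the "classify and route" stage using the thresholds above.
# Embeddings are assumed to be 1536-dim vectors computed earlier in the pipeline.
from math import sqrt

AUTO_MATCH = 0.92   # >= 0.92: auto-match to a standard clause
POTENTIAL = 0.78    # 0.78-0.92: suggest fallback, attorney confirms
                    # < 0.78: novel clause, escalate to the attorney workbench


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))


def route_clause(clause_vec: list[float],
                 standard_clauses: dict[str, list[float]]) -> dict:
    """Compare a vendor clause embedding against the standard clause library and route it."""
    best_id, best_sim = max(
        ((cid, cosine(clause_vec, vec)) for cid, vec in standard_clauses.items()),
        key=lambda pair: pair[1],
    )
    if best_sim >= AUTO_MATCH:
        decision = "auto-match"
    elif best_sim >= POTENTIAL:
        decision = "suggest-fallback"
    else:
        decision = "novel-escalate"
    return {"matched_clause": best_id, "similarity": round(best_sim, 3), "route": decision}
```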

SLA Governance Ontology

Neo4j-powered knowledge graph connecting processes, contracts, regulations, and decisions.

Node types: Process, Contract, Regulation, Vendor, Decision, Lane, Evidence, SLA

Persona-Based Intake Wizard (SP0)

Intelligent pre-process that identifies the requester, adapts questions to their context, classifies AI involvement, and sets governance journey expectations. Replaces the flat SP1 intake form.

8 Requester Archetypes

Persona | Typical Pathway | Key Signal
The Visionary | Unsure | Idea only, no vendor
The Informed Buyer | Buy | Shortlisted vendors
The Mandated Upgrader | Fast-Track | Regulatory deadline
The Builder | Build | Technical specs, team
The Renewal/Replacer | Buy | Contract expiring
The Innovation Scout | Hybrid | AI/ML use case, PoC
The Executive Sponsor | Any (delegates) | Strategic mandate
The Compliance Responder | Buy | Audit finding

7-Step Adaptive Flow

0 Identity & Context
1 Describe Need + Graph-Powered Solution Suggestions
2 AI Probe (Core AI / AI-Embedded / AI-Adjacent / No AI)
3 Persona Classification (auto-detected, user-confirmed)
4 Adaptive Deep Questions (per-archetype)
5 Journey Explainer (phases, timelines, AI gates, tasks)
6 Review & Launch Governance Process
Shift-Left Intelligence: Every wizard interaction writes to the Neo4j knowledge graph. The graph powers solution suggestions, AI use case similarity matching, and continuous learning.
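
A minimal sketch of that shift-left write, assuming the Neo4j Python driver and an illustrative set of node labels and relationship types (the production ontology differs): one wizard session is merged into the graph so later requests can benefit from it.

```python
# Sketch of recording one SP0 wizard interaction in Neo4j. Node labels,
# relationship types, and properties are illustrative assumptions.
from neo4j import GraphDatabase

WRITE_INTERACTION = """
MERGE (r:Requester {email: $email})
MERGE (req:Request {id: $request_id})
  SET req.persona = $persona, req.ai_classification = $ai_class
MERGE (r)-[:SUBMITTED]->(req)
WITH req
UNWIND $suggested_solutions AS sol
  MERGE (s:Solution {name: sol.name})
  MERGE (req)-[rel:RECEIVED_SUGGESTION]->(s)
    SET rel.accepted = sol.accepted
"""


def record_wizard_interaction(uri: str, auth: tuple, payload: dict) -> None:
    """Persist one wizard session so solution suggestions improve over time."""
    driver = GraphDatabase.driver(uri, auth=auth)
    try:
        with driver.session() as session:
            session.run(WRITE_INTERACTION, **payload)
    finally:
        driver.close()


# Example payload shape (hypothetical):
# record_wizard_interaction("neo4j://localhost:7687", ("neo4j", "password"), {
#     "email": "requester@example.com", "request_id": "REQ-1042",
#     "persona": "The Informed Buyer", "ai_class": "AI-Embedded",
#     "suggested_solutions": [{"name": "ExistingCRM", "accepted": False}],
# })
```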

AI Classification Taxonomy (4 Dimensions)

System Type
Generative AI, Predictive ML, NLP, Computer Vision, RPA+AI, Recommender, Agentic AI
Deployment Model
Internally Built, Vendor-Embedded, API-Based, Open Source + Fine-tuned
Decision Autonomy
Fully Automated, Human-in-Loop, Advisory Only, Analytics/Reporting
Regulatory Triggers
SR 11-7, EU AI Act, NIST AI RMF, CRI FS AI RMF, ISO 42001

Continuous Learning & Feedback System

Closed-loop feedback at every user touchpoint, routed through a BPMN process with AI sentiment analysis and DMN priority routing. The system learns from every interaction.

Feedback Collection Points

Touchpoint | Type | Mechanism
Wizard Step | Micro-rating | Thumbs up/down per step
Solution Suggestion | Decision | Accept/reject with reason
Journey Explainer | Expectation | Timeline reasonable? Y/N
Task Completion | Experience | 5-star + text
Stage Gate | Friction | Was the ask reasonable?
Process End | NPS | 0-10 + open text
Any Page | General | Floating FAB on all 8 pages

Feedback BPMN Workflow

START Feedback Received
Log to Knowledge Graph (Neo4j)
AI Sentiment Analysis (positive/neutral/negative/critical)
DMN Priority Routing (14 rules, 6 inputs)
Auto-Close? gateway
Yes → Auto-acknowledge + notify
No → Priority Level? gateway
CRITICAL (1hr SLA) → governance-lane
HIGH (4hr SLA) → governance/business-lane
MEDIUM (48hr SLA) → business-lane
Send Resolution Notification → Update Status
END Feedback Resolved
SOX Compliance: Task-linked feedback (Rule 11) always routes to human review — auto-close is suppressed to preserve the audit trail.
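
A compressed sketch of the priority-routing decision described above. The real DMN table has 14 rules over 6 inputs; only the behaviours stated here (the three SLA tiers, auto-close, and the Rule 11 suppression for task-linked feedback) are reproduced, and the exact conditions are assumptions.

```python
# Sketch of feedback priority routing. Conditions are illustrative; the
# production DMN table evaluates 14 rules over 6 inputs.

def route_feedback(sentiment: str, nps: int | None = None,
                   task_linked: bool = False) -> dict:
    # Rule 11 analogue: task-linked feedback always goes to a human reviewer,
    # so auto-close is suppressed and the audit trail is preserved.
    if task_linked:
        return {"auto_close": False, "priority": "MEDIUM", "sla_hours": 48,
                "lane": "business-lane"}
    if sentiment == "critical":
        return {"auto_close": False, "priority": "CRITICAL", "sla_hours": 1,
                "lane": "governance-lane"}
    if sentiment == "negative" or (nps is not None and nps <= 6):
        return {"auto_close": False, "priority": "HIGH", "sla_hours": 4,
                "lane": "governance/business-lane"}
    if sentiment == "neutral":
        return {"auto_close": False, "priority": "MEDIUM", "sla_hours": 48,
                "lane": "business-lane"}
    # Positive, non-task-linked feedback: acknowledge and close automatically.
    return {"auto_close": True, "priority": None, "sla_hours": None, "lane": None}
```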

Graph-Powered Learning Loops

Solution Match Intelligence
Track accept/reject rates per technology. Flag repeatedly-rejected solutions for portfolio review.
Persona Classifier Tuning
Compare wizard-classified vs. user-corrected archetypes. Adjust keyword weights from real data.
Stage Gate Friction Detection
Aggregate feedback scores by phase. Surface low-scoring gates for process improvement.