AI Agent Fleet Expansion & Governance
Executive Summary
RRI has an extraordinary AI capability that is ungoverned and fragmented. Three independent AI programs are running in parallel: Jay Lane’s 30+ tools across 11 departments, Justin’s 8-agent bot fleet, and Daniel’s agentic AI program. Combined, these represent 38+ agents and tools with no centralized registry, no unified risk assessment, and no fleet-level visibility.
S1 takes the ROI framework established in U7 and scales it into a full governance infrastructure: a centralized agent registry (replacing the current SharePoint spreadsheet), risk-tiered approval workflows, a fleet monitoring dashboard, and a deployment pipeline with approval gates. The goal is to expand from 38+ agents to 40+ governed agents while capturing $5M+ in AI-driven savings and revenue.
The critical decision: include Daniel’s parallel AI program from Day 1. Three ungoverned AI tracks is a failure mode. Governance framed as “visibility” not “control” gets buy-in; governance framed as bureaucracy gets resistance.
What Needs to Happen
- Catalogue all 38+ agents in a centralized registry — Migrate from the SharePoint spreadsheet to a custom registry application. Each agent is documented with: name, owner, department, deployment platform, data access level, risk tier, ROI score, uptime status.
- Assign a risk tier to each agent — Tier 1/Low (auto-approve, internal data only), Tier 2/Medium (auto-approve, external-facing), Tier 3/High (committee approval; PII/financial data), Tier 4/Unacceptable (blocked: autonomous financial decisions, unsupervised customer interactions involving PII). Weeks 1-2.
- Apply the U7 ROI scoring framework to the fleet — Three-dimension scoring (Cost Efficiency 35% + Time Efficiency 35% + Growth Impact 30%) applied to every agent. Portfolio-level ROI becomes visible for the first time. Week 2.
- Establish monthly AI Governance Committee meetings — Jay + Justin + Spork + Lior. Review new agent proposals, audit existing agents, evaluate fleet ROI. Selene generates the pre-read and a monthly portfolio report for Yogesh. Weeks 2-3.
- Include Daniel’s parallel AI program from Day 1 — Lior meets Daniel, evaluates what he’s built, maps his agents into the registry. Frame as collaboration, not control. Three ungoverned tracks = failure mode. Weeks 2-3.
- Deploy fleet monitoring dashboard — Real-time visibility: which agents are running, error rates, cost per agent, ROI tracking. Built on Grafana (existing infrastructure from D6). Weeks 3-4.
- Create agent deployment pipeline with approval gates — Tier 1-2 agents auto-deploy; Tier 3 requires committee approval plus security review; Tier 4 is blocked. The pipeline includes rollback capability. Weeks 3-4.
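The registry fields, tier rules, and ROI weights described above can be sketched as a minimal data model. This is an illustrative sketch only — `AgentRecord`, `classify_tier`, and `roi_score` are hypothetical names, not an existing RRI system, and the assumption that sub-scores are normalized to 0-100 is mine, not the document's.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    LOW = 1           # auto-approve, internal data only
    MEDIUM = 2        # auto-approve, external-facing
    HIGH = 3          # committee approval: PII / financial data
    UNACCEPTABLE = 4  # blocked outright

@dataclass
class AgentRecord:
    """One registry row, following the field list in the catalogue bullet."""
    name: str
    owner: str
    department: str
    platform: str
    data_access: str               # assumed values: "internal" | "external" | "pii" | "financial"
    autonomous_financial: bool = False  # hypothetical flag for the Tier 4 rule
    roi_score: float = 0.0
    uptime_ok: bool = True

def classify_tier(rec: AgentRecord) -> Tier:
    """Assign a risk tier from the agent's data access profile."""
    if rec.autonomous_financial:
        return Tier.UNACCEPTABLE
    if rec.data_access in ("pii", "financial"):
        return Tier.HIGH
    if rec.data_access == "external":
        return Tier.MEDIUM
    return Tier.LOW

def roi_score(cost_eff: float, time_eff: float, growth: float) -> float:
    """U7 three-dimension ROI: 35% cost + 35% time + 30% growth.

    Sub-scores are assumed to be on a common 0-100 scale.
    """
    return 0.35 * cost_eff + 0.35 * time_eff + 0.30 * growth
```

For example, an internal-data-only agent such as Selene would classify as `Tier.LOW`, while anything touching PII or financial data lands in `Tier.HIGH` and routes to the committee.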
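The tier-based approval gates can likewise be expressed as a small dispatch step. Again a sketch under stated assumptions: the function name and return strings are illustrative, and rollback is assumed to be a separate step attached to every deploy path rather than shown here.

```python
def deployment_gate(tier: int) -> str:
    """Route an agent through the pipeline by risk tier (1=Low .. 4=Unacceptable).

    Tiers 1-2 auto-deploy; Tier 3 is held for committee approval
    plus security review; Tier 4 is blocked outright.
    """
    if tier in (1, 2):
        return "auto-deploy"
    if tier == 3:
        return "hold: committee approval + security review"
    if tier == 4:
        return "blocked"
    raise ValueError(f"unknown risk tier: {tier}")
```

Keeping the gate a pure function of the tier makes the policy auditable: the committee can review one mapping rather than per-agent exceptions.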
Claude Code acceleration: The registry application, risk tier classification logic, and Grafana dashboard configuration are all highly automatable. Claude Code saves ~1 week on tooling, bringing 4+ weeks down to 3 weeks.
Completion Criteria
- All 38+ agents catalogued in centralized registry with risk tiers assigned
- ROI scoring applied to every agent in the fleet
- AI Governance Committee meets monthly with Selene-generated reports
- Daniel’s AI program fully mapped into the registry
- Fleet monitoring dashboard live on Grafana
- Deployment pipeline with tier-based approval gates operational
- Fleet expanded to 40+ governed agents
- Portfolio-level AI ROI report delivered to Yogesh monthly
Initiative Attributes
Current AI Programs (Ungoverned)
| Program | Owner | Scope | Governance |
|---|---|---|---|
| Jay’s AI Tools | Jay Lane | 30+ tools, 11 departments, $670K+ proven impact | Yogesh throttle (4/month), no framework |
| Justin’s Bot Fleet | Justin Kahn | 8 agents (Selene, Primus, Inigo, etc.) | Self-governed, no risk assessment |
| Daniel’s AI Program | Daniel | Unknown scope, “true agentic AI,” security-first | Completely ungoverned, parallel track |
Related Risks
| ID | Risk | Severity | Probability | Mitigation |
|---|---|---|---|---|
| RF6 | Daniel’s parallel AI program creates ungoverned third track | MEDIUM | HIGH | S1 governance framework includes Daniel’s program from Day 1. Lior meets Daniel ASAP. Frame governance as “visibility” not “control.” Three ungoverned AI programs = failure mode. |