The assessment is for teams that already allow AI-assisted workflows to reach PRs, CI/CD, tools, credentials, or cloud paths, and that need a buyer-readable map of what can act, what authority it uses, which approvals apply, and what evidence is missing.
What you get
Action-control graph
A connected map of owner, agent or workflow, repo or PR, credential, action, target, approval rule, policy decision, and evidence.
High-risk workflow register
AI-assisted paths that can write, execute, deploy, use credentials, publish packages, or affect release workflows.
Credential and authority summary
Standing versus scoped access, credential source, owner, target system, and revocation path where visible.
Approval and evidence gaps
Where approval rules, evidence retention, action attribution, or validation records are missing or unclear.
Agent Action BOM
An executive-readable artifact for security, platform, and engineering leadership.
Evidence pack
A buyer-readable record of task, actor, owner, credential source, approval decision, validation, target, outcome, and remaining gaps.
Allow / approve / block recommendations
A practical first policy boundary focused on action blast radius rather than every prompt.
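A first policy boundary of this kind can be sketched as a small decision function over the same fields the evidence pack records. The schema, field names, and thresholds below are illustrative assumptions for one plausible starting rule, not the assessment's actual rule set.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    APPROVE = "approve"  # requires a human approval before execution
    BLOCK = "block"

@dataclass
class AgentAction:
    # Hypothetical record; fields mirror the evidence-pack items above.
    task: str
    actor: str                      # agent or workflow identity
    owner: str                      # accountable human owner
    credential_source: str
    target: str                     # e.g. "repo", "ci", "package-registry", "cloud"
    can_write: bool
    can_execute: bool
    uses_standing_credential: bool

def first_policy_boundary(action: AgentAction) -> Decision:
    """Keyed to action blast radius, not to individual prompts."""
    # Block standing credentials aimed at high-blast-radius targets.
    if action.uses_standing_credential and action.target in {"cloud", "package-registry"}:
        return Decision.BLOCK
    # Anything that writes or executes needs explicit human approval.
    if action.can_write or action.can_execute:
        return Decision.APPROVE
    # Read-only actions on scoped credentials can proceed.
    return Decision.ALLOW
```

For example, a read-only PR summarizer running on a scoped token would fall on the ALLOW side of this sketch, while the same agent holding a standing cloud credential would be blocked regardless of the prompt that triggered it.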
When this assessment is a fit
- Your engineering teams are using Cursor, Claude Code, Codex, GitHub Copilot, Devin, Factory, MCP tools, or internal agents.
- You want AI coding adoption to move faster without losing review discipline or evidence.
- You have CI/CD, release, cloud, package, or credential-bearing workflows that security wants to understand.
- You need an evidence pack for security leadership, customer review, SOC 2, ISO 27001, or incident readiness.
- You want to start with one team or workflow before a broader platform rollout.
When it is probably too early
- AI coding tools are not yet approved or used in engineering workflows.
- The team only wants a generic AI policy document.
- The immediate concern is model evaluation, prompt filtering, or chatbot data leakage rather than software delivery actions.
- No one owns AppSec, platform security, DevSecOps, developer productivity, or secure SDLC.
Timeline and buyer lift
The standard assessment produces results in 5 business days from kickoff. Buyer time is designed to stay low: one kickoff session, one readout session, and async clarification if needed.
- Kickoff: confirm scope, repos or workflows, privacy requirements, and likely action paths.
- Scan and review: run local/private analysis, then manually review the action paths for buyer readability.
- Readout: review the action-control graph, Agent Action BOM, evidence pack, high-risk workflow register, and recommended first controls.
Source privacy
The first assessment is built for local/private scanning. Raw source is not retained unless explicitly agreed. The output is a redacted action-control report that security and platform teams can share internally.
Start with a small surface.
Pick one team and two to three repos or workflows where AI-assisted software delivery is already close to PRs, CI/CD, credentials, or release paths.
Request assessment