Agentic Coding Support: Bringing AI Agent Governance Into Your Supply Chain
Miguel Martinez

TL;DR: We’re adding support to automatically gather AI coding agent configuration and session data, paired with new policies, visualization, and control capabilities.
There’s more software being written than ever. More people writing it, and more tools writing it for them. In the span of a week we’ve seen the Trivy and LiteLLM supply chain attacks hit the ecosystem. Supply chain governance has never mattered more.
Developers are using agents like Claude Code and Cursor to write most of their code. But right now, you have zero visibility into what those agents are doing, how they’re configured, or what decisions they made.
Chainloop was built on a simple belief: automation is the answer to governance. That’s why we automated collecting evidence from any security tool and any CI/CD system. AI coding agents made that problem exponentially worse, so we’re applying the same approach here.
Today we’re shipping two new evidence types, AI Agent Configuration and AI Agent Session, along with built-in policies and direct integrations with coding agents.
The Blind Spot
Until now, Chainloop collected evidence at the CI/CD level. Container images, SBOMs, vulnerability reports, code reviews, signatures. The pipeline was the boundary. Everything that happened before it, the actual writing of code, was a black box.
When an AI agent writes code, its behavior is shaped by two things: its configuration and its interaction with the operator. The instructions, how the agent interpreted them, which tools it called, what files it touched. That’s a whole category of evidence that didn’t exist before.
Consider two PRs that produce identical, passing code:
- PR A: Written by an agent with no boundaries defined, unrestricted tool access, no plan mode, no verification steps.
- PR B: Written by an agent scoped to a specific directory, with only approved tools, that planned before implementing and ran all verification checks.
Same output. Materially different risk. You can’t tell them apart. The more autonomous these systems get, the more the agent’s setup and behavior matter. Instructions, permissions, and session traces are part of the source code now.
The Agent’s Rules
The first new evidence type is CHAINLOOP_AI_AGENT_CONFIG. When you run an attestation, the CLI automatically detects agent configuration files in your repository (instruction files, rules, skills, MCP server configs, custom commands, settings) and bundles them into a single, tamper-resistant piece of evidence. No manual steps.
You get which agent is configured, a cryptographic hash of the overall configuration, git context, and every config file linked to the rest of your SDLC metadata.
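Chainloop computes that configuration digest for you, but the underlying idea is easy to sketch. Here is an illustrative Python version, which assumes nothing about Chainloop’s actual scheme: a deterministic SHA-256 over sorted file paths and contents, so the digest changes whenever any config file changes and stays stable regardless of discovery order.

```python
import hashlib
from pathlib import Path

def config_digest(paths: list[str]) -> str:
    """Deterministic digest over a set of agent config files.

    Files are sorted by path so the result does not depend on the
    order they were discovered in; each entry hashes both the path
    and the file contents, separated by a NUL byte.
    """
    h = hashlib.sha256()
    for p in sorted(paths):
        data = Path(p).read_bytes()
        h.update(p.encode())
        h.update(b"\x00")
        h.update(hashlib.sha256(data).digest())
    return h.hexdigest()
```

Sorting by path is the important part: without it, two identical configurations enumerated in different orders would produce different digests and look like a change.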
As a concrete example, our own repository’s configuration contains 12 files: instruction files, skills for autonomous task picking, TDD workflows, design doc generation, and vulnerability remediation. All captured automatically. All browsable. All policy-enforceable.
The Agent’s Trace
The configuration tells you what the agent could do. The second piece of evidence, CHAINLOOP_AI_CODING_SESSION, tells you what it actually did.
It captures the full session transcript, a timeline of every interaction between the user, the agent, and the tools invoked, plus structured metadata:
- What happened? Session duration, conversation length, message counts. Which tools the agent used and how often. Subagents spawned, their types, and token consumption.
- What did it cost? Token usage broken down by model. Estimated cost in USD. Which models were used and which was primary.
- What changed? Git commits produced during the session. Files created, modified, or deleted with line-level diff stats. Branch and repository context.
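The transcript is what makes these numbers auditable rather than self-reported. As a rough sketch of the aggregation involved (the flat event format below is hypothetical, not Chainloop’s actual session schema):

```python
from collections import Counter

def summarize_session(events: list[dict]) -> dict:
    """Aggregate a session transcript into structured metadata.

    `events` is a hypothetical flat event log: each entry has a
    "type" of "user", "assistant", or "tool_call", and tool_call
    entries carry the invoked "tool" name.
    """
    tools = Counter(e["tool"] for e in events if e["type"] == "tool_call")
    return {
        "messages": sum(1 for e in events if e["type"] in ("user", "assistant")),
        "tool_calls": sum(tools.values()),
        "tools_used": dict(tools),
    }
```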
You’re not just tracking what’s in the artifact anymore. You’re tracking how it was made.
Built-in Guardrails
Evidence without enforcement is just logging. Both new evidence types plug directly into Chainloop’s policy engine, so you can enforce rules on them like anything else.
Every team adopting AI coding agents runs into the same problems. Secrets leak into agent context because nobody excluded .env or .pem files. An agent runs destructive commands because there were no guardrails on what it could execute. Instructions are vague or generic, so the agent guesses at architecture, code style, and testing conventions. MCP servers get added with no review. Subagents get configured with more permissions than the parent.
These aren’t hypothetical. They’re what happens when teams move fast and governance hasn’t caught up. Chainloop ships with built-in policies that check for exactly these things:
- No secrets in config. Flags hardcoded API keys, tokens, and credentials in instruction files and MCP settings.
- Approved agents and MCP servers only. Enforces allowlists so teams can’t introduce unapproved tools.
- Instruction quality. Checks that instructions are substantive, include build/test/lint commands, reference the actual project architecture, and document gotchas the agent would otherwise get wrong.
- Behavioral boundaries. Verifies that explicit restrictions exist defining what the agent must not touch.
- Subagent safety. No privilege escalation, no extra MCP servers beyond the parent config.
- Session scope. Validates that file modifications stay within the project root and tool usage stays within the permitted set.
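Chainloop’s built-in policies run in its policy engine, but the shape of the first check is easy to illustrate. Here is a simplistic standalone Python sketch that flags likely hardcoded credentials in config text; the patterns are made-up examples, not the real policy.

```python
import re

# Simplistic example patterns; a real policy would be far more thorough.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                       # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM private key
    re.compile(r"(?i)(?:api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def find_secrets(text: str) -> list[str]:
    """Return the matched snippet for every likely hardcoded credential."""
    return [m.group(0) for pat in SECRET_PATTERNS for m in pat.finditer(text)]
```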
You can write your own policies on top of these, and we’re working on a governance framework that organizes these controls into maturity levels, from basic awareness to fully automated, continuous governance.
Getting Started
This feature is available today in preview. Three steps to get going.
1. Enable agent configuration collection
Add the aiagent collector to your attestation process.
chainloop attestation init --collectors aiagent
See the AI Config Collector guide for details.
2. Configure your repositories for session tracing
Run chainloop trace init in your repository. Once you do, every commit made by your coding agent gets attributed in the git history, and when pushed, a sealed attestation is sent to Chainloop automatically.
chainloop trace init
Here’s what a traced commit looks like:
commit 842cea6ee24c63293d6e38178fcf288859983c6d
Author: Jose I. Paris <[email protected]>
Date:   Wed Mar 25 13:20:31 2026 +0100

    Add exclamation mark back to greeting output

    Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
    Chainloop-Attestation-ID: 5c959903-171b-479b-aed1-5ba84a1e452b
The Chainloop-Attestation-ID is the key part. It links that commit to the full session evidence: the transcript, tool usage, cost, and file changes. Every commit becomes traceable back to the exact agent session that produced it, which you can use to enforce control gates at PR time or to see the full picture in Chainloop’s discovery graph.
3. Set up policies and contracts
Attach the built-in AI policies to your workflow contracts.
apiVersion: chainloop.dev/v1
kind: Contract
metadata:
  name: check-ai-agent
spec:
  policyGroups:
    - ref: ai-config-policies
Once configured, both evidence types flow through the same pipeline as everything else. Collected, signed, evaluated against your policies, visible in your compliance dashboards, and available for your control gates.
Looking Ahead
The way software gets built is changing. Humans and agents are working together, and that collaboration, who prompted what, which tools were invoked, what guardrails were in place, is part of the source code now. The agents are already writing your code. Now you can govern them.
We’re working on moving this feature out of preview, adding support for more coding agents beyond Claude and Cursor, and mapping these policies to AI governance frameworks so they plug into the compliance requirements you already care about.
If you’re interested in trying this out or have feedback, reach out at chainloop.dev.