From Annual Audits to Continuous Assurance: Why We're at OSFF NYC

Miguel Martinez

Meet the Chainloop team at OSFF New York to discuss how you can move from manual audits to automated, continuous assurance and build a trusted foundation for your AI-powered future.

The financial services industry is standing at a pivotal crossroads. On one side, there’s the immense promise of Artificial Intelligence, with a clear mandate to innovate and “crack the inference bottleneck” to gain a competitive edge. On the other, a seismic shift in the regulatory landscape is rendering traditional compliance models obsolete.

The old world of point-in-time compliance—characterized by manual evidence collection and annual audits—is crumbling under the weight of modern CI/CD pipelines. Now, the rapid adoption of AI is accelerating this collapse. How can you prove compliance when development cycles are measured in hours and AI agents are increasingly part of the workflow?

This is the new reality we’re coming to the Open Source in Finance Forum in New York to discuss. The era of the annual audit is over. The future is built on automated, continuous assurance, and it’s the only way to build trust in a world powered by AI.

The Paradigm Shift: From Checklists to Continuous Assurance

Image: past vs. future, manual compliance vs. automated, continuous assurance

For years, compliance has been a reactive, labor-intensive process. Teams would spend weeks, or even months, preparing for audits by manually gathering screenshots, logs, and attestations from dozens of disconnected systems. This model was already inefficient, but now, regulators are signaling its end.

The Federal Financial Institutions Examination Council (FFIEC) is sunsetting its Cybersecurity Assessment Tool (CAT) in favor of more dynamic frameworks like the NIST Cybersecurity Framework (CSF) 2.0. This new version of the NIST CSF introduces a critical GOVERN function that places software supply chain risk management at the forefront.

The message is clear: compliance is no longer a once-a-year event. It must be a continuous, evidence-backed state of being.

AI: The Great Compliance Accelerator—and Agitator

The intense focus on AI at OSFF NYC is no surprise. Financial institutions are racing to leverage AI for everything from market analysis to operational efficiency. Yet, this rush to innovate introduces a new and complex set of risks.

AI doesn’t just speed up development; it multiplies compliance complexity. It raises critical questions that manual processes cannot answer:

  • How do you audit the integrity of an AI model when its training data is constantly evolving?
  • How can you prove to a regulator that an AI-driven deployment followed all security protocols?
  • How do you mitigate new threats like AI data poisoning, where malicious actors corrupt the very foundation of your models?

Attempting to apply the old, manual audit model to an AI-powered software factory is akin to inspecting a jet engine with a magnifying glass. It’s too slow, and it fails to capture the complete system.

Building the Foundation: Automation is the Bedrock of Trust

Diagram showing AI-powered evidence store architecture

To solve this dual challenge, a new foundation is required—one built on automation and verifiable proof. This is precisely why we built Chainloop: to serve as a policy-driven trust layer for the modern software factory.

And because we believe the ultimate way to build trust is through transparency, our core is open source. At a conference dedicated to the power of open collaboration in finance, this isn’t just a feature—it’s our foundational belief. It provides the transparency necessary for true verification and frees institutions from the vendor lock-in that stifles innovation.

Continuous assurance is only possible when you move away from manual processes and embrace automation:

  1. A Central Evidence Store: Instead of chasing down evidence, our platform automatically collects and organizes every artifact, SBOM, scan result, and attestation into a single, tamper-proof graph. This creates one source of truth that provides audit-ready visibility in real time, reducing audit preparation from weeks to hours.

  2. Policy-as-Code Guardrails: We embed compliance and security checks directly into the CI/CD pipeline. Policies—such as “no critical vulnerabilities” or “all open-source licenses must be approved”—are enforced automatically. This provides developers with instant feedback and ensures that compliance is built in, not bolted on, delivering security without the friction.

Trace Every Commit, Trust Every Model

This automated, evidence-based foundation for continuous assurance does more than just solve today’s compliance challenges. It is the essential prerequisite for building trustworthy AI.

You cannot have trustworthy AI without a trustworthy software development lifecycle. The same verifiable data needed to satisfy an auditor is exactly what’s needed to secure an AI model.

This is the principle behind our Trusted AI Gateway. By feeding AI and MLOps workflows with structured, cryptographically signed, and connected data from our Central Evidence Store, we provide the verifiable context needed to ensure the integrity and provenance of AI-generated outcomes. It allows you to trace every model back to its data and every commit forward to its release, creating the end-to-end audit trail that regulators—and your customers—demand.
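To illustrate what “cryptographically signed, and connected” evidence makes possible, here is a small sketch of a signed, hash-linked chain of records (training data → model → release) and an end-to-end verification pass. Everything here is an assumption for illustration: the record layout and helper names are hypothetical, and a real system would use asymmetric signatures and transparency infrastructure (e.g. Sigstore) rather than a shared HMAC key:

```python
# Hypothetical sketch of tamper-evident, linked evidence records.
# A shared HMAC key stands in for real asymmetric signing.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustration only; production uses asymmetric keys

def digest(payload):
    """Canonical SHA-256 digest of a record body."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def attest(payload, parent):
    """Create a signed record linked to its parent digest, forming a chain
    such as training data -> model -> release."""
    record = {"payload": payload, "parent": parent}
    record["digest"] = digest(record)
    record["signature"] = hmac.new(
        SIGNING_KEY, record["digest"].encode(), hashlib.sha256).hexdigest()
    return record

def verify_chain(records):
    """Recompute every digest and signature and check each parent link."""
    parent = None
    for r in records:
        body = {"payload": r["payload"], "parent": r["parent"]}
        if r["parent"] != parent or r["digest"] != digest(body):
            return False
        expected = hmac.new(
            SIGNING_KEY, r["digest"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, r["signature"]):
            return False
        parent = r["digest"]
    return True

data = attest({"step": "training-data", "dataset": "market-q3"}, None)
model = attest({"step": "model", "version": "1.4"}, data["digest"])
release = attest({"step": "release", "tag": "v1.4.0"}, model["digest"])
print(verify_chain([data, model, release]))  # True: the lineage is intact
```

Because each record embeds the digest of its parent, changing any upstream artifact (say, the training dataset) invalidates every downstream record, which is exactly the property that lets an auditor, or a regulator, trust the traced lineage from commit to model to release.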

EU Cyber Resilience Act compliance framework diagram

This end-to-end audit trail is not just a best practice; it’s rapidly becoming a regulatory necessity. Emerging frameworks like the EU’s Cyber Resilience Act (CRA) and the Digital Operational Resilience Act (DORA) are setting global standards for software and AI supply chain security. An automated, evidence-based platform is the only viable way to meet these demands at scale, turning compliance from a barrier into a provable characteristic of your AI innovations.

The financial services industry is building its future on open source and AI. Let’s ensure it’s built on a foundation of trust.

Come meet the Chainloop team at OSFF New York. Let’s talk about how you can move from manual audits to automated, continuous assurance and build a trusted foundation for your AI-powered future.