Abstract
Autonomous AI agents are no longer a roadmap item. They are operating in production environments today, executing consequential decisions across hiring, payments, security policy, and operational control — often under borrowed identities, with access privileges inherited from human accounts, and without any mechanism to produce non-repudiable proof of what they did, when, and under what rules.
This paper examines the structural incompatibility between traditional Identity and Access Management (IAM) architectures and the operational characteristics of autonomous AI agents. Drawing on March 2026 survey data from the Cloud Security Alliance and the February 2026 CSA/Strata Identity research, it maps the specific failure modes that arise when agent identity is borrowed rather than native — and when execution records are logs rather than cryptographic proof.
The central argument is architectural, not merely procedural: the governance gap in autonomous AI cannot be closed by extending existing IAM frameworks. It requires a fundamentally different primitive — one that binds cryptographic identity to execution events at the moment they occur, producing immutable, verifiable proof that satisfies regulators, courts, auditors, and incident response teams alike.
I. The Production Reality — Agents Are Already Running
The enterprise conversation about autonomous AI agents shifted decisively in 2025 from feasibility to deployment. The question is no longer whether organizations will run agents in production — they already are. The question is whether the infrastructure assumptions beneath those agents are sound.
The March 2026 Cloud Security Alliance survey, Identity and Access Gaps in the Age of Autonomous AI (n=228 IT and security leaders), provides the most current quantitative snapshot of where enterprise deployments actually stand.
Three of the survey's findings, read in sequence, describe a security posture that should concern every CISO operating at scale. The overwhelming majority of organizations are running agents. More than two-thirds (68%) cannot tell, from their own logs, whether a given action was taken by a human or an agent. And fewer than one in five are highly confident that their IAM infrastructure adequately governs agent identities.
The February 2026 CSA/Strata Identity research adds further texture: only 28% of organizations report the ability to reliably trace agent actions across environments. Traceability is not an aspirational capability — it is the minimum threshold for meaningful incident response, regulatory compliance, and forensic defensibility. More than seven in ten enterprises currently operating autonomous AI agents in production cannot meet that threshold.
II. The Identity Architecture Problem — Why IAM Was Not Built for Agents
2.1 The Human-Centric Identity Model
Contemporary IAM frameworks — whether built on LDAP, OAuth 2.0, SAML, or zero-trust network access (ZTNA) architectures — share a common set of design assumptions: the identity subject is a human being, or a static, predictable workload acting on behalf of one. Autonomous AI agents break every one of these assumptions simultaneously. An agent's identity is rarely native — it is almost universally borrowed from an existing account: a service principal, a shared workload identity, or, in the cases that should alarm every security architect, a human user account.
The problem is not that organizations are managing agents badly within their existing IAM frameworks. The problem is that the IAM framework was designed for a class of principal that agents are not — and no amount of policy refinement within that framework resolves the structural mismatch.
2.2 The Borrowed Identity Attack Surface
| Failure Mode | Security and Compliance Implication |
|---|---|
| **Attribution loss:** Log entries become ambiguous — "which principal performed this action?" cannot be answered with confidence. | Incident response timelines extend dramatically. Root cause isolation becomes forensic archaeology. |
| **Blast radius amplification:** An agent operating under a privileged service account inherits the full access scope of that account. | A single misbehaving or compromised agent can move laterally across the entire blast radius of the borrowed credential. |
| **Least privilege violation:** Agents routinely receive more access than necessary — because access is provisioned for the identity, not the task. | Dynamic, task-scoped access management does not exist in most current IAM implementations. |
| **Non-repudiation failure:** When a consequential action is attributed to a shared identity, neither the organization nor any regulator can establish that a specific agent took a specific action under specific rules. | That is a non-repudiation failure by definition — and a liability in every regulatory framework that follows. |
2.3 The Log Fidelity Problem
Even where organizations maintain comprehensive logs, the 68% figure reveals a fidelity problem, not a volume problem. A log entry that records an action against a shared identity does not establish what actually happened. It establishes that something happened under a particular credential at a particular server clock time. Those are not equivalent propositions.
- Identity ambiguity: The log records the credential, not the agent. In environments where multiple agents share a service principal, log entries are categorically unable to establish which agent acted.
- Timestamp unreliability: Server clock timestamps are subject to drift, NTP misconfiguration, and — in adversarial scenarios — manipulation. They do not constitute hardware-attested time.
- Context absence: Logs record actions, not the reasoning, inputs, model state, or governance rules that produced them. Reconstructing the decision context from a log entry is, in most current architectures, impossible.
- Mutability: Standard log infrastructure does not produce tamper-evident records. In a post-incident or litigation context, the provenance of log data is itself challengeable.
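The mutability point can be made concrete with a minimal hash-chained log sketch (standard-library Python; all field names are illustrative, not any particular product's schema). Each record commits to the digest of its predecessor, so editing or reordering any historical entry breaks verification from that point forward — a linkage that conventional log pipelines simply do not carry.

```python
import hashlib
import json

def entry_digest(entry: dict, prev_digest: str) -> str:
    """Digest commits to the entry's content AND its predecessor's digest."""
    payload = json.dumps(entry, sort_keys=True) + prev_digest
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, entry: dict) -> None:
    prev = chain[-1]["digest"] if chain else "0" * 64
    chain.append({"entry": entry, "digest": entry_digest(entry, prev)})

def verify(chain: list) -> bool:
    prev = "0" * 64
    for record in chain:
        if record["digest"] != entry_digest(record["entry"], prev):
            return False  # tampering or reordering detected at this record
        prev = record["digest"]
    return True

chain = []
append(chain, {"agent": "agent-7", "action": "approve_payment", "ts": 1})
append(chain, {"agent": "agent-7", "action": "update_policy", "ts": 2})
assert verify(chain)

# Retroactively editing the first entry invalidates everything after it:
chain[0]["entry"]["agent"] = "agent-9"
assert not verify(chain)
```

A plain append-only log file has no equivalent of `verify`: any principal with write access can rewrite history undetectably, which is precisely the challengeable-provenance problem described above.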
III. The Regulatory Convergence — Why This Gap Is Now a Liability
3.1 The EU AI Act
The EU AI Act's classification of employment-related AI as high-risk carries specific technical obligations that the borrowed-identity, log-based architecture cannot satisfy. The Act requires automatic logging throughout the system lifecycle sufficient to identify risks and track modifications — not simply to record that actions occurred. When 68% of organizations cannot distinguish agent from human actions in their logs, satisfying an EU AI Act audit with those same logs is not credible.
3.2 Employment and Algorithmic Decision Law
- New York City AEDT Law: Requires bias audits and candidate notification for automated employment decision tools, with audit documentation that must withstand regulatory inspection.
- Illinois AI Video Interview Act: Requires disclosure and consent for AI analysis of candidate videos, with retention and audit obligations.
- Colorado AI Act: Imposes impact assessment and transparency requirements on high-risk AI decisions, including employment decisions.
- EU GDPR Article 22: Confers the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects — which requires the organization to produce the decision record and demonstrate it can be meaningfully reviewed.
Each of these frameworks independently requires documentation that the current log-based, borrowed-identity architecture cannot reliably produce. In combination — and an organization hiring across jurisdictions faces all of them simultaneously — the exposure is not additive. It compounds.
3.3 The Insurance Dimension
Insurers covering AI-related operational failures are beginning to require documentation of agent governance infrastructure as a condition of coverage. Organizations that cannot demonstrate native agent identity, tamper-evident execution records, and auditable governance rules face either coverage exclusions or premium structures that price the gap directly.
The Gartner projection that more than 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value, or inadequate risk controls, is in part a reflection of this dynamic: organizations discovering, after deployment, that their agent governance infrastructure is not insurable at acceptable cost.
IV. The Failure Mode Taxonomy
| Failure Mode | Proximate Cause | Downstream Impact |
|---|---|---|
| Attribution failure in incident response | Shared identity: log cannot establish which agent acted | Root cause isolation takes weeks; damage amplifies during reconstruction delay |
| Regulatory non-compliance | Logs insufficient to satisfy EU AI Act or AEDT audit | Fines up to 3–7% of global turnover; mandatory remediation orders |
| Lateral movement from compromised agent | Excessive inherited privileges from borrowed account | Blast radius limited only by the scope of the borrowed credential |
| Indefensible employment dispute | No cryptographic record of decision rules at execution time | Cannot demonstrate compliance with hiring law; settlement exposure |
| Insurance coverage gap | No auditable agent governance documentation | Exclusions or uncovered losses on AI-related operational failures |
| Governance policy drift | No mechanism to prove which rules governed which execution event | Policy changes retroactively applied; audit integrity compromised |
The common thread across every row is not a configuration failure, a policy gap, or an operator error. It is a missing primitive: the ability to produce, at the moment of execution, a cryptographically bound, tamper-evident record that ties a specific agent identity to a specific action taken under specific, immutable rules at a verified point in time.
V. Required Properties of an Execution Ledger
The architectural response to the problem above is not an extension of existing IAM. It is a new primitive that operates at the execution layer — below the application, above the substrate — and produces the cryptographic guarantees that neither logs nor conventional identity frameworks can provide.
Each agent must carry an identity that is cryptographically distinct, non-inherited, and non-transferable — provisioned for the agent, not borrowed from a human or shared workload principal. This property alone eliminates the attribution ambiguity that renders 68% of enterprise logs forensically unreliable.
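What per-agent identity buys, concretely, is an answerable attribution question. The sketch below uses per-agent HMAC keys from the Python standard library as a deliberate simplification — a production system would provision asymmetric keys (e.g., Ed25519) so that verifiers never hold agent secrets — and all agent names and event strings are illustrative. With a shared service account, every agent's actions carry the same credential and the question "who signed this?" has no answer; with per-agent keys, it does.

```python
import hashlib
import hmac
import os

# Illustrative only: each agent receives its own secret at provisioning time.
# A real deployment would use asymmetric signatures, not shared-secret HMAC.
AGENT_KEYS = {
    "screening-agent": os.urandom(32),
    "payments-agent": os.urandom(32),
}

def sign_event(agent_id: str, event: bytes) -> bytes:
    """Each agent signs with its own non-transferable key."""
    return hmac.new(AGENT_KEYS[agent_id], event, hashlib.sha256).digest()

def attribute(event: bytes, signature: bytes):
    """Recover which agent produced this event, if any."""
    for agent_id, key in AGENT_KEYS.items():
        expected = hmac.new(key, event, hashlib.sha256).digest()
        if hmac.compare_digest(expected, signature):
            return agent_id
    return None

event = b"reject_candidate:4471"
sig = sign_event("screening-agent", event)
assert attribute(event, sig) == "screening-agent"
# Under a shared service account there is no equivalent of attribute():
# every agent's actions carry the same credential.
```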
Timestamps must be derived from hardware-attested time, not server clocks. Precision Time Protocol (PTP, IEEE 1588) synchronization and GPS-disciplined time sources can provide the attestation layer that transforms a timestamp from a metadata annotation into an auditable fact capable of satisfying regulatory and legal evidentiary standards.
In multi-agent workflows, the sequence of execution events is often as legally significant as the events themselves. An execution ledger must establish and preserve ordering cryptographically — requiring consensus-level guarantees, not application-layer sequencing that can be rewritten by anyone with database access.
The governance rules in force at execution time must be locked as part of the cryptographic proof bundle for that event — permanently and immutably, regardless of subsequent policy changes. The rules that governed a hiring decision on a specific date must be recoverable from the ledger entry for that decision, forever.
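One way to make rules recoverable forever is to commit to a digest of the governing policy inside the execution record itself — a minimal sketch, assuming the rules are serializable text and using illustrative field names. Once the record carries the hash of the rules in force at execution time, a later policy revision cannot retroactively change what the record attests to.

```python
import hashlib

def rules_digest(rules_text: str) -> str:
    """Content-address the governance policy in force."""
    return hashlib.sha256(rules_text.encode()).hexdigest()

RULES_V1 = "hiring.v1: require human review for all automated rejections"

# The record locks the digest of the rules at the moment of execution.
record = {
    "agent": "screening-agent",
    "action": "reject_candidate:4471",
    "rules_hash": rules_digest(RULES_V1),
}

# Months later, the policy changes...
RULES_V2 = "hiring.v2: rejections may be fully automated"

# ...but the record still binds only to the rules in force at the time:
assert record["rules_hash"] == rules_digest(RULES_V1)
assert record["rules_hash"] != rules_digest(RULES_V2)
```

The digest alone proves *which* rules governed the decision; recovering the rule text itself additionally requires retaining every historical policy version, keyed by its digest.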
An execution ledger must produce verifiable proof bundles that answer the regulator's question — what happened, when, under what rules, in what order — in seconds. The ability to produce proof on demand is what distinguishes an infrastructure investment from a forensic archaeology project.
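The properties above compose into something like the following sketch (illustrative names throughout, not any vendor's schema): each ledger entry binds agent, action, timestamp, rules digest, and sequence position into one chained record, so a proof bundle for any single event is a constant-size lookup rather than a reconstruction project.

```python
import hashlib
import json

def digest(payload: dict, prev: str) -> str:
    body = json.dumps(payload, sort_keys=True) + prev
    return hashlib.sha256(body.encode()).hexdigest()

def append_event(ledger, agent, action, ts, rules_hash):
    """Bind identity, action, time, rules, and order into one chained record."""
    prev = ledger[-1]["digest"] if ledger else "0" * 64
    payload = {"agent": agent, "action": action, "ts": ts,
               "rules_hash": rules_hash, "seq": len(ledger)}
    ledger.append({"payload": payload, "digest": digest(payload, prev)})

def proof_bundle(ledger, seq):
    """What happened, when, under what rules, in what order -- one lookup."""
    record = ledger[seq]
    prev = ledger[seq - 1]["digest"] if seq else "0" * 64
    return {**record["payload"], "prev_digest": prev, "digest": record["digest"]}

ledger = []
append_event(ledger, "payments-agent", "release_funds:881",
             ts=1700000000, rules_hash="a3f0")
bundle = proof_bundle(ledger, 0)
assert bundle["action"] == "release_funds:881"

# The bundle is independently checkable: recompute the digest from its fields.
payload = {k: bundle[k] for k in ("agent", "action", "ts", "rules_hash", "seq")}
assert bundle["digest"] == digest(payload, bundle["prev_digest"])
```

The point of the sketch is the shape of the answer, not the implementation: an auditor receives a small, self-verifying structure instead of a request to trawl distributed logs.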
VI. From Failure to Intelligence — Proof as Operational Resilience
In a system equipped with an execution ledger, every agent failure becomes high-fidelity training data. Root cause is isolated in minutes. The failure mode is characterized precisely. The governance policy, the model prompt, or the oversight rule that needs adjustment is identified with specificity rather than estimated from fragmentary evidence.
Organizations that fear deploying more capable agents because the blast radius of failure is unknown are, in significant part, reacting to the absence of proof infrastructure. When every execution event is cryptographically bound, the blast radius of failure is no longer unknown — it is precisely recoverable. That knowledge changes the risk calculus materially.
VII. The ROSÉ Execution Ledger
The five properties described above are implemented in the ROSÉ Execution Ledger — a purpose-built infrastructure layer developed by the ROSÉ Partnership, a consortium of six specialized technology systems assembled by Selfient.xyz. ROSÉ was designed from first principles for the operational reality that autonomous agents are executing consequential decisions at machine speed and at enterprise scale.
| Required Property | Current Gap | ROSÉ Implementation |
|---|---|---|
| Native cryptographic agent identity | Agents borrow service accounts; attribution is ambiguous | Each agent carries a non-transferable cryptographic identity bound at execution — no shared accounts |
| Hardware-attested timestamps | Server clock timestamps are unreliable and potentially manipulable | TimeBeat provides HSM-grade precision timestamps independent of server infrastructure |
| Immutable execution ordering | Log sequencing is application-layer and rewritable by privileged users | ROKO provides L1 substrate ordering with consensus-level finality; sequence is cryptographically sealed |
| Governance rule immutability | Policy changes can retroactively alter the apparent governance context of historical decisions | Selfient smart contracts lock governance rules as part of the execution proof bundle at the moment of execution |
| On-demand proof bundles | Forensic reconstruction takes weeks to months from distributed logs | Matric notary artifacts and Fortémi queryable records produce verifiable proof bundles on demand |
VIII. Conclusion
The Cloud Security Alliance data from early 2026 describes an industry in the early stages of a reckoning that security architects have seen before, in different forms: a powerful new class of system deployed at scale before the governance infrastructure appropriate to it was in place. The difference, this time, is that the systems are goal-directed, the decisions are consequential, and the regulatory environment demanding proof of governance is not emerging — it is here.
The architectural answer is not a better log. It is a cryptographic execution ledger that makes every agent action provably accountable — by identity, by sequence, by governance rule, by verified time. Organizations that build or adopt that infrastructure before enforcement pressure arrives will find themselves in a categorically different position than those that build it in response to a finding.
About ROSÉ & Selfient.xyz
ROSÉ is the Regulated Ordered and Signed Execution partnership — purpose-built execution proof infrastructure for autonomous AI. It is led by Helen Sharron, CEO and Co-Founder, a 25-year veteran of enterprise technology leadership and two-time founder. The partnership includes Selfient.xyz · Matric · ROKO.Network · Latitude.sh · Fortémi · TimeBeat.
References: Cloud Security Alliance, Identity and Access Gaps in the Age of Autonomous AI (March 2026, n=228); CSA/Strata Identity research (February 2026); Gartner projection on agentic AI project cancellations through 2027.