In March 2026, San Francisco once again became the epicenter of the cybersecurity world. Thousands of practitioners, vendors, and investors gathered at Moscone Center for the RSA Conference, where one theme dominated every keynote, panel, and booth conversation: Agentic AI. Not just AI as a tool, but AI as an actor.
From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. Developments like Mythos, a next-generation AI framework capable of orchestrating complex, multi-step cyber operations, highlight both the promise and the risk of this shift.
The Cloud Security Alliance predicts a surge in simultaneous AI-powered attacks and urges defenders to fight AI with AI. OpenAI has responded by scaling its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Gartner reinforces this trend, forecasting AI spending to grow by 44 percent in 2026 and reach $47 trillion by 2029. This far exceeds its projected $238 billion for information security and risk management solutions in 2026.
The Dual-Use Reality of Agentic AI
Technologies like Mythos reveal a fundamental truth: the same capabilities that benefit defenders also empower attackers. Adversaries are already using AI to enable:
- Autonomous reconnaissance and lateral movement
- Real-time adaptation to defenses
- Scalable, low-cost attacks with minimal human involvement
This is not theoretical. Early rogue AI agents are probing environments, exploiting misconfigurations, and mimicking legitimate users. Attackers no longer need to control every step. They can deploy agents that behave like identities.
Every major shift in cybersecurity has led to a wave of point solutions. The result is predictable: tool sprawl, siloed visibility, and operational complexity. These gaps often benefit attackers. Agentic AI risks are following the same path. Early signs are already visible:
- AI security posture management tools
- AI runtime protection platforms
- AI-specific anomaly detection engines
- AI governance solutions
Each may provide value, but adding more tools increases friction. Organizations do not need more dashboards. They need better context and control over the entities operating in their environments, whether human or machine.
At the parallel AGC Cybersecurity Investor Conference, AI experts and industry leaders reached a more pragmatic conclusion: organizations should treat AI like an identity. This perspective cuts through the hype. Rather than viewing AI as a new tool category that requires entirely separate security stacks, it places AI within the established and critical domain of identity security.
Because fundamentally, agentic AI behaves like an identity:
- It authenticates (via APIs, tokens, or credentials)
- It accesses systems and data
- It performs actions within an environment
- It can be compromised, misused, or go rogue
Once you accept this, the path forward becomes clearer and far less fragmented.
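The parallel with human identities can be made concrete. The following sketch models an AI agent as a first-class identity with an accountable human owner and least-privilege scopes; the names (`AgentIdentity`, `authorize`) and data shapes are illustrative assumptions, not any vendor's API:

```python
# Hypothetical sketch: an AI agent registered as a machine identity,
# subject to the same authorization check as any human identity.
from dataclasses import dataclass


@dataclass
class AgentIdentity:
    """An AI agent treated like any other identity in the directory."""
    agent_id: str
    owner: str                                   # accountable human identity
    scopes: frozenset = frozenset()              # least-privilege grants

    def authorize(self, action: str) -> bool:
        # Identical policy check for human and machine identities.
        return action in self.scopes


summarizer = AgentIdentity(
    agent_id="agent-7f3a",
    owner="alice@example.com",
    scopes=frozenset({"read:tickets", "write:summaries"}),
)

assert summarizer.authorize("read:tickets")        # within its grant
assert not summarizer.authorize("delete:tickets")  # outside least privilege
```

Because the agent authenticates, holds scopes, and has an owner, existing identity tooling can reason about it without a separate security stack.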
Identity Threat Detection as the Foundation
If AI is treated as an identity, identity threat detection and risk mitigation solutions become the logical control plane. This approach focuses on analyzing behavior across credentials and systems. It combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform.
Applied to AI, this enables:
- Behavioral visibility to detect anomalies such as unusual access, privilege escalation, or data exfiltration
- Risk-based controls to adjust access, enforce additional verification, or isolate suspicious agents
- Unified policy enforcement across human and machine identities
- Lifecycle management to prevent orphaned or unmanaged agents
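To illustrate the risk-based controls above, here is a minimal sketch that combines behavioral signals into a risk score and maps it to a control decision. The signal names, weights, and thresholds are assumptions chosen for illustration, not features of any particular platform:

```python
# Hypothetical sketch: behavioral signals -> risk score -> control action.
def risk_score(signals: dict) -> int:
    """Sum the weights of all signals observed for an agent."""
    weights = {
        "unusual_access_time": 20,
        "new_resource_accessed": 25,
        "bulk_data_read": 35,        # possible exfiltration precursor
        "privilege_escalation": 40,
    }
    return sum(w for name, w in weights.items() if signals.get(name))


def control_action(score: int) -> str:
    """Map a score to one of the risk-based controls described above."""
    if score >= 60:
        return "isolate"             # quarantine the agent, revoke tokens
    if score >= 30:
        return "step_up"             # require additional verification
    return "allow"


observed = {"new_resource_accessed": True, "bulk_data_read": True}
print(control_action(risk_score(observed)))  # -> isolate (25 + 35 = 60)
```

A single low-weight anomaly only triggers step-up verification, while compounding anomalies isolate the agent, which mirrors how adaptive verification already works for human identities.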
As rogue AI agents emerge, whether compromised or malicious, identity-driven security provides a practical defense. It enforces least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities already exist in modern identity security frameworks and can be extended to AI without introducing new silos.
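Lifecycle management is the least glamorous of these capabilities but the easiest to automate. A sketch, under the assumption that each agent records an owning human identity: flag agents whose owner has been offboarded so they can be deprovisioned rather than left running unmanaged. The data shapes are illustrative:

```python
# Hypothetical sketch: detect orphaned agents whose owning human
# identity no longer exists in the directory.
active_humans = {"alice@example.com", "bob@example.com"}

agents = [
    {"agent_id": "agent-7f3a", "owner": "alice@example.com"},
    {"agent_id": "agent-c921", "owner": "carol@example.com"},  # owner offboarded
]

orphaned = [a["agent_id"] for a in agents if a["owner"] not in active_humans]
print(orphaned)  # -> ['agent-c921']
```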
Conclusion
The conversations in San Francisco this March made one thing clear: the future of cybersecurity will be shaped by entities that can act independently. Some will be human. Many will not.
As technologies like Mythos continue to push the boundaries of what AI can do, the industry must evolve its defensive mindset accordingly. The most effective strategy may also be the simplest: If it can act, it should be treated like an identity.
By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can protect against rogue agents without adding yet another fragmented tool to an already complex defense arsenal.