The tools are real. The coverage isn't. Here's where security programs are still flying blind - and what it will take to actually close the gap.

I’ve spent my career watching organizations spend more on security and feel less secure. That paradox has a name: visibility. And right now, the visibility gap is widest exactly where the risk is highest - identity, cloud, and AI.

I started in the Marines, which shaped how I think about security to this day. It isn’t a technology problem. It never was. It’s a people, process, and discipline problem, and the best technology in the world can’t compensate for the absence of those fundamentals. But there’s a specific category of failure I see repeating across organizations of every size and maturity level, and it’s worth naming plainly: we’ve built our security programs around a perimeter that no longer exists.

The new perimeter is identity. And most of us aren’t securing it like one.

01 / The Visibility Problem

You Can Have a Mature Stack and Still Be Flying Blind

The security investments are real. The SOC tools, the endpoint coverage, the SIEM, the cloud security posture management — organizations have spent enormously, and they’ve built genuinely capable programs. And yet when I talk to CISOs, the feeling I hear most consistently isn’t confidence. It’s that low-grade unease of knowing you’re not seeing everything.

The coverage gap tends to cluster in three places:

  • Cloud environments that sprawled faster than governance could follow — often driven by business units moving at a pace security wasn’t invited to match.

  • Identity ecosystems made complex by years of SaaS proliferation, third-party integrations, and federated access models that nobody sat down to architect holistically.

  • AI adoption that happened inside the business before security had a seat at the table — and in many organizations, is still happening that way.

You can have a mature security stack and a meaningful blind spot in any one of these areas. The question is whether you know where yours are.

02 / The Kill Chain Nobody’s Naming

Cloud Misconfigurations, Identity Compromise, and AI-Driven Attacks Aren’t Competing Risks

There’s a conversation happening in security circles about which risk is bigger right now - cloud misconfigurations, identity compromise, or AI-driven attack paths. I think that framing is a trap.

They’re not competing risks. They’re a kill chain. A misconfiguration creates the opening. Identity compromise is how the attacker moves laterally and escalates privilege. And AI-driven techniques are how they do it faster and more quietly than we can manually detect.

If I had to name the single element that keeps me up at night, it’s identity compromise, because it’s the hardest to detect when executed well, and because it touches everything. But the more useful question isn’t which risk is biggest. It’s: how well can you investigate across all three in a unified way? That’s where most organizations are still struggling.

03 / The Identity Surface

We Designed Access Controls for Humans. Then Everything Else Showed Up.

The shift to identity as the primary control plane happened faster than most security programs were built to handle. We architected access controls around human users. Then came service accounts. API keys. OAuth tokens. CI/CD pipeline credentials. AI agents with standing permissions to read email and write to systems.

Suddenly you have an enormous non-human identity surface that is routinely under-governed and over-privileged. The gaps I see most consistently:

  • Non-human identities provisioned for a specific use case and never de-provisioned — they just accumulate.

  • Service accounts with standing access that should be ephemeral — standing privilege is standing risk.

  • No clear ownership model. When nobody’s accountable for an identity, nobody notices when it’s compromised.
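The first and third gaps above are detectable with a simple inventory review. A minimal sketch, assuming a hypothetical identity inventory with illustrative field names (not any vendor's actual schema):

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; field names are illustrative,
# not drawn from any specific IAM product.
identities = [
    {"name": "svc-ci-deploy", "type": "service_account",
     "last_used": datetime(2024, 1, 10), "owner": None},
    {"name": "svc-report-gen", "type": "service_account",
     "last_used": datetime(2025, 7, 1), "owner": "data-platform-team"},
]

STALE_AFTER = timedelta(days=90)  # threshold is a policy choice, not a standard

def review_findings(identities, now):
    """Flag non-human identities that are stale or have no accountable owner."""
    findings = []
    for ident in identities:
        if now - ident["last_used"] > STALE_AFTER:
            findings.append((ident["name"], "stale: candidate for de-provisioning"))
        if ident["owner"] is None:
            findings.append((ident["name"], "no accountable owner"))
    return findings

for name, issue in review_findings(identities, datetime(2025, 9, 1)):
    print(f"{name}: {issue}")
```

Even a crude review like this surfaces the accumulation problem: identities provisioned once and never looked at again show up immediately.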

It’s a hackneyed saying at this point, but true nonetheless: identity is the new perimeter. Still, a lot of organizations are treating it as an HR problem, something that lives in the IAM team’s lane and surfaces during access reviews. That’s not good enough anymore.

04 / Investigation Across Environments

Attackers Don’t Respect Your Tool Boundaries. Your Investigation Has to Follow.

If your investigation capability is siloed, meaning you can see what happened in your cloud environment but can’t correlate it to what that same identity was doing in your SaaS environment or through an AI interface, then you are going to miss things. Attackers move across environments fluidly. Your investigation has to be able to follow that thread.

What I’m increasingly focused on is making sure that when something suspicious happens, we can reconstruct the full identity narrative. Not just the event. The context. The timeline. The blast radius. Identity-centric investigation across SaaS, cloud, and AI isn’t a nice-to-have capability; it’s foundational to answering the only question that actually matters after an incident: what did this identity do, and where?
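The core mechanic of reconstructing that narrative is simple in principle: merge events from every log source, keyed on identity, ordered by time. A minimal sketch, with hypothetical event schemas (the field names and log sources are illustrative, not any vendor's format):

```python
from datetime import datetime

# Illustrative events from three hypothetical log sources.
cloud_events = [
    {"identity": "svc-ci-deploy", "ts": datetime(2025, 9, 1, 2, 14),
     "action": "AssumeRole", "source": "cloud"},
]
saas_events = [
    {"identity": "svc-ci-deploy", "ts": datetime(2025, 9, 1, 2, 9),
     "action": "oauth_token_granted", "source": "saas"},
]
ai_events = [
    {"identity": "svc-ci-deploy", "ts": datetime(2025, 9, 1, 2, 21),
     "action": "agent_read_mailbox", "source": "ai"},
]

def identity_timeline(identity, *event_streams):
    """Merge all events for one identity into a single chronological timeline."""
    merged = [e for stream in event_streams for e in stream
              if e["identity"] == identity]
    return sorted(merged, key=lambda e: e["ts"])

timeline = identity_timeline("svc-ci-deploy", cloud_events, saas_events, ai_events)
for e in timeline:
    print(e["ts"], e["source"], e["action"])
```

The hard part in practice isn’t the merge, it’s the join key: normalizing the same principal’s name across SaaS, cloud, and AI logs so the correlation is possible at all.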

05 / Securing AI

We’re Securing the Model. We’re Not Securing the Behavior.

Some of the fundamentals of AI security are being applied reasonably well: input validation, output filtering, access controls around model endpoints. But there are two blind spots I keep coming back to.

The first is the data supply chain. Organizations are feeding proprietary data into models without fully understanding what’s being retained, what’s being exposed, or what the third-party risk profile actually looks like. The model works. The data governance doesn’t.

The second is agentic AI - autonomous systems that take actions on behalf of users. An AI agent with standing permissions to read email, write to systems, and make API calls is not just a productivity tool. It’s an attack surface. The security implications are enormous, and most organizations haven’t built the controls to match. We know how to secure the model. We haven’t figured out how to govern the behavior.

06 / The Agentic SOC

What It Would Actually Take for Me to Trust Autonomous Security Operations

The conversation around agentic SOC capabilities is moving fast, and I want to engage it seriously because the potential is real, and so is the risk of getting it wrong. Here’s what I would actually need to see before trusting an autonomous SOC to act on behalf of an organization I represent:

  • Explainability. I need to understand why the system made the decision it made, not just that it made one. Black-box autonomy in a SOC is not something I can defend to a board or a regulator.

  • Constrained authority. The system should be able to act, but within clearly defined and auditable guardrails. Autonomy without boundaries isn’t efficiency. It’s liability.

  • A human escalation path that actually works. Not theoretically. Not in a demo. In production, under pressure, at 2am, on a Sunday morning.

  • Adversarial testing. This is the one most implementations skip. I want to know how the system behaves when someone is actively trying to fool it, not just how it performs against known scenarios.

Until those four things are solid, I’d trust an agentic SOC to augment my team’s judgment, but not replace it.
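The "constrained authority" requirement above is the easiest of the four to make concrete. A minimal sketch of the idea, every proposed autonomous action checked against an explicit policy before execution, with each decision and its rationale recorded for audit (the policy sets and action names are hypothetical):

```python
# Hypothetical policy: what the agent may do alone, and what needs a human.
ALLOWED_ACTIONS = {"disable_account", "quarantine_host"}
REQUIRES_HUMAN = {"delete_data", "modify_firewall"}

audit_log = []

def authorize(action, rationale):
    """Gate an autonomous action; log every decision for explainability."""
    if action in ALLOWED_ACTIONS:
        decision = "allowed"
    elif action in REQUIRES_HUMAN:
        decision = "escalated to human"
    else:
        decision = "denied"  # default-deny: anything unlisted never runs
    audit_log.append({"action": action, "decision": decision,
                      "rationale": rationale})
    return decision == "allowed"

authorize("disable_account", "impossible-travel login detected")
authorize("delete_data", "suspected exfiltration staging")
for entry in audit_log:
    print(entry["action"], "->", entry["decision"])
```

The design choice that matters is the default-deny branch: autonomy is whatever you explicitly granted, not whatever the system can reach.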

The organizations that will get this right are the ones who treat autonomous security tooling with the same rigor they’d apply to any high-stakes operational system. Trust is earned through evidence, not enthusiasm.

The security field has never moved faster, and neither have the adversaries. The gap between where our programs are and where our risks are is real, but it’s closeable. It starts with being honest about where visibility ends, taking non-human identity as seriously as human identity, and demanding that our detection and investigation capabilities work across environments, not just within them.

The perimeter shifted. It’s time our thinking did too.
