From Promise to Preparedness: How to Secure AI and Keep Our Edge
Generative AI is reshaping the tech landscape faster than most organizations can update their risk registers. For security leaders, the challenge isn’t just keeping pace and supporting revenue—it’s doing so without compromising trust, compliance, or mission integrity.
Pierre Mouallem’s piece, "Securing Generative AI: A Strategic Framework for Security Leaders", lays out a practical roadmap for this work. And while I agree with the bones of it, I’d argue the real difficulty lies not in understanding what needs to be done, but in doing it at scale, under pressure, and without stifling the very innovation we’re trying to protect.
So let’s dig in. What does it actually take to secure GenAI, and how feasible is it for teams working in the real world, under real constraints?
Governance First, or Don’t Bother
Establishing accountable AI governance isn’t optional. It’s foundational. But that doesn’t make it easy.
Cross-functional governance groups sound great until you realize you’re trying to get Legal, Product, Risk, Security, and Marketing to agree on the definition of “ethical use.” Still, this alignment is necessary. Without it, your AI initiatives are castles built on sand, especially when new regulations hit.
This isn’t a “set it and forget it” play either. The pace of AI evolution demands living governance frameworks that adapt to the evolving landscape. That means ongoing investment in policy iteration, stakeholder training, and yes, uncomfortable conversations about bias, risk, and responsibility.
Feasibility: Moderate to High, but only if executive sponsors prioritize it.
Technology Controls Aren’t Optional. They’re Your Guardrails.
Forward-looking security controls are the difference between using GenAI safely and becoming a cautionary tale.
Robust logging, access controls, and third-party vetting are all table stakes now. But most orgs still struggle with visibility. Shadow AI use is rampant because the path of least resistance is always faster than the secure one.
Realistically, you’ll need to allow some sanctioned platforms, monitor the rest like a hawk, and create discovery processes that don't punish innovation, but channel it.
Security reviews, model integrity checks, and encryption everywhere? Non-negotiable. But let’s not pretend any of this is plug-and-play. It requires resourcing, planning, and teams that know how to move quickly without compromising quality.
Feasibility: Hard, but absolutely necessary.
Your Data Strategy Needs a Detox
If you’re using GenAI on unclassified, untagged, or sensitive data without controls, you’ve already lost the plot.
Data classification, access enforcement, and lifecycle protection are the nuts and bolts of AI security. But in the rush to prove value, too many teams skip the basics. The fallout isn’t just a privacy risk; it also erodes trust with customers, stakeholders, and regulators.
Use encryption at every stage. Monitor AI inputs and outputs. And for the love of security, make sure you’re tagging and filtering mechanisms are enforced in production.
Feasibility: High—but only with mature data hygiene practices in place.
Identity Is the Core of GenAI Security
This is where the article truly nails it: identity-first security isn’t a buzzword—it’s the linchpin.
AI environments amplify identity challenges. Non-human identities are spinning up faster than you can say “API key,” and each one is a potential attack vector.
Privileged Access Management (PAM) and zero-trust architecture aren’t “nice to have”; they’re the baseline. Just-in-time access, adaptive policy enforcement, and robust authorization logic must all evolve in tandem with your AI stack.
And don’t sleep on model integrity. Malicious updates and prompt injection are real, present threats.
Feasibility: Tough to implement, but this is where you win or lose.
Avoiding the Easy Mistakes
Let’s be honest. We’ve all been tempted to trust GenAI outputs more than we should. But blind faith has no place in security.
Common pitfalls, such as ignoring shadow AI, skipping access controls, or failing to vet third-party tools, are all avoidable. What it takes is discipline, vigilance, and yes, a leadership culture that values security as much as it values speed.
The Bottom Line
Securing GenAI is a strategic imperative, not a technical afterthought. And while the work is hard, it’s doable with the right mindset and frameworks in place.
In securing GenAI, you have to think differently. Govern boldly. Engineer wisely. And above all, lead with identity.
If you’re not sure where to start, start with what you can control: your team’s posture, your policies, and your platforms. Innovation doesn’t have to be reckless. With smart guardrails and adaptive security, you can protect what matters without losing momentum.
The work to secure GenAI will be challenging, but let’s move forward with purpose and the grit necessary to protect it.