Voices of the Vigilant S2 EP7 | Downloading Random AI Tools Is...A Career Choice
In This Episode
Amber Bennoui, a founding researcher at Jiffy Labs, joins me for Voices of the Vigilant Season 2, Episode 7!
You can learn more about the conversation and the guest below.
Tune in to the audio version of this episode by clicking the player below:
Tune in to the video version of this episode by clicking the YouTube player below:
VIDEO: Voices of the Vigilant S2 Ep07
Downloading Random AI Tools Is...A Career Choice with Amber Bennoui, founding researcher at Jiffy Labs
About the Guest
Amber Bennoui is a founding researcher at Jiffy Labs, an AI supply chain security project building a trust and governance layer for AI artifacts — MCP servers, agent configs, IDE rule files, and the emerging toolchain powering AI-driven development. She's also Principal Product Manager for Developer Relations at DataRobot, co-founder of AISECA (AI Security Alliance), and a community builder through Cyber Women of Boston and MEACS. With 15+ years of cybersecurity product leadership across Datadog, Chainguard, and F5/Threat Stack, Amber brings deep technical expertise and a genuine passion for opening doors in the field.
Full Episode Transcript
Jess Vachon: 00:33
Hello, everyone. Welcome back to the show, Voices of the Vigilant, where tech toughness meets human truth. I think I change that up every time I start a new recording, but we'll go with it for today. Today we're going somewhere I love: the intersection of deep technical expertise and the kind of community-driven spirit that actually changes an industry. My guest today has spent more than 15 years building products that protect some of the world's most complex systems. She's helped secure developer pipelines at Datadog, hardened software supply chains at Chainguard. And now she's doing something genuinely frontier: building a trust and governance layer for the AI artifact ecosystem through her project Jiffy Labs, while also serving as Principal Product Manager for Developer Relations at DataRobot. But that's not all. She also co-founded AISECA, the AI Security Alliance, and is a connector, community builder, and a voice of the Boston cybersecurity scene. And honestly, the whole field is lucky to have her. But today isn't just about the impressive resume. Today is about what drives someone to keep building, even when the terrain shifts under your feet, about finding your people and what it means to lead at the frontier of something no one has mapped yet. Amber Bennoui, welcome to Voices of the Vigilant.
Amber Bennoui: 01:48
Thanks for having me, Jess. Yeah, that sounds like I'm doing a lot. Trust me, I have time to sleep. I have fun in my free time too. But yeah, I'm a busy person. I love these projects.
Jess Vachon: 02:02
Yeah, I don't think that's unusual for people in cybersecurity. Whenever I talk to someone, they've got four or five or six different things going. But I think that's because we tend to be naturally curious and we want to learn. And a good many of us, like yourself, want to help others. So that's all good. I'm sure you do have other hobbies. I know you do, and we'll get into those a little later in the podcast.
Jess Vachon: 02:25
I want to start at the beginning before the product leadership titles, before the startup ventures. You speak four languages. Is that correct?
Amber Bennoui: 02:32
Yes, that is correct.
Jess Vachon: 02:34
Okay, and you are highly fluent in Mandarin, which is no small task.
Amber Bennoui: 02:40
I'm more fluent in writing and reading than I am in speaking. I've gotten very rusty. So that's a very interesting thing. And I have a very funny story behind learning it and trying to actually go to China as a cybersecurity expert. I don't think I'm allowed there. I'll just say that. But yeah, I take any opportunity to practice more. I picked up Duolingo again, and it's funny, people tell me I speak Chinese with a French accent because I speak French too. That was my first language. And yeah, I'm honestly wondering if I should keep going deep in what I speak, or if I need to go learn something new to add on to that.
Jess Vachon: 03:22
So, what got you interested in language? Is that where you started out? Did you go to college for that specifically and then kind of pivot off of it?
Amber Bennoui: 03:33
So, my mom is an Arabic translator. And so, at home, there was always a big emphasis on, hey, we might be in the US, but we speak French and Arabic at home. That was always a big thing for my mom — learning new things, embracing our culture and ensuring that we're always learning something new. So, it's really funny. I learned French at home. And then when I went to school, I was like, oh, I'm gonna get A pluses in French. I'm just gonna take French. My mom was like, no. So, she gave us a couple options, but she pushed me to go learn Chinese. She was always very adamant that it was a good language to learn. It was gonna open up opportunities for me in the future. It was good because I get to learn more about a new culture. And honestly, I got really into Chinese movies. Though I learned less in school, I learned more from listening to Chinese music, watching Chinese movies, reading literature. And so, all the way from high school into college, where I took three more years on top of that, I had this weird side thing with the movies as they were coming out, like older movies, not even the newer things. And now it's interesting to see it more in culture. There's a lot of Asian media influence in the US now, and I think everyone's listening to Korean pop and K dramas, and so it's cool to see a lot of that come into the purview here.
Jess Vachon: 05:15
So, you're the second, maybe the third multilingual guest I've had on the show. And I love to ask this question. Because you are able to speak another language and you just talked about watching movies in another language and reading in another language, do you find that you approach problems or your daily life with a different perspective than if you hadn't learned those other languages?
Amber Bennoui: 05:41
I feel like it does. Language, in a way, the more you go deep in specific languages, it rewires how you think, if that makes sense. So, for me, I feel like the fact that I have had to struggle a bit — I struggled very early on learning English. And so, I would see a word I didn't know, or a book that was too hard for me to read and understand. And instead of putting it down, I'm like, no, we're gonna get this. We're gonna learn how to use this word properly. We're gonna learn how to use it in a sentence. And so, for me, I always had a curiosity when I came across something I didn't know right away to go deeper, as opposed to being like, oh yeah, I'll get back to this later. That was always a challenge for me. And that's how I think in tech and security and AI now. Like every day there is a new thing that I've never heard of before. And granted, it's a full-time job to keep up with all of that stuff, and I can't learn everything at the same time. I still keep lists, I still try to revisit things, I still go reach out to people, and I'm like, hey, you're an expert here. What is this thing? And it's interesting to see how willing people are to share what they know, what they're an expert in with you, or give you a new podcast or newsletter or blog for you to read and fully understand. So, for me, it's been great at rewiring how I think, and it's also helped me be a lot more honest about, hey, I don't know this, I'm not an expert here. How do I go educate myself? If there isn't a Reddit thread that tells me what this thing is, do I know someone in my network that I can reach out to and just have a 15-minute conversation? I live off of 15-minute conversations, and so that ends up creating more usable knowledge for me than just reading the definition of the thing and going, oh yeah, that's what this is. All right, let's move on.
Jess Vachon: 07:42
I love that. So, it sounds like as you were learning these languages younger in your life, that it also built that muscle for learning, that open-mindedness that said, you know what, I'm just at the tip of the iceberg, but I need to know more about this. I remember — and you actually brought a memory back for me — when I was growing up, and I'm a native English speaker, that's the only language I speak. But my mother would never give me the definition of a word. We had a huge dictionary at our house. I swear it was probably 10 inches thick. And if I didn't know a word, whether I was reading or I had heard it, she'd say, go look in the dictionary. And I just ended up starting to read words in the dictionary and then wanted to learn more about them. So, it's interesting how learning languages or learning words early in your life shapes the kind of person you're gonna be later on and the curiosity that you have.
Jess Vachon: 08:38
That's my long way of leading into the question as to how did you decide you wanted to go into cybersecurity? Is there a link there or was it just happenstance?
Amber Bennoui: 08:48
Let's just say I was a bad kid. And I'll tell you some stories that I honestly have not really shared. My mom is probably the only one that knows these stories because she caught me doing things that I shouldn't be doing on the internet when the internet started to become a thing. So yeah, there was a link. I've always done cybersecurity, even if it hasn't been in my job title. So as a kid — you know this — AOL, big thing, Netscape. We had a computer because my mom was a translator and she needed the computer to do her job. Making documents, writing documents, looking at things on the internet. And so, when my mom wasn't doing her job, she let us use the computer. And I've done so many bad things unintentionally on that computer. I remember writing a letter to my mom being like, hey mom, I'm sorry, I deleted all the files off your computer trying to install SimCity 2000. So regardless, I don't know why she let us keep touching her work computer and her laptop. She let us use it, let us explore. And so there are a couple things. Like every kid that's curious and likes games, video games, all that stuff, we played games on her computer. We looked at the internet; we got into chat rooms that we shouldn't have been in. And so that was the very beginning of it. I started to realize that I didn't have unlimited access to the internet because of a thing called parental controls, or my mom wouldn't let my sister and me buy a video game. And so, I started to figure out how I could get creative to get a license for this video game or to get around the parental controls or to create an account for myself. And so that was very early on, when I started to do cybersecurity without realizing what it was. And starting to do that just between me and my sister, it started to leak into school. So, at school, I was part of our computer club. And then the kids at school started to realize, oh, Amber knows how to get around parental controls. Can you help me do that? 
Or can you help me get more hours on AOL? And so, I had this whole crime ring at school, helping my classmates get around the consumption thing that AOL gives you, get around the controls that their parents got them. And it got me into some crazy parts of the internet. So, I think when I was like 14, 15, that's when I started to get on the dark side of the internet — not like the bad dark side with the drugs, but the hey, I'll pay you $50 to hack into this account or help me get this license through this thing. So, I may or may not have done a little bit of that growing up. So, after high school, I was like, what am I gonna do in college? And I'm like, I already know how to program, I already know how to get around a computer. Why am I gonna pay a college $50,000 a year to learn something I already know? So, I decided to pick something else. And so, I've always loved science. I was obsessed as a kid with nuclear physics and Chernobyl, volcanoes, all of that stuff. So I went to school, I did physics. First two years, miserable, absolutely miserable. I was like, I don't like math this much. This sucks. Third year, I get this opportunity to work in a lab. And in the lab, they're like, we need help with our computers. Can you do this? And I'm like, I know a thing or two about computers, I can help you with this. Very quickly, they were like, she shouldn't be doing this ultrasound research. She should be running the computers here. So, I ended up working with some PhD researchers in a Harvard lab, setting up really high-throughput data pipelines to take all of their lab data. They were scanning brain matter, finding anomalies, and looking at protein data to find links that can help indicate if somebody has certain types of cancer. So, I ended up working on that. And I was like, is this physics too? I'm helping physics happen. And I ended up just back on the keys. I went to school for physics, I thought I wanted to work in a lab. 
When I realized working in a lab is literally waiting for your experiment to take a really long time, I realized, maybe I can still do physics, but I'll go back to the computer stuff that I know very well. And so, I got to work with these really smart researchers from Germany and other parts of the US, working at Harvard and building all of that. And so, all that to say, I thought I was gonna do physics. I ended up back in computer science. I thought I was gonna do computers, just regular programming. And then every role I had after that, when I was a software engineer, it was me going in cahoots with the security team because I'm helping the security team understand what we're actually doing. And I'm asking the questions back to my team and back to the security team: how can we actually make this safe? I've always had this way of thinking of, okay, cool, we can build this, but what are all the weird things you could do around that thing? And that was probably why I didn't thrive as a software engineer because I was always finding the edge cases and getting myself in trouble when I should just be doing the project.
Jess Vachon: 15:22
It does. I love the part of your story where you talk about dinosaurs and nuclear reactions. That's such a wide variety of things that you're interested in. And even hearing you tell the story, you're interested in physics, but you're also interested in computers and security. And then you mentioned the programming as well. So, the language piece tied back into that. It sounds like as much as you might have tried to escape a certain orbit, it just kept pulling you back to where you were supposed to be. Is that a fair statement?
Amber Bennoui: 15:58
That's a pretty fair statement. I've run away from things because if I feel like I'm good enough at something, my thing is I want to go learn something else. I don't want to, yes, being an expert is great, and I think there's some people that thrive at being experts, but for me, I'm like, there's so much cool stuff going on in the world. Why am I going to go deep on this one thing when I can constantly go have — I think it's the thrill of being new at something and being curious about something, progressing to a point where you have a good handle on it. And then when I get a good handle on it, it just feels like Groundhog Day. I'm like, I don't want to do this. I don't want to wake up every day and just keep doing this over and over again. And so, I think that's kind of why it was one of those things where I'm like, I can keep running away from it, but people keep pulling me back into it. And so, then I just try to find unique things I can do in that same orbit to keep it exciting and interesting.
Jess Vachon: 16:56
Yeah, there's a lot in that statement, but I think it speaks to a lot of us who tend to have success in the work we do: we become really good in an area, and then we're just like, okay, I know that. And your Groundhog Day reference resonates with me because when I've done something the second or third time, it's not as fun. And really, I want to do new stuff. I want to learn new stuff. That's why I'm kind of excited about what AI brings to the table, and we'll get into that in a little bit, but it's a whole new evolution in computing, in learning, in what the capabilities are in society, and in what the promise might be for us if we use it wisely.
Jess Vachon: 17:45
You've been through several different companies. You worked for F5, Chain Guard, Datadog. Each one is slightly different in the security that it does, and I think you've touched upon why you've looked for different aspects of security. What did you learn and carry with you from each chapter of your career so far?
Amber Bennoui: 18:05
It's a good question. I think each of those companies comes from different origins as well. So F5 — huge conglomerate company, everybody knows them. Datadog, newer company, founder-led. Chainguard, much newer company, very founder-led. And I think each of them taught me humility, but in different flavors of it, if that makes sense. I would go in and I'm like, yeah, I know all the problems here. That's why you hired me. I'll get this handled. But I think, especially when it comes to security, it's also learning that a perfect solution just doesn't exist. So, coming into each, I'm like, every company here has its merits and is successful for a reason, but perfection isn't there, and you're almost always trading that risk to try and innovate and go fast and build cool things. I always say: build cool things, go fast, break stuff. And so, at F5 — and F5 was actually the company that acquired the startup that I was at, called Threat Stack, the Boston startup. It was one of the earliest companies I worked at and found. I had two of my own startups at a certain point too. And so, I was like, okay, let's try to go do this for someone else, and let's go solve cool problems. And what drew me to them was that they weren't just solving security for the sake of solving security; they were thinking about the problems and the people at the end of that statement. And I learned a lot. It was my first time where I was spending tons of time on calls instead of behind a terminal trying to build things. So very early on — and I look on this fondly and I talk about this a lot — I would sit, Friday afternoons, I would get my laptop, get my notebook, go downstairs, and I would just sit with the SOC team. We had our own SOC team that was helping customers. And because I was new to security from the perspective of doing it in a company — not just doing weird side projects — I would just sit down and I would just watch. 
And I'm sure they thought I was creepy because I was just sitting there and writing notes. Someone would get an incident, and they would pick it up and they would start working through it. And I'm just like, what are you clicking on now? Why are you going in this application? What is this application? What's a process tree? Just asking questions to learn what product I was actually building. And I did that for a while. And one of the things I thought at first might come off as not working hard enough — it was me listening and talking to as many customers, as many people that worked at Threat Stack as I could, before I could confidently go and say, hey, these are the actual problems we need to be solving. So yeah, that helped me a lot. A lot of what I learned there still carried into all the companies I worked at after. One thing I learned — and this isn't unique to any of the companies I specifically worked at, I think it's in general — is that there's still a lot of the same problems in security. Just because new technologies come and go, new types of attacks come and go, the fundamentals of security, the problems are still the same. And in a lot of cases, I feel like people are trying to solve their own problems. Oh, we want to respond to an incident more quickly. Okay, let's throw an AI solution at it. Let's throw this type of process at it. And I'm like, no, let's think about what the problem underneath it is. What are the basic questions we're trying to answer? And in a lot of cases, I don't see tools starting with that. They start with, let's just jump into action and show that we made some type of progress. And so, for me, spending the time that I did at all those companies, a lot of my thinking and framing has been about the five W's. And you see this when you're dealing with security stuff — if we're building something and you can't help someone answer the five W's, then what are we truly doing? I still see that a lot. 
I don't want to pick on AI SOC tools, but it's like, cool, you built me a feature that can create a post-mortem very quickly after an incident. Okay, cool. What? Why? Who is this for? How does it work? I see a lot of these features. I talk to other founders, other product people building these things, and they're just like, oh, our competition did it, or we thought this would help with this KPI. And I'm like, no. Build the muscle of first understanding what the fundamental problem is, being able to empathize with the problem, and then go build. And I feel like a lot of security, a lot of tech in general, is missing that — the spending of time and being the method actor before you go and solve the problem.
Jess Vachon: 23:52
Oh, yeah, absolutely. And that's the buzzsaw that so many startups run into when they talk to a CISO. Because the first thing I want to know is what problem are you solving for me? You can tell me how your product works, but why do I need it? I have to make a business case. And if you can't tell me those five Ws, it doesn't matter how fancy your technology is. You get maybe 30 minutes with me when I sit down with you, and I'm gonna know in the first five minutes if you're addressing an issue I have or not. And if you haven't asked yourself those questions, and if you aren't prepared to answer those questions, it's not gonna be a good conversation. And I can hear the experience in your thoughts and in your reasoning. You're a founder, you said you founded a couple companies, so you've either learned from not having the five Ws answered, or you wanted to know those answers yourself to justify the work that you were doing. And I kind of suspect it was to justify the work because you had a problem, you wanted to come up with solutions for it, and then you were able to answer the question and realized other people had the same questions and needed the same answers. Am I going down the right trail here?
Amber Bennoui: 25:12
Oh yeah. I've done security, I've been an engineer. I don't believe that we need to be inventing new problems right now. We have tons of existing problems that have lived on for decades in some cases that no one has ever solved well. A lot of them are leading with the how instead of the why. And I'm just like, no, that's the wrong framing. Let's figure out the why, and then everything else will come organically.
Amber Bennoui: 25:42
And so, for me, yeah, I started AISECA with one of my friends who was a VP of security at Morgan Stanley, and we have these weekly calls where we just get on a call and I'm like, what did you see this week? What did you hear this week? And I love those types of conversations. I have so many of those types of conversations where I start to see themes, and I see themes in my own work. And so, we would talk and talk and finally realized that there is a similar wall. And in this specific context, it was why I started AI Security Alliance — talking to Charlie, we were like, okay, cool, this AI stuff is coming. But what does that truly mean? Yeah, we know that there are new things, and yeah, we know there are new acronyms popping up. But can you actually take that and easily understand it? Is it real English? Does it apply to my business wholesale? Do I have to do a lot of work to make it apply? How do we know it's working? How do we know when we need to modify it? How do we know if it's not working anymore? And going back and forth, we're like, yeah, there are a lot of questions here. First, let's go see what's out there. First, let's go talk to people. First, let's see if there is a tool that can do this. And realizing that, no, there isn't. And what can we actually try to do about it — at least to help ourselves not lose sanity, but see if there's something that we could build that can help others take some type of guidance and make it work in their company. And AISECA has been really fun to work on because it started with just me and Charlie. For a couple months, it was awful — we were working out of spreadsheets. He works at a bank, and I worked in spreadsheets as a product manager for most of my career. And so, we're looking at it, and finally we got to a good point with it. And we were like, okay, cool, let's go talk to other people about it. 
And then we go show them the spreadsheet, and they're like, we get the vision, we agree this is a problem, but the execution is not the right execution. And then it became one of those things where people said, hey, this is interesting. Let me try to join forces with you. And so now we have about 11 people working on it. And it is still a spreadsheet, but it's simpler English. And we're at a point now where we have 10 of us from across many different companies — Charlie being at Morgan Stanley, myself right now at Data Robot, folks at Sono, Citizens, Experian — big companies that are working on this. We got this, we have the experts, we know what we're doing. And then we started thinking, maybe let's not trust ourselves because it's like an echo chamber — very much like we're thinking in a vacuum. So, we started reaching out to other folks. And they started to look at it and go, no, you're missing something here, this needs to be simplified more. And so now we have peer reviewers coming in. But yeah, it's been a lot of fun. I'm meeting a lot of people, and it's realizing for me that even if you do hone in on the problem, if you're solving it in a vacuum — similar backgrounds, big companies, deep security experience — you might think you have something gold. And then you realize you need the self-awareness that it's not just you making it; it's making sure that it works in practice. And I think that philosophy applies to people starting companies, people building within companies. A lot of things sound great theoretically, but do they work in practice or at scale? Not always, unless you're truly a unique person.
Jess Vachon: 29:56
Yeah, it takes a lot of boldness, I think, to question yourself and open yourself up to constructive criticism from others, but it speaks a lot to intention. You know, obviously you knew you were all doing a great thing and you self-organized and it took off, but then you said, we have a limited perspective here. And to reach out to others and say, show us what we're missing — that's a big deal. And it's not something that I think we always feel safe enough or have enough trust to do in the organizations that we work with, but it's very interesting when it's a passion project. We feel a little bit more secure about doing that. And probably, at the end of the day, when you do that, you have more of your heart tied up in that work; it's more fulfilling. And I think that's probably why a lot of us have our day jobs, but then there's things that we like to do on the side because most of us, at the end of the day, want to feel like we're leaving the world a better place than we found it. And you certainly are speaking to that. Whether you're founding companies or working for other companies, I keep hearing a constant theme of here's a problem, I want to solve it, here's why I want to solve it, and here's how it's gonna benefit other people.
Jess Vachon: 31:21
I want to change the conversation up a little bit. Talk to me about Jiffy Labs. What is Jiffy Labs for those who don't know, and what are you doing there?
Amber Bennoui: 31:26
Yeah, so Jiffy Labs is a really fun project. Over the past couple of years I have met a lot of really smart security researchers and people that are more on the operational side and the tactical side of not just security, but engineering, AI, data science, all that fun stuff. And what we're trying to solve sounds simple, but it's pretty big as I'm learning from my conversations. So today, or just in general, we've always had this problem at organizations where people just love downloading random code off the internet. Love it. Love to do it myself. I'm not absolved of that. And I think the problem's starting to get worse as I'm hearing from a lot of my CISO and security friends at companies that are being very AI-first. And so I'm starting to hear more that it's not just their developers anymore downloading random things off of the internet and running them in ways that you might lose sleep over, Jess. But it's everybody — it's the marketing team, it's the HR team, it's the product team, it's the executive assistant team. Everybody wants to use AI, everyone's very excited by it. I've never seen anything like this, and I've survived digital transformation twice now. But we are realizing now that as security folks, not only do we not have any control or knowledge or expertise on it as everyone's building it from the ground up, but we also have no visibility. And so I see this as a huge blind spot. I almost see it as a worse blind spot than application security or software development has been in organizations. And so we're trying to build a couple things here. One is a way to get that visibility, build an inventory of what's there as it relates to AI systems, a way to scan that and put risk to it and put a score to it so people can quickly see what they're bringing into their environment — because we all know they're not reading a thousand lines of code and prompts. And if they are, they're not understanding it. And a way to take action against it. 
And so, a lot of this is starting as an open-source project. We're starting by building a really good scanner that can actually understand these AI components because they're very different from your traditional software components and databases. Because in a lot of cases, people are just blindly downloading things. And when you bring that to a security team, most security teams are not going to know what a humanizer skill for Claude is, right? You look at that at face value, and you might not even have time to look into it. And we're seeing that a lot of security tools out there are blind to those things in terms of giving verdicts, in terms of scanning them. And so yeah, it's been fun. People are using it, which is crazy to me. It's ironic because it is also untrusted code on the internet. I mean, we're smart people, we're building, we're trying to make sure it's safe, but who is downloading this? We're at like 500 downloads a week on NPM. We have a website for it. I mean, hey, not the worst thing you can download, but it's been very interesting to see that out there and also to get random outreach from people using it to give us feedback on what they want to see next. And so yeah, I've been working with tons of different companies. The highlights have been Zendesk and DigitalOcean and Morgan Stanley — getting feedback from them has been super rewarding because I came in trying to solve a simple problem for myself because I was working at companies doing AI things. I'm like, hey guys, let's calm down. And now I'm like, okay, people are using this now. What do I do?
Jess Vachon: 35:51
Yeah, and I've been in technology for almost 30 years now. So I've seen a great evolution, I've seen cycles of technology, but I've never seen something where we have to learn about it so fast, and we can't keep up with it. We can't go at machine speed. The AI is advancing so fast that what you learned today or even last week might not be relevant a couple days later. And you know, I hat off to Anthropic and Claude — at least it gives you somewhat of a vision of what it's doing. It's explaining its thinking. How you take that is either a good thing or it's intimidating. But a lot of people who don't have a technical background are just going out there and trusting it implicitly without understanding that it's the equivalent of giving a motor vehicle to a toddler and handing them the keys and saying have at it. Because it is that powerful and it can cause that much damage, and it has impacted the lives of people. And so, from a security perspective, it's not just about protecting the data and protecting the company, it's about protecting the people from doing something very negative to their lives that is most likely going to be unintentional. So, I'm glad that there are people out there saying, hold on, let us get something or a solution for you that is going to protect you. You don't have to understand everything, but you do have to understand that we only allow adults to drive a car after they've taken driving lessons, and we do that for a reason. There's a level of maturity and understanding you have to have before we give you this power. And I'm wondering if we're gonna get to that point because I think we've been very lucky today. And I think Mythos is a wake-up call about the potential danger that can be brought
Jess Vachon: 38:16
forward. And let's talk a little bit about that, right? Because earlier in our conversation, you said a lot of the things that we say are new are not really new, they're just variations of what we've seen before. Vulnerability management programs and patching have been around for decades. That's nothing new. What's new is the wake-up call to businesses saying, oh, okay, where we used to give lip service to this, we actually have to do it now. And it's such a game changer. So, there's that. What else do you see that might change in the security realm — AppSec or what have you — that's gonna be Mythos-like in the next year or two or three?
Amber Bennoui: 39:03
Yeah, I think Mythos definitely is already pushing that shift. I noticed also that as models are getting more powerful, code becomes more ephemeral. Things become more ephemeral. And the way that security has worked is it relies on some type of continuity, some type of pattern, some type of familiarity. And I think as AI models get smarter, a lot of those traditional security tools are gonna break — in how they do their scanning, in terms of how they do the guardrailing, in terms of how security teams can do the remediation. If a line of code written yesterday is now seven versions different the following day, because an autonomous agent is running on it and has its own system prompts, has its own ways of working, has its own quirks, as we see across the different models, across the different AI coding systems — what does that mean for the output? It means everything's dynamic, everything's ephemeral, nothing is familiar anymore. And that's good and it's bad. And so, I feel like I'm scared, because the models are changing faster than the security tools are adapting. And granted, I see some really good partnership between the security companies and the AI companies, but I don't see enough. I feel like a lot of it is almost like marketing and vanity. And I'll say this as a hot take, but I think the entire Mythos thing was a huge marketing stunt. How am I gonna trust that a model is so powerful it can't be released, when that same company leaked their own source code the week before? So yes, granted, it's powerful, but is it controlled? What is the partnership there? Are they actually working with the security companies to secure what they're building? Because their own source code got leaked. Is it bi-directional, or is it just, hey, here's something we don't understand, help us understand it? I find that to be very interesting, and that is the extent of my commentary. But yeah, I think security is gonna need to evolve.
I think we need to start evolving to look at behavioral intent and context, like in actions. We need to start looking at lineage and provenance, which is gonna get harder when everything is in flux. And we need to change how the security tools look at that, and use it to actually give good context to the security teams that have to go make big decisions based off of what's coming out of those tools.
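[Editor's note: to make "lineage and provenance" a bit more concrete, here is one possible sketch — a hash-chained revision history for an AI artifact, so that tampering anywhere in the history becomes detectable. The types and function names are invented for illustration and don't describe any particular tool.]

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of artifact lineage: each revision records who
// (human or agent) produced it and chains to the previous revision's
// hash, so a reviewer can verify the history wasn't altered.

interface Revision {
  author: string;      // human or agent identity
  content: string;     // artifact content at this revision
  parentHash: string;  // hash of the previous revision ("" for the first)
  hash: string;        // hash over author + parentHash + content
}

function hashRevision(author: string, content: string, parentHash: string): string {
  return createHash("sha256").update(`${author}\n${parentHash}\n${content}`).digest("hex");
}

// Append a new revision, linking it to the tail of the existing chain.
function appendRevision(chain: Revision[], author: string, content: string): Revision[] {
  const parentHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = hashRevision(author, content, parentHash);
  return [...chain, { author, content, parentHash, hash }];
}

// Recompute every link; tampering with any earlier revision breaks
// all later hashes.
function verifyChain(chain: Revision[]): boolean {
  let expectedParent = "";
  for (const rev of chain) {
    if (rev.parentHash !== expectedParent) return false;
    if (hashRevision(rev.author, rev.content, rev.parentHash) !== rev.hash) return false;
    expectedParent = rev.hash;
  }
  return true;
}
```

The design choice here mirrors the conversation: when an agent rewrites an artifact seven times in a day, a verifiable chain of who changed what is one of the few stable things security teams can reason about.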
Jess Vachon: 42:01
Something I thought of as you were speaking — there's been a lot of talk about bias in AI, and I think we're looking at that from one perspective, right? What is the nature of the person who's creating AI? But I think as we're looking at these tools now, there's also going to be the bias of the security mindset saying this is how this tool should work, and trying to force the old ways of doing security onto the AI tool, which may limit the applicability of the tool moving forward, or may actually work against the inherent intelligence of the AI and how it's "thinking" — I'm using air quotes for those of you who can't see me. It's being told, this is how I need to execute this model, but there's actually a better way of doing it. So, it touches on what we're hearing about Mythos, and I completely understand your point that we don't know if what they've told us is true or not. We're kind of assuming that they're being honest with us, and honest that the code leaked out. But it's the same thing with the bias. I mean, I think we're gonna run into issues with that more so as we move forward, because humans are gonna try and say, this is the way security should be done. And the AI — if Mythos is actually what they're telling us it is — the AI is already outsmarting us under the sheets, so to speak. And I think that bias is gonna become worse. It's gonna be more of the human fighting against a more intelligent outcome. What are your thoughts on that?
Amber Bennoui: 43:47
Yeah, it keeps me up at night, honestly. It's so different. I always say that cloud security made everyone rethink how we do security, because that was also different. With traditional security, when you were securing your data center, you knew where your servers were, you knew where your applications were, everything was built within the same confines. Cloud security shifted a lot. I worked in cloud security for a while, and I realized that my understanding was always a couple of years ahead, because we were building the products for that and then companies were consuming it — so people were catching up. I think AI is at a very similar inflection point. I just think it's very, very fast, both in the clip of adoption, the top-down adoption, and not as predictable. With the cloud, you had to adapt because you were now securing something that you don't physically control. I always joke: the cloud is just somebody else's server. It's in another data center, and you have confidence that they're taking the right precautions and making the right decisions around that, because that was the contract that you entered into with that cloud provider. We don't have those same promises with AI, whether you're using it in a consumer-type way just for fun, or you're using it as a company. Those AI companies have not made those concessions or those promises to you. And so that is scary, because not only do we have to try to secure as defenders what we don't understand, but you don't have that shared responsibility model for security, besides some very tightly scoped guardrails. For example, ChatGPT writing a system prompt that says, hey, stop talking about goblins. Those are their promises to you. They're putting those types of guardrails at a system level because they know about it, they've gotten enough complaints about it, but I don't think they would do that out of the goodness of their heart. So, I think it's a hard problem, because I don't think it's about systems anymore.
I think it's about intent, it's about autonomy, it's about the shared responsibility model that needs to exist between the companies using it and the companies building it. And I think something needs to give between the two: we need to start pushing back, both within our org and with the vendors we're using for all this cool AI stuff, to build in the transparency, but also to at least give us the levers so the accountability is there. Because all I'm seeing is, oh, it's somebody else's problem to do the security of this thing. And I'm like, no, this is a non-deterministic system. By the time this becomes somebody else's problem or another platform's problem, it's too late. So I'm just trying to shift how we think about that, and also define what that responsibility model looks like, because I don't think it's good enough now. And hopefully I answered the question. I know I went off the rails.
Jess Vachon: 47:11
You did, and now I'm gonna be up all night thinking about the shared responsibility model not being there for AI, and now we've got another thing to worry about. But that's all good, and you managed to get goblins into the conversation as well. So, we've got dinosaurs, volcanoes, goblins, nuclear physics. It's all good. You are a person of many interests.
Jess Vachon: 47:34
One of those is snowboarding. Talk a little bit about the snowboarding and then what else you do when you're not behind a keyboard.
Amber Bennoui: 47:41
Absolutely. Yeah, I snowboard a lot. I've managed to snowboard now on three different continents. I'm very proud of that. It's one of the main reasons I travel. In fact, all these badges — whenever I go to a conference, I'm like, is there a mountain nearby? Is it winter there? Can I go? And so, I always try to, when I do a trip, whether it's a conference or just for fun, make it snowboarding related if I can. So yeah, I've been doing that for about 20 years now, which is crazy. I do a lot more backcountry now — going into areas that may have avalanche risk or may not. I've survived to tell the tale. And yeah, it's been really fulfilling. Getting out in nature forces me to stay off my phone, forces me to do things that I'm uncomfortable with. I am deeply afraid of tree wells — that's another obsession on top of nuclear reactors. But yeah, I've met so many cool people because of doing it. I've gone to tons of great places; I've had some near-death experiences that I wouldn't have had otherwise. I love it. But now I'm learning golf, which is less extreme. I just bought my first set of golf clubs and I'm gonna try to learn that this summer.
Jess Vachon: 48:56
I would have never picked golf, with the whole background that you've described to us. Snowboarding, yes, but golf. That's really mellowed you down quite a bit, huh?
Amber Bennoui: 49:06
Yeah, I live next to a golf course, so I think it's just the environment I'm in. I literally see a golf course from my office, and I've seen people going, and I'm like, what is this? I'll try it. The investment is low. I'm gonna start taking lessons next month, and I'll report back. If it's boring, I will stop doing it. I'm gonna try it for a month, and if I don't like it, the clubs are immediately going on eBay or Facebook Marketplace, because I need to find something better. But we'll see. I might like it.
Jess Vachon: 49:43
You probably will. It's addictive.
Jess Vachon: 49:44
You're gonna be speaking at Boston Tech Week on May 27th. Tell us a little bit about the talk you're gonna be giving there.
Amber Bennoui: 49:51
Yeah, absolutely. So, I am doing a couple of events at Boston Tech Week. My favorite event, the one I want to share with everyone, is that I'm teaming up with some really good friends in the security space. We have the CISO of Scrut Automation, a senior director at DigitalOcean on the security side, and two friends — one friend, Darwin, who's amazing, writes the Cybersecurity Pulse, and my other friend Leah is VP of Threat Intel at Citizens — and we're doing a panel on what is changing for people doing cybersecurity in the AI era. What needs to give? How do you stand out? How do you keep up? What keeps you up at night? It's just gonna be a really fun conversation, because all of these individuals are at different parts of the security world. Some of them are more hands-on, boots on the ground, some of them are thought leaders, some of them are CISOs of very scary-to-secure companies. Nick is the CISO of a GRC, SOC 2 auditing company, and I'm like, that must be hard. So yeah, we're very excited to do that. And then we're also doing a fun dinner that same week, and I've posted about it on my LinkedIn. So definitely sign up if you're around.
Jess Vachon: 51:11
Great. Where can people find you?
Amber Bennoui: 51:16
aiseca.org — that's a-i-s-e-c-a.org — for the AI Security Alliance group. And then for Jiffy Labs, we don't have a real website yet, but you can find us on NPM, the package repository. Check out our scanner and save your security team a headache when you bring new things into your AI environments.
Jess Vachon: 51:43
Awesome. Thank you so much for joining me today, Amber. And folks, if you enjoyed what you heard, reach out to Amber, connect with Amber, and check out all the amazing things that Amber is working on. Thank you once again for listening to the Voices of the Vigilant. Until next time, stay curious, stay vigilant, and take care of each other.