Use Red Teaming To Build, Not Validate, Your Security Program


When did we all agree that red teaming was about validating security? Increasingly, red teaming seems to be a catch-all term for a whole lot of testing that isn’t clearly defined, and as a result it’s hard to see its value. Or it’s done purely for compliance reasons, with no intention of improving defenses. In this episode we examine the value of moving red teaming upstream, testing your infrastructure as-is rather than building out your program just to validate that it’s “ready.” If you test earlier, you’ll know earlier what you need to build out your security program.

Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark (@dspark), the producer of CISO Series, and Steve Zalewski. Joining us is our sponsored guest, Richard Ford, CTO, Praetorian.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, Praetorian

Praetorian helps companies adopt a prevention-first cybersecurity strategy by actively uncovering vulnerabilities and minimizing potential weaknesses before attackers can exploit them.

Full Transcript

Intro

0:00.000

[David Spark] Red teaming holds the potential to show you where you need to build your defenses. So, why do most organizations view it as a way to just validate what’s already been built?

[Voiceover] You’re listening to Defense in Depth.

[David Spark] Welcome to Defense in Depth. My name is David Spark. I am the producer of the CISO Series. And joining me for this very episode – you know him, you love him. You’ve brought him into your home as you listen to him. It’s none other than Steve Zalewski.

Steve, say hello to the wonderful audience.

[Steve Zalewski] Hello, audience.

[David Spark] That is the sound of Steve’s voice. Our sponsor for today’s episode, a phenomenal sponsor of the CISO Series, we adore them, it’s Praetorian. They are your offensive security partner. And when I say offensive security, I’m not talking about them attacking others, but rather that they work with you to actually attack you, to find where your weaknesses are in the early stages of building your security program.

And in fact, we’re going to talk about that today. In fact, all throughout today’s show. And in fact, they’re responsible for our guest who I’ll introduce in but a moment. But first, Steve, when did we all agree that red teaming was about validating security?

It seems like increasingly red teaming is a catch-all term for a whole lot of different testing that isn’t clearly defined. And as a result, it’s hard to see its value. Or it’s purely done for compliance reasons with no intention of actually improving defenses. But there is a compelling argument that moving red teaming upstream, testing your infrastructure as-is rather than validating something that’s “ready,” can make it much more valuable to your organization.

Steve, you asked this very question of the community, and there was definitely not universal agreement, was there?

[Steve Zalewski] Boy, you got that right. When I posed the question… And for the audience, we did a show a while back with Ron Gula talking about blue teaming. And we talked about red teaming/blue teaming. That was part of the genesis, when we were having that conversation.

I’m like, “Well, if most people can’t do red teaming, we got to move it upstream so that we can do red teaming.” And that was where we started. And so when I posed the question, it was relatively simple. And to your point, David, wow, it got a lot of responses, and they weren’t all uniform.

And it really called out what you talked about here, which was there is not a common understanding of what red teaming is or what they want it to be, so I’m really looking forward to this conversation.

[David Spark] Or what it used to be. And by the way, for our audience, don’t go looking for that Ron Gula blue team episode. It is actually… While we recorded it earlier, it’s going to air later than this episode. Don’t worry, Steve. Steve didn’t know.

[Steve Zalewski] Sorry. Damn.

[David Spark] It’s all right. But let’s bring on our guest. Very excited to have our guest here. He’s the CTO for Praetorian. We’ve had him on before. He’s awesome. So, we said, “Why not come back again?” Our sponsored guest, Richard Ford. Richard, thank you so much for joining us.

[Richard Ford] Thanks for having me back.

Why is this relevant?

3:06.518

[David Spark] Jonathan Waldrop, who is the CISO over at The Weather Company, said, “A threat modeling approach is a better way to achieve the end result we’re looking for. This allows teams to look at a variety of potential problems, and it helps you consider all the ways vulnerabilities can be exploited.” Luke Jennings of Push Security said, “There should definitely be security assurance exercises through the project lifecycle.

You want to catch issues early that can be fixed cheaply rather than require significant and costly changes later. Red teaming should definitely be very open scope, goal driven, and conducted periodically, not just when you feel you’re at your strongest and ready for it.” So, Luke’s quote right there, Steve, is, I think, the argument we’re trying to make: stop waiting until the end.

You got to push it upstream because, like you said… And I’m interested to know, you could solve problems a hell of a lot cheaper, couldn’t you?

[Steve Zalewski] Well, here’s what was so interesting – look at the difference between what Jonathan and Luke said and what I posed for the question. My question was, “Should we do red teaming before we have a defense established to know what is important to protect?” That was my simple premise, to be able to do a little maverick thinking.

Both of these responses are other ways to look at the problem, which is what I found so valuable. It’s looking at the different ways that people can see red teaming adding value compared to how we’ve historically defined it.

[David Spark] I totally think that they are seeing it in a version of the same way. You were nodding your head all the way through this, Richard. Your take?

[Richard Ford] I think what you’re seeing is the horrible definitional instability we have around what a red team is. So, I’d like to go back in history. I think Micah Zenko wrote a great book on this called “Red Team,” so you’d think it would be relevant, and it is.

It is a fantastic book. Where red teaming really comes from is taking the contrarian or the adversarial viewpoint. And so both of these folks are actually right. Jonathan is saying threat modeling. Guess what? Threat modeling is a form of red teaming. You’re pressure testing the idea.

And the problem is that the word “red team” has come to mean, “I will potentially take a bunch of scanners and run it against your stuff,” or, “I will take some smart people with a hammer and beat your stuff.” And because the term is so loosely defined, it’s very difficult to have a single conversation around it.

Because I’ll say tomato, and you might hear Aston Martin or some other thing.

[David Spark] [Laughs] But that is a good point. Because a lot of the definition of red teaming and what we’re talking about here is when to do it, when to test your defenses. And it’s a very, very different attempt at different times that you do it.

Because when you’re testing defenses early on versus later on is two completely different stories. Yes, Richard?

[Richard Ford] Absolutely. So, if you brought me in and said, “Hey, we’ve got this design. Let’s red team it. We’ve got like 40 lines of code,” of course I’m not going to come in and buffer overflow it. What we’re going to do is stand in front of a whiteboard and talk about the ways it could be broken so that we can do what we talk about but seldom do, which is bake security in.

And so a red team early could be a paper exercise, or it could be a whiteboarding exercise. It’s very collaborative rather than adversarial. It’s just somebody comes in and pressure tests everything. And then as the product matures, you roll out into what is sort of more traditional red teaming.

What’s going on?

6:55.878

[David Spark] Dave Kelly of SensCY said, “When the organizational culture becomes one of learning rather than punishment, the buy-in necessary for iterative red teaming will be there.” And Kane N. of Canva said, “The benefits of red teaming are misunderstood by many,” as we have been discussing.

He goes on and says, “Yes, there is the initial benefit of actively testing a product or feature but the biggest benefit from legitimately good red teams is what it adds to the security culture of the company.” Let me throw that to you. Because it’s interesting how you sort of explained it.

It’s like at the beginning it just could be a piece of paper and a whiteboard. It doesn’t have to be a bunch of guys hammering at your system at the beginning. So, I like that idea, that you’re sort of building a culture early on, aren’t you, Richard?

[Richard Ford] Oh, yeah. I read this quote in the discussion online, and I’m like I will buy a coffee or the beverage of their choice for either of these two fine folks because I think they just get it. Yes, it’s about learning. It’s about getting better.

It’s not about somebody coming in and making you look foolish. That’s one of the reasons… I think we’ve said this before: I don’t care about the vulnerabilities that I find in somebody’s system. I care about the vulnerabilities that they fix. Finding them is easy.

Getting them fixed, making them better, that’s exciting. And so I agree with both these positions. It’s got to be learning, not punishment. And it’s around maturity.

[David Spark] Steve, has red teaming…? Because I can see red teaming doing both. I can see it destroying a security culture, and I can see it building one. Where can you go wrong, and where can you go right, I guess?

[Steve Zalewski] So, if you’re a Fortune 500, which is where red teaming primarily is done, historically it’s been done to prove that you’ve got a problem. To verify that you have problems to be fixed, so it’s kind of a stick. And now, what we’re talking about here is let’s use it as a carrot.

Let’s actually understand that red teaming is not a technical exercise in dominance. It’s an appreciation that security is everybody’s problem, and how do we move it further up, and that you get to be part of the red team. And so we’re redefining its value proposition and how to do it.

And so once we do that, that’s part of the redefinition. This is where we’re coming from when we’re looking at what it means. So, when Richard says, “Hey, look, red teaming is something up front, something on the back end.” But then the other thing I’d say to Richard is, when you get out of the Fortune 500 and get to the small and medium enterprise, where there is no CISO, there is no CIO, 50 to 300 people, a very small security team if anything, that’s where I was thinking about red teaming. Which is, how do we redefine it? Because that’s where we need the help.

That’s where we need to be up front to be able to support those companies. It’s not really the Fortune 500s that have the problem.

[David Spark] So, let me actually… Richard, this is the perfect time to take you in and specifically talk about what Praetorian is doing because you guys actually refer to it more as offensive security, which I know the whole industry sometimes… And I’ve made a reference to this when I mentioned you at the beginning.

They see offensive security, and they go, “No, you go, and you attack the bad guys.” But that’s not what you do at all. Let me just clear that up. That’s not the case. But my guess is that offensive security…and you also say you’re the offensive security partner… Your goal is like what Steve was just saying, is making this a positive security culture experience for everyone involved at the onset, not at the tail end, to show that we can bust your defenses, which is what you were saying.

Yes, Richard? Am I getting this right?

[Richard Ford] That’s exactly right. There are red teams that operate along the lines of, “We’re going to show that we’re smarter than you. Or we’re going to show that we can…”

[David Spark] By the way, that has value. I don’t want to suggest that it doesn’t have value. It does, but that’s not what we’re talking about now. Richard?

[Richard Ford] Yes, and it does have value, although the intent is wrong. Right? When you’re working with a customer, it’s never to show that you’re smarter than them. It’s to make the customer better. And, again, I kind of like the sparring partner analogy.

If you have somebody tutoring you in a game, let’s say even if it was… Let’s use an example like chess. If I was playing chess against a much, much better player, if they just wiped me off the board sequentially, I wouldn’t learn much. But if they kind of walked me through it, if they were just a little bit better than me, if they showed me the error of my ways, I’m going to leave that game a much better player.

It’s this idea of sparring. It’s not about… It’s about the opposition coming to your level plus 1, not plus 100.

[David Spark] So, it’s red sparring, not red teaming.

[Richard Ford] Yeah, red sparring. Although teaming does have the word team in it, right?

[David Spark] It does have the word team in it.

[Richard Ford] So often we don’t approach it that way.

[Steve Zalewski] I’m going to push back for the sake of argument here. I understand that, Fortune 500 and trying to get along. I’m going to go back to small to medium business for you, Richard, which is they don’t have security teams. Okay, sparring isn’t the option.

It’s the CEO trying to have good enough security for him to be able to sign a contract with a Fortune 500. Okay? To be able to build out good enough security for his program because he is trying to grow the company. He is not at the stage where he’s just trying to make his security better across the board.

So, I’m curious as an expert, how would you see that? How would you position value for that type of a use case?

[Richard Ford] That’s a great question. I think that the small to mid-sized enterprises are horribly underserved by the security world, so let’s get that out there. But for those folks, it’s about giving them material risk rather than a list of, “Here’s 20,000 SSL vulnerabilities that aren’t really very exploitable unless you are the NSA.

Go worry about that.” It’s, “Let’s deal with material risk.” So, in other words, think of the things that you report to these companies like a dial, and as their security maturity improves, you turn the dial. You don’t always have the dial set to 11.

You start with the dial at where the customer is, whoever the customer is. So, if I was working with a smaller business, it’s about how can I reduce your material risk, Mr. Customer, and help you mitigate it. So, if I had a choice between pointing out ten risks to you that you can worry about all night or pointing out three risks to you and I’ll help you fix them, I would take the three over the ten every time.

[David Spark] I’m going to give you another case study, and I’m not going to mention the client. I had a project for a client, a very big phone company, and they needed us to rewrite literally 1,200 pages of website content that they had. It was a disaster.

An absolute disaster. Now, I joke… And I’m not likening it to specifically what Praetorian did, but I said, “We took something horrible, and we made it crappy.” In the sense that we leveled it up from the disaster it was. It was still not good, but it wasn’t a disaster.

Now, the thing is it’s hard to take someone from a nightmare to A+ overnight. Yes, it improved from crappy after that, but we had to get to one level above horrible quickly. And so the thing is, it’s like getting to the security poverty line. Can we just get to the security poverty line?

If we can get to that, which is far from ideal, then let’s move on.

[Richard Ford] That’s exactly right. And if I ever found a company of my own, the tagline will be, “We took something crappy and made it horrible.”

[David Spark] No, no, the other way around. We made it…

[Crosstalk 00:14:55]

[Richard Ford] [Laughs]

[David Spark] …and made it crappy. Horrible is below crappy.

Sponsor – Praetorian

14:59.879

[David Spark] Before I go on any further, I’m going to talk about Richard’s awesome company, Praetorian. They are just a spectacular sponsor of the CISO Series, so please listen to the awesome stuff they do in offensive security. I’m actually going to reiterate a lot of the things we’ve been talking about.

Praetorian is an expert-driven offensive security company whose mission is to prevent breaches before they occur. So, Praetorian, how do they do it? They help companies shift from an assumed-breach mentality, those are the ones who give up, to adopting a prevention-first cybersecurity strategy by actively uncovering vulnerabilities and minimizing potential weaknesses before threat actors can exploit them.

Now, from red team engagements like we talked about and attack simulations to continuously managed penetration testing, Praetorian’s human-led, tech-enabled suite of offensive security solutions allows organizations to proactively identify and remediate risk while staying in control of their constantly evolving attack surface.

That never stays the same. So, find out why the world’s leading companies trust Praetorian and create a future without compromise. Now, if you want to go to their website, let me spell it for you. It’s praetorian.com. That’s praetorian.com. Check them out.

What needs to be considered?

16:17.380

[David Spark] Lesley Heizman of Lucidworks said, “It depends on the level of testing and scale you’re approaching it at. If I am testing something in my design or overall architecture that I’m confident will remain fairly stable over time, I think the earlier the better.

If I’m trying to red team something that’s not fully baked yet, early in development, similar to QA, there is benefit to talking through or doing early testing on scenarios that might go wrong to get a rough idea. But it’s constantly changing, so I reach a point of diminishing returns where I need a finalized product to really test effectively.” So, this is the argument of whether to test early or not.

But more on this. Merritt Baer, who is the field CISO over at Lacework, said, “You can’t red team infrastructure that isn’t yet built. If you’re not deployed, it’s just validation of the build stage, which folks can and should be doing today.”

And Ryan Franklin of Amazon said, “We need more alignment on how the industry defines red teaming.” Aw, we’ve been talking about this. “Whether as threat emulation, penetration testing, or a combination thereof. Then we can have better conversations on where it makes sense to shift those resources.

We should keep red teams as a downstream function and realign them under the SOC to focus solely on driving improvements to our defensive posture.” So, these people are against moving upstream. All right? So, I wanted to group these all together. They were arguing… This is what… I knew this argument would come back.

And they just have… I think in their mind, they have a clear vision of, “This is what red team is. This is what you’re supposed to do.” Steve, what do you say to these people? They’re not wrong, but it’s just their vision.

[Steve Zalewski] So, what I would say about the push to shift left or upstream is: this is how most people think of red team. When you say red team, the visceral response is this. And part of what we’re talking about here is, as long as we say this, we’re limiting ourselves to the Fortune 500 and to a model of testing that has not served us well for the last ten years.

And we need to move. I think that’s what we’re starting to do today, which is to simply say: take red teaming, which most people say they can’t do and never will, and think of it as a form of QA. I really like, “It’s similar to QA.” No, it’s not just similar. It’s QA for security, where if you get it right, you push it all the way back upstream.

Because the better your non-production environments are at being secured, the better your production environments are, because that’s how it rolls forward. It’s a form of out-of-the-box thinking. And so this is why I really like this: starting from here, this is what does work, but these are all the things that don’t work for all these use cases, and we’ve got to get better.

So, do we redefine red teaming, or do we have to introduce something else to get people to look left?

[David Spark] And I think this comes down to the whole issue of security culture, is your security culture willing to accept the idea of moving this upstream. Richard?

[Richard Ford] Yeah, and it’s easy to sort of poke at things when I’m not in the trenches. The advantage of being very far downstream is it’s very cut and dry. It’s like, “I’ve done my red team. I can now get my sticker on the side of my box for compliance.” And I get that.

I get that compliance is an important outcome for a CISO because it keeps the business alive. You can’t do business if you’re not compliant. With that said, it feels like the way we use red teams turns into unnecessary spend, or a poor ROI.

You could get better ROI out of it if you change not just how you do it but how you think about it. So, by the time you’re in the SOC, for example, doing it downstream, that’s red team as whack-a-mole. The further upstream you go, it’s red team as strategy and a strategic driver.

I think strategy has tactics for breakfast.

[David Spark] That is an extremely good point. That’s a great way of putting it: with a better strategy, you’re not just batting them down as they’re coming in.

What would a successful engagement look like?

20:38.452

[David Spark] Ramki Balakrishnan of BNY Mellon said, “Could the answer lie in a combination of using breach and attack simulation tools that can run automated tests for widely used TTPs and use a red team for complex attacks requiring a human brain?” Tim Chase, who is a global field CISO over at Lacework, said, “As with any testing, moving left is something to be considered, but it has to be balanced with resources and stability.

Most organizations don’t have enough red team resources that can be dedicated to early testing. Also, for red teaming to be effective, it has to be performed on a stable application. If it’s still being developed and changed, you run the risk of frustrating developers, telling them to fix things that are no longer there.”

All right. I think Tim makes some really good points here. I’m going to start with you, Richard, on this. The way you defined red teaming at the beginning, as a whiteboarding exercise, is very different from the red team at the end, which is, “Let’s slam your system.” So, I think that’s kind of the extreme…the low to high end of what a red team is.

So, how do you address this very last thing that Tim Chase says, that if you do it early on, you’re just going to annoy the developers? Why are we dealing with something that’s not ready for prime time? It’s like when people want to see a draft of something I’ve written, I’m like, “It’s not ready yet.

I don’t want to show it to you.” What do you say to that?

[Richard Ford] I think it depends on, again, exactly how you think you’re going to experience red teaming. If red teaming is I’m firing up my [Inaudible 00:22:24] and I’m hacking on your APIs, then, yes, you shouldn’t do it very early. But if red teaming to you is, “I’m going to be the adversary, and we’ll talk about how this thing will get broken in the field,” then the sooner you figure out that it’s a bad idea to have an API that lets you run arbitrary commands, the better.

The further left you can get that fix in at the design stage, the better.

And, again, I think this comes down to the fact that red teaming is a very, very loose term. And because people use it as their sort of capstone sort of concept, you lose all the value of… And that’s partly why, by the way, we go with offensive security when we talk about the company rather than red teaming.

Because there’s this sort of strong concept of what it is, and it’s not. It’s broader than that. And so, yes, it’s about using the right tools at the right time because it’s all about maturity. And that was the other thing about Ramki’s comment, right?

I think BNY Mellon, very mature organization. And that’s why they’re thinking about it with what I think is exactly right. I can use breach and attack TTPs, bring the people in. That’s a very mature organization talking, and you can hear it.

[David Spark] You know, I’m going to bring it back to the analogy of writing something and not wanting to show my draft. In the way that you have defined red teaming, if I were to show you an early draft of something I had written, you’re not going to edit for all its grammar, and spelling, and everything like that.

What you’re going to look at is just structure alone. Correct me if I’m wrong. But the idea is if you’re looking at the structure, and you’re like, “You’re missing a core structural element of what you’re writing. You need to put that in,” or, “You need to take this out.” The advantage of telling me that early on is I don’t waste my time writing this whole thing that’s completely useless.

I think I’m on the right track. Steve, what is your take on not annoying the developers? Like, how do we do our version of early stage red teaming?

[Steve Zalewski] Yeah, so I’m going to stay with your analogy on writing. I’m taking the position that if you look at a business impact analysis, I’m trying to figure out which parts of the company are worth protecting and which ones are not.

Where the level of protection can be much lower because there’s little likelihood of compromise. To take your analogy, I would look at your rough draft and realize these three paragraphs are not relevant to the point you’re trying to make. Removing them in no way substantively impacts the point you’re trying to make.

Well, that’s what a business impact analysis is. Which parts of the company that I may be exposing do I have to protect a lot? And which ones might I protect a little, or be able to do something else with? And I would say that’s exactly what we’re doing here in looking at this upstream red teaming: looking early at the storyline and determining early in the draft what we should be doing rather than waiting until the end.

By then, we’re focused not on the content but on the commas, and the periods, and the prepositions. I think that’s one of the key things we’re trying to get at here, which is that early on in the story is the time to figure out what goes in the story and what doesn’t.

[David Spark] That’s a really good way of putting it.

[Richard Ford] Yeah, I love that. And I think you can apply that to defenses. You should be thinking about what defenses do I need to put in place in my perimeter or in my applications early, not, “I’ve put in all these defenses. Could you get in?” Because if 20 of your defenses are redundant, you’re just burning money.

[David Spark] By the way, let me… I’m going to throw this wrench at you two. I wonder if you’ve seen this with a client before. Maybe a client has already built out something. Do they ever run into that sunk cost fallacy of… Let’s just say that they build something to protect the M&Ms in the lunchroom that do not need to be protected.

And it’s like, “But we spent all this money on it. We can’t not use it,” kind of thing. Have you had that situation where you’re like, “Oh, we’re fighting the sunk cost fallacy right now.”

[Richard Ford] Well, I think we’re all human, and I think that’s one of the classic biases of human cognition. Sunk cost fallacy is a bad one, and we see it in the stock market all the time. Don’t get me started on that. So, yes, we absolutely see that.

It’s usually not about protecting M&Ms though.

[David Spark] That, I know…I’m sure I was wrong on that part. But go ahead, Steve.

[Steve Zalewski] I’ll throw one in where most of us actually do it today. It’s your SIEM. It’s how much information you pump into the SIEM compared to the value you get out of it. I know people will argue that they’ve got to keep two years’ worth of logs in there so that they can do a forensic analysis.

That’s an example, for me, of those paragraphs that need to come out, because you can probably get down to six months. But what they’ll say is, “We committed to two years. And if we go down to six months, then they’re going to say, ‘Well, you said it was two years before.

How come now it’s six months?’” So, you hold yourself accountable to a change in your strategy or direction, and you have to be comfortable with explaining why. And I think that’s why we end up with a lot of sunk cost investments, and that’s why I call out the SIEM as a case where we all can kind of look at it and realize it’s easier to just keep spending the money than it is to look at SOCV2 [Phonetic 00:28:04].

Closing

28:07.504

[David Spark] Very, very good point. This was a fantastic discussion, guys. I love this. Now, before we wrap this whole thing up, there’s one thing I have to ask you, Richard: which quote here was your favorite? Lots of good quotes. I know there were some you wanted to buy a drink for, so that might be it.

Which quote was your favorite, and why?

[Richard Ford] So, my prize here would go to Dave Kelly. I mean, actually there were a ton of good quotes in here, but Dave talking about organizational culture becoming one of learning rather than punishment, that’s the right way to think about security.

When we think about security as punitive, certainly we don’t learn. We hide. And so learn, learn, learn, be better.

[David Spark] The danger of the phishing test, by the way. Steve, your favorite quote and why.

[Steve Zalewski] I’m going to go with Ryan Franklin from Amazon. He says, “We need more alignment on how the industry defines red teaming, whether it’s threat emulation, penetration testing, or a combination thereof,” which is what we talked about today.

We called it for what it is. “Then we can have a better conversation on where it makes sense to shift those resources. We should keep red teams as a downstream function.” Yes, they do have a function down there. “And realign them under the SOC to focus solely on driving improvements to our defensive posture.”

To me, now that means: what should I be protecting, not what’s the best defensive posture I can create. So, that’s my line, and I’m sticking to it.

[David Spark] I like it. Very good. All right. Well, thank you very much, Steve. Thank you very much, Richard. Richard, I’m going to let you have the very last word here. Is there anything more you would love to say about Praetorian or how people can get in touch with you, learn about how to build a great offensive security posture?

What say you?

[Richard Ford] Praetorian is an amazing place with some amazing people, and it’s all about the people that you’ll be working with. So, reach out. We publish some really interesting research. It’s all available on the website for free. Come check out our open source offerings, our free secret scanner.

Then reach out to us. Reach out to any of the team members, and I think you’ll be surprised.

[David Spark] Yes. Let me also spell your site again. It’s praetorian.com. We love having you as a sponsor. And I had lunch with your CEO, Nathan, and he’s great, too. I agree. I’ve enjoyed all our times together, and also Nathan is awesome as well. So, yeah, check out Praetorian and the people at Praetorian as well.

We greatly appreciate them sponsoring. We greatly appreciate our audience as well. We appreciate your contributions and for listening to Defense in Depth.

[Voiceover] We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cyber security. This show thrives on your contributions. Please write a review. Leave a comment on LinkedIn or on our site, cisoseries.com, where you’ll also see plenty of ways to participate including recording a question or a comment for the show.

If you’re interested in sponsoring the podcast, contact David Spark directly at [email protected]. Thank you for listening to Defense in Depth.

David Spark
David Spark is the founder of CISO Series, where he produces and co-hosts many of the shows. Spark is a veteran tech journalist, having appeared in dozens of media outlets over almost three decades.