Technology has been a significant player in reducing phishing, but can it truly solve it? Will we always have to rely on humans to be the last line of defense?

Check out this post for the basis for our conversation on this week’s episode which features me, David Spark (@dspark), producer of CISO Series, co-host Geoff Belknap (@geoffbelknap), CISO, LinkedIn, and our guest Robert Wood (@holycyberbatman), CISO at Centers for Medicare & Medicaid Services.

Got feedback? Join the conversation on LinkedIn.

Thanks to our podcast sponsor, Living Security

Traditional approaches to security communication are limited to one-off training sessions that fail to take customers, regulators, and other external stakeholders into account and rarely effect long-term behavioral change. This report lays out a four-step plan that CISOs should follow to manage the human risk. It provides design principles for creating transformational security awareness initiatives which will win the hearts and minds of senior executives, employees, the technology organization, and customers.

Full Transcript

David Spark

Technology has been a significant player in reducing phishing, but can it truly solve it? Will we always have to rely on humans to be the last line of defense?

Voiceover

You’re listening to Defense in Depth.

David Spark

Welcome to Defense in Depth, my name is David Spark, I am the producer of the CISO series and joining me for this episode is Geoff Belknap, CISO of LinkedIn. Geoff?

Geoff Belknap

David.

David Spark

You see, we know each other’s names. [LAUGHS] It’s great. It’ll go up from there, I promise.

Geoff Belknap

We’ve been spending a lot of time in the green room together, so let’s try to warm this up. Hey friends, welcome to another episode, this should be fun.

David Spark

It should be, this is a great topic and I want to thank our sponsor, Living Security. I saw this from their CSO, Drew Rose, who’s actually been on our other podcast, and I said, “Hey, this is a really good topic,” and they were eager to sponsor it and we’re so thrilled. So here’s what it is. Drew asked the question of the community, exactly as our post teased, “Will there be a day that phishing can be solved by technology? Do we always need humans in the equation?” The discussion started to veer into how much responsibility should be put on tech versus people. I would assume all CISOs would like the problem solved by tech, yes Geoff?

Geoff Belknap

Absolutely. I think this is one of those innately human problems: humans are easily fooled and wish they weren’t, and honestly, it’s really easy to put together a good phishing email these days, so we have to rely on technology to make this go away. We’re never going to educate our way out of this problem.

David Spark

In the whole phenomenon of being duped, it’s weird in that when we see magic we’re essentially being duped and we enjoy that, it becomes entertainment. But when we’re being duped [LAUGHS] to take things away from us, we get aggravated and we feel very bad about ourselves too, for that matter. So this kind of rides the line of being duped and we don’t want to feel that, but it requires individuals to be involved.

Geoff Belknap

Yes, I think if you’re in a magic show or some sleight of hand, you get the watch back at the end of the show. If you fall for phishing, maybe not so much. You get your life savings stolen or your company gets impacted very negatively. So this is one of those things where we know humans can be duped and will be duped, and certainly a lot of the things we’ve done as an industry have reduced the amount of this happening, but we’re going to have to use technology to get out of this problem.

David Spark

Do you know the Jerry Seinfeld joke about magic?

Geoff Belknap

No. What is it?

David Spark

Jerry Seinfeld says “This is magic in a nutshell. Here’s a quarter, now it’s gone. You’re an idiot.” [LAUGHS]

Geoff Belknap

[LAUGHS] This feels like board meetings I’ve been in. Hey, let’s talk to our guest.

David Spark

Let’s talk to our guest. I’m so thrilled to have him on board, he’s the CISO at the Centers for Medicare and Medicaid Services, Robert Wood. Robert, thank you so much for joining us.

Robert Wood

Thanks for having me, excited to be here.

This is not just a security issue.

00:03:10:17

David Spark

Harold Walker, over at RCGT said “As long as human emotions can be manipulated, technology cannot completely prevent phishing.” Pretty much what you just said at the opening Geoff. And Jeffrey Johnson of Siemens Healthineers, said “Behind every phish/malware is a human and humans still beat technology by thinking outside of the box. Technology still lacks intuition and healthy suspicion.” I want to address that very last comment that Jeff said. Is there any way that technology can develop that, through any kind of AI or machine learning? Is that even conceivable?

Geoff Belknap

Sure. I think it’s conceivable. Are we there today? No and will that be the way we solve the problem? Probably not. In fact I think the reality is we just have to write off email and say this is just never going to be, and was never designed to be, a secure platform, a secure messaging platform where you can always trust the recipient on the other end. And certainly there are a lot of alternatives and we’ll probably see one become pre-eminent in the next few years, but what we can do is address one of the parts that really hurts people with technology which is authentication. We can make it much more difficult to phish credentials and I think we will.

David Spark

What say you on that Robert?

Robert Wood

Yes, I very much look at this as a humans-and-technology-working-together kind of problem. So building on what Geoff said about making authenticating to services harder, or at least putting more friction in the way of the attacker. You know, if you think about it, an attacker has to go through a series of steps to get to what they’re eventually after, it’s not just about engaging with a human on the other end of the email inbox. Maybe you’re getting them to click on a link and if you’re blocking certain malicious links, or maybe they can’t just steal credentials and then go, they have to go through 2FA, or maybe you’re pruning access and you’re using least privilege. There’s a lot of different points of friction that don’t impact usability, or maybe minimally impact usability, that can make the problem much more manageable.
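Robert’s layered-friction idea can be sketched as a toy attack-chain model. Everything here is illustrative and assumed (the step names, the probabilities, and the control effects are made up for the sketch, not taken from the episode): each step the attacker must complete has some chance of succeeding, and every control multiplies down the step it targets, so the chain as a whole gets much harder even when no single control is perfect.

```python
# Toy model: an attack is a chain of steps, each with an independent success
# probability. A control adds friction by scaling down the step it targets.
# All names and numbers below are hypothetical, for illustration only.
STEPS = {
    "lure_clicked": 0.30,         # user clicks the phishing link
    "credentials_entered": 0.50,  # user types credentials into the fake page
    "auth_succeeds": 0.90,        # attacker replays the stolen credentials
    "data_reached": 0.80,         # compromised account reaches sensitive data
}

CONTROLS = {
    "link_filtering": ("lure_clicked", 0.5),   # blocks half the malicious links
    "two_factor": ("auth_succeeds", 0.05),     # 2FA stops most credential replays
    "least_privilege": ("data_reached", 0.4),  # pruned access limits blast radius
}

def chain_success(steps, controls):
    """Probability the whole chain succeeds, after applying each control."""
    probs = dict(steps)
    for step, factor in controls.values():
        probs[step] *= factor
    p = 1.0
    for v in probs.values():
        p *= v
    return p

baseline = chain_success(STEPS, {})
defended = chain_success(STEPS, CONTROLS)
print(f"baseline: {baseline:.4f}, with friction: {defended:.4f}")
```

The point of the sketch is the multiplicative effect: three imperfect controls stacked along the chain cut the end-to-end success rate by roughly two orders of magnitude in this example, without any one of them having to be a silver bullet.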

David Spark

One of the basic rules of thumb that I’ve taught others about whether to be concerned or not is whether that information is coming outbound or inbound. Meaning outbound, I reach out to get the information, I call, I check, I do that. Inbound is someone calls you out of the blue or emails you out of the blue, asking you for sensitive information. That seems to be a pretty good dividing line as to whether to be concerned or not. Geoff, do you believe that to be the case?

Geoff Belknap

Oh yes. I think that’s certainly a great rule of thumb. I often tell people, at least for the human side, and before I go any further I want to be clear. There’s the technical side of phishing that really nobody should be individually responsible for making sure they don’t fall for, and that’s the things Robert was talking about like you do 2FA, you do some detection. Then there’s the human stuff, the scam stuff, which would be like whaling, like “I need you to wire me a check. I’m the CEO, trust me, I swear.” And on that side there are definitely tricks, like you’re explaining, and I think one of the ones that I use is, is there urgency? Is there a sudden false urgency to the demand? “I need you to wire me a check right now, don’t check into anything, don’t call anybody, just do this thing now, don’t think.” Those are good indicators that you’re being set up for a scam. I think there’s a bunch of things like that on the human side, and those are definitely things we can train people to identify and respond to. There are things that work there.

David Spark

I will tell you a perfect case. I had an urgency situation today and I’m going to ask you Robert, would you have done the same thing that I did?

Robert Wood

OK.

David Spark

I get a call from my mom’s home phone number today. So I answer “Mom?” and it’s another woman, who goes “Hi David, I’m at your mom’s house. I got the key, the alarm is going off.” I’m thinking, “Wait a second, this is something my mom would do, set the alarm and only give the key to the cleaner.” And also I don’t know who the new cleaner is by the way, she started with new people. By the way, I’m pretty sure I know the security code for the house, but it is going off. I hear it, it’s going off. So I give her my mom’s mobile number to call so then she’ll know. Then she calls me back, and also I try to call my mom, who doesn’t often answer the phone. When she calls back I say “I think I know the code,” and I gave her the code, turns it off. I then call my mom again, and can’t get her. I call her partner and my mom’s sitting right there so she confirms yes, she stupidly set the alarm and only left the key and so I did the right thing. But it was an inbound situation but there were clues like it was coming from my mom’s phone, that OK, this is legit. Robert, would you have done the same thing?

Robert Wood

Probably yes. I think following up with a person on the other end of this, your mom in this case. Typically when, let’s say, you get a call from your bank, the typical advice is call your bank on an expected line and go through a verification flow of sorts to verify legitimacy. In this case you did that, and I don’t know what your parents’ house looks like, but in my house I’ve had similar situations happen and I’ll go on Ring and check the inside cameras and see what’s going on, or things along those lines. So there’s layers, it’s not like a binary, you’re compromised, you’re not compromised situation. I think there’s layers to unpack with just about anything.

David Spark

Right, so I had this concern of it being an inbound issue, where I don’t like to give information, but it was from my mom’s phone, that’s legitimacy, and I also know my mom to do something like this. Geoff, would you have done something similar? And also, I followed up with mom eventually and yes, I was right.

Geoff Belknap

Yeah, look, I’m pretty paranoid, maybe I would have handled it a little bit differently. But I think the rule that I usually teach people about this is: think for a moment, especially if you’re being pressed for urgency, about what’s the worst that could happen if you slow it down? And the reality is, let’s take that example, if the alarm’s going off, you slow it down just to give yourself enough time to feel confident about it, because you’re never going to be 100 percent sure unless you drive over there, and that’s probably not realistic.

David Spark

3,000 miles.

Geoff Belknap

Yes, that’s going to be a long trip. So before you get there what happens is the police are going to show up and they’re going to do a fine job of verifying everything that’s happened. So even if you’re not sure, like in that case you had some indicators and that helped you, because you’re only going to get to a level of confidence. But if you’re ever unsure just let the worst thing happen. It’s actually in most cases, not the worst thing in the world. The police show up and respond to the burglar alarm going off. They know how to do stuff.

David Spark

But let’s just say someone broke into my mom’s house and called up and said “Hey, you got to turn off the alarm.” How much could they steal in the period of time I had to contact the other people and also knowing that someone’s in the house?

Geoff Belknap

Yeah I think that’s right too. You know, you have to do all that risk calculus for yourself. The unfortunate thing is it’s not always about theft, it could be about all kinds of things, who knows? But I think the trick is just to remember what’s the worst that happens if I don’t do it urgently? If it doesn’t go as fast? If we’re still working this out when somebody shows up? That’s OK.

David Spark

Yes, the risk calculus, I think, is a great way to phrase it and orient people’s thinking, because every situation is different. What somebody’s asking for in a particular scenario is different, the indicators that might give you confidence or not are different, the urgency technique used is different, that kinda thing.

What are they doing right? What are they doing wrong?

00:10:58:08

David Spark

Kelly Bray, over at OpenGov, said “I’m pretty sure that phishing remains the number one risk to organizations year over year despite the evolution of, and increase in, technology “solutions”.” And Drew Rose, the author of this discussion, from Living Security, said “More training doesn’t always equate to lowering risk either. It could in fact have an adverse effect.” You’re nodding your head on that Geoff. Do you agree with that? Can more training actually have an adverse effect?

Geoff Belknap

I mean I certainly feel that sometimes, not in every scenario, but you know, people go overboard with the phishing training, they write these elaborate lures, and they can sometimes make people feel bad. If it’s not increasing risk, it’s certainly increasing friction to the business; if nobody will respond to any emails, that’s not great. I think the other point of this, and this is probably what Drew was trying to say, is at some point people just give up on even trying and they’ll click and enter their credentials into anything because they have no idea and they’re just trying to get their work done. That’s not a scenario that you want to create in your environment.

David Spark

And one of the things that was brought up on another show is if you have a poor corporate culture, no matter what training you do if people don’t care, they don’t care.

Geoff Belknap

Yes, if you’ve lost them, you’ve lost them.

David Spark

Yeah.

Robert Wood

Yeah and I think piling on with training, with beating up on people with more phishing, stuff like that, I think the adverse effect is not necessarily an increased risk of phishing, it’s decreased trust in the security team and negative impacts on culture and that kind of stuff can have really big ripple effects around an organization, at least in my experience.

Geoff Belknap 

Going too far can destroy your company culture. Let me just underscore that. Going too far with security can destroy your company culture, and that doesn’t mean trying too hard, but it means if you’re aggressively making people the problem, you are going to create a self-fulfilling prophecy.

David Spark

I’m going to go to this point that Kelly Bray made, believing it’s the number one risk to organizations, which I don’t know if it is, but it’s the one that’s talked about the most. But there’s always new solutions for this and I guess as concern goes up, solutions go up as well. Is it the number one thing on your plate Robert, in terms of dealing with security issues?

Robert Wood

No [LAUGHS]. Pretty simple, straightforward answer. It’s up there. We try to think pretty systematically about this problem inside of CMS using attack trees and what might people actually be after, and then work backwards from the steps they’d have to take to get there. Then just layer in defenses along the way. To be honest our bigger problems are elsewhere and we’re not sinking 50 percent of our budget here.

Geoff Belknap

I don’t think anyone is.

Sponsor – Living Security

00:13:54:19

Steve Prentice

October is cybersecurity awareness month and Drew Rose, Chief Strategy Officer and co-founder of Living Security plans to make the most of his time to help companies understand the importance of cybersecurity in relation to one of their weakest points. Busy people.

Drew Rose

For many organizations that are just starting to dive into human risk management and trying to understand the behaviors that their end users bring to the table, it’s really the first point in time that they start to make investments around culture building and behavior change.

Steve Prentice

Drew and his team intend to be highly visible and this month they’re unveiling a new and intriguing experiential product.

Drew Rose

We have our first reality-themed security awareness training series called “Secure My Life.” It follows a female executive around at the office, at home and during her travel, and it really follows what our theme is for cybersecurity awareness month, which is work, play, live. The core tenet is that if you’re being intentional about being secure at home, and intentional about being secure on the road, that’s going to translate to you being more secure at the office. So we like to develop and distribute content not just focused on the enterprise in the office building, but in all areas and aspects of people’s lives.

Steve Prentice

For more information, go to LivingSecurity.com, that’s all one word, LivingSecurity.com and click on the link to get a demo of the next evolution in security awareness training, Human Risk Quantification and Reduction.

This problem doesn’t end here.

00:15:35:05

David Spark

Josh Sokol of SimpleRisk said “Everything comes down to a trade off between security and usability.” Ha, what we were referencing just earlier. You know, he mentioned whitelisting people but that would definitely fall into the not so usable. Joshua Wiley of Splunk said “If you somehow have some magic solution that will correctly identify phishing 100 percent of the time, and reject all emails at the border, your coworkers will still get phishing attempts outside of your corporate environment.” So does that happen in terms of, I’ll start with you Robert, like you’ve got to warn people not just on the company email, but just in their daily life? One of the things that we’ve talked about a lot on the show is that if you make the issue personal, then they’ll understand the value to the company.

Robert Wood

Yes, I think that happens quite a bit. You know, if you think about phishing as a sub-set of social engineering, people might get calls on their personal cell phones, they might get outreach on social media, things like that, and there’s a lot of avenues to get to somebody from a social perspective. And email is but one of those things, so I think that’s absolutely true. Especially if you think about the border of a company lessening over time, you know, with SaaS and BYOD and stuff. It’s not as well defined as it used to be, and so as traffic and access and stuff is flowing back and forth more fluidly, I think that definitely represents a real risk.

David Spark

That must happen to you all the time Geoff, and the line you said, Robert, was right on the money: the border between personal and corporate is very much blurred. Especially given all the technologies and things that you mention. I know many companies try to lock it down by creating containeresque type elements to it, but it still bleeds, yes Geoff?

Geoff Belknap

Yes. Look, there is no boundary for the bad guys. The bad guys see that they have a target and you work there, and whether you’re at work or not, you’re part of their plan to get access to that data. This is why to some extent, I think it’s good to talk about phishing, but what we’re really talking about is a methodology that people use to gain access to your credentials or to your systems. I think Joshua’s got a perfect point here that if suddenly sending emails as lures to get you to disclose your credentials stops working, they’re going to move to another way. And we’ve already seen this. There was a breach at another social media company not too long ago, and one of the ways that they targeted people was they spoofed the phone number to look like it was coming from an internal corporate line, and they said “Hey, we’re from the help desk, and there’s this issue. Can you call us or can you respond with your credentials or whatever it is.” But it’s just an avenue to get you socially engaged with that person and then lie to them enough that they will give up their credentials. And it works. So we’ve already seen this in companies that have sophisticated and mature security programs, so now it’s about building that holistic view. And again, as we said before, it’s about making it so that you can’t abuse those credentials, either through locking them to trusted devices, or doing other creative ways to make sure that they can’t be abused by a third party. And it’s hard.

Robert Wood

Right. Right, or in the case of whaling, that you can’t just automatically wire transfer $500,000 to some random person. There’s a check and a balance, whatever the asset is, there’s a little bit of hardening around it.

Geoff Belknap

Yes, this is why all these financial controls have multiple parties involved, or two keys to turn, so to speak. It really makes a difference now.

David Spark

So getting back to the original question, if we have the two-key methodologies set in place, maybe it’s not the attack but the outbound damage that technology could prevent, or are we just always going to need humans no matter what?

Geoff Belknap

I think good technology always props up the decisions and improves the decisions that smart humans can make. And indeed, sometimes technology can make those decisions but really we’re always going to just be enabling smarter decisions to be made. We’re never going to make these problems go away, it’s always going to be moving the cheese and moving the risk.

Robert Wood

Yes, and the two-key solution, I would put that in the bucket of a process change as opposed to something technology specific. I mean the way we think about it in the attack tree structure is you can train your people around a particular thing, you can add in technology solutions, boundary filtering, attachment filtering, that kind of thing, 2FA, and then there’s process changes to make particular attacks, whether it’s help desk attacks or something like that, a little harder to carry out.

What are we going to do now?

00:20:20:11

David Spark

Ang B, of CFC Underwriting said “We cannot solve social engineering by throwing more technology at it. Human training has to ultimately fight human deception.” And Kristopher Palmer, of Intel 471, said “Phishing is labeled phishing for a specific reason and most are susceptible to the sophisticated lures that are planted which makes technology and tools a bit difficult to thwart modern attacks by way of adaptable and bypass techniques.” Pretty much sums it all up. And Seth Shestack of Temple University said “Develop a culture where security is easy for the users.” And I’m going to tap into that very last line there. Geoff, what does “security easy for the users” mean to you?

Geoff Belknap

I think it’s a perfect point because it should be what all of us are trying to do, and I know a lot of us are definitely clued into this track. It means that it shouldn’t be something where I have to be the only defense, or the only control in place, and it means security should sort of fade into the background. It shouldn’t be something that I have to always have at the front of my mind. My tools should help me not make a mistake. And I’ll give you a great example. Google built something, and I know we implemented something similar at the last couple of places I worked, where if you type your credentials into a website that wasn’t known ahead of time, it would do a check and say “Hey, those are your domain credentials and this is not a website that we know your domain password should go into.” Yes, password alert, and it’s very effective, and it’s also very simple and doesn’t require me as a user to do anything special. I’m just going about my day, typing my credentials in like I think I need to, and it will solve that problem for me. It will alert the right teams that something bad has happened and bring me into that loop if I need to be. That’s what easy for the users should be, and it should be stuff where it’s easy to do the secure thing. It doesn’t take work to do the secure thing. I don’t know, what’s your perspective on this, Robert?
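The password-alert mechanism Geoff describes can be sketched in a few lines. This is a minimal, hypothetical sketch of the general technique, not Google’s actual implementation: the tool stores only a salted hash of the corporate password plus an allow-list of expected login domains (both invented here for illustration), and fires when that password shows up on any other site.

```python
# Minimal sketch of a password-alert-style check. Assumptions: we keep only a
# salted, stretched hash of the user's corporate password (never the plaintext)
# and a list of domains where entering that password is expected.
import hashlib
import hmac

SALT = b"per-user-random-salt"  # hypothetical per-user salt, generated at setup
TRUSTED_DOMAINS = {"sso.example.com", "mail.example.com"}  # hypothetical allow-list

def fingerprint(password: str) -> bytes:
    # Slow, salted hash so the stored digest is hard to reverse.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)

# Captured once at enrollment; the plaintext is discarded immediately.
CORPORATE_FINGERPRINT = fingerprint("correct horse battery staple")

def check_keystrokes(typed: str, domain: str) -> bool:
    """Return True if the corporate password was just typed on an untrusted domain."""
    if domain in TRUSTED_DOMAINS:
        return False  # expected place for this password, nothing to flag
    # Constant-time comparison of the digests.
    return hmac.compare_digest(fingerprint(typed), CORPORATE_FINGERPRINT)

print(check_keystrokes("correct horse battery staple", "evil-phish.example"))  # True
print(check_keystrokes("correct horse battery staple", "sso.example.com"))     # False
```

The design choice that makes this user-friendly is exactly what Geoff highlights: the check runs on every credential entry without the user doing anything special, and a hit can alert the security team rather than demanding a decision from the person mid-task.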

Robert Wood

Yes, I think that is spot on and when I’m talking internally or with friends about this, it’s like it’s hard to blow your own foot off, or it’s hard to make a mistake. So it’s easy to do the right thing, hard to do the wrong thing, basically the inverse of one another, but going back to the whaling example, it’s hard to just make a quick decision based on emotional reaction and just wire a bunch of money off. Because you have to maybe make a couple of phone calls, it has to go through a quick check, or if I’m just trying to send my credentials to something that’s maybe not my SSO or not a known expected website, then I’m going to get the browser hijack “Hey something is wrong here,” flow. I think summing it up as security kind of fades into the background and the users can go about their normal flow, is clutch and I think in order for security professionals to tap into that they have to understand how users work. We have to have empathy for how users want to do their jobs on a day to day basis. The sort of preferences that they have, the things that they need to do, things like that. And I think that’s often times something that we rush past in this field. I know I’ve been guilty of this a number of times in my own career, but pausing to really understand what people are going through, why they don’t understand security, what their comfort level is with technology, things like that, and then figure out a way to work in the background or come alongside of that is a really powerful technique.

Geoff Belknap

Yeah, good security is about helping the business win.

Robert Wood

Yeah.

David Spark

But that’s a really good point you just made there, in that our knee-jerk reaction when someone doesn’t do something, or doesn’t get it, is like, “Ooh, why don’t you get it? I’ve explained it to you 20 times.” So digging one level deeper, someone screws something up, they didn’t do something, you’ve educated them umpteen times, do you actually ask, “Is there a reason you didn’t remember to do this? Or was there something pushing you to not even think about this?” Like, how do you figure that out, without essentially insulting the person for that matter?

Robert Wood

Yes, so when things like that happen in our environment, and I’ll take it away from phishing, if something goes wrong and it’s a repeated thing, I am usually pushing my team to take an extreme ownership approach, to say “What is it about the way that we’re engaging, the tools that we have, the policy that we have in place, the guard rails that they have around their flow, their process, that allowed them to make this mistake time and time again? Why are we allowing them to fail in the same way more than once?” We should be going through that post-mortem retrospective process ourselves before we put the onus on the user to take on more cognitive load, in my opinion.

David Spark

Have you actually found a mistake that you made in a case like that, where, as you describe, there’s a repeated problem? “Oh, you mean we’re not doing this? Let’s implement this now.” Were you able to solve that issue and stop it from repeating?

Robert Wood

Oh 100 percent. We find stuff like that all the time [LAUGHS] because I think there’s an assumption of expertise and an assumption of, we’re security, so we know what we’re doing, or we’ve considered all the angles. And we are just as fallible as any other field. You know, developers are really good at writing code, doesn’t mean they write perfect code all the time. CPAs are really good at handling the books and handling the finances. That doesn’t mean they get it right every single time, and so it doesn’t mean we get security right all the time or that we consider it from all angles. And so I mean we absolutely find things like that, and I think it’s about, Geoff mentioned this earlier, around culture, but it’s about, in my opinion, creating an open, accepting, psychologically safe and learning culture where the users are able to engage safely and freely with the security team. But the security team is also really taking ownership and always striving to learn more about their users, learn more about their own shortcomings, things like that. And when all of those things come together, culture eats strategy for breakfast.

Geoff Belknap

Yeah I think it speaks to the really important grounding that every security professional needs, because sometimes we can come across as surgeons that are like, we fix ACLs all the time. I don’t know why people keep coming to us to fix their ACL, why can’t you do the surgery yourself? That’s ridiculous, nobody’s an expert surgeon that can do that themselves. Well sometimes in security, I hear security people saying things out loud that sound exactly like that. And the reality is, it’s up to us to make this easy, it’s up to us to make it safe. And it’s up to us to provide the environment where people can play the sport or do the thing that they need to do without having to worry about repairing their own ACL.

Robert Wood

Yeah that’s a great example.

Closing

00:26:53:22

David Spark

Alright, and that wraps up our discussion here, but we’re not completely finished, because as always I ask, “What was your favorite quote and why?” I’ll start with you Geoff. Which quote was your favorite and why?

Geoff Belknap

There’s so many good ones here but Ang had a quote that I think sticks out to me, which is “We cannot solve social engineering by throwing more technology at it. Human training ultimately has to fight human deception.” And I think that’s right with a caveat. I think humans do need to be trained to recognize signs of a scam or a trick or something that’s not quite right. I do think technology can bring to bear a lot of solutions against giving your credentials away or being in a position where that’s going to be a problem.

David Spark

Good point. Robert your favorite quote and why?

Robert Wood

So I think my favorite came from Joshua Wiley at Splunk: “If you somehow have some magic solution that will correctly identify phishing 100 percent of the time and reject all emails at the border, your co-workers will still get phishing attempts outside your corporate environment.” I think that highlights the complexity of the social engineering problem and the fact that you can’t just guard the front door and make sure you lock the front door, you also have to make sure your windows have locks, and you lock the back door.

David Spark

And make sure their windows are locked at home as well.

Robert Wood

Yeah exactly. So it’s a very layered, multi-dimensional problem and I think this quote really speaks to that.

Geoff Belknap

And don’t forget to give the cleaner an alarm code.

Robert Wood

That’s right. That’s right.

Geoff Belknap

[LAUGHS] That’s a real takeaway.

David Spark

Give the cleaner an alarm code. Well with that said, and hopefully my mom will listen to this episode too and learn something, which by the way, she said she was going to. My mom does listen every now and then.

Geoff Belknap

That’s great.

David Spark

She says, “Well I just like to hear your voice.” I go, “Well mom, you can call me anytime. I can talk to you, you can hear my voice then.” [LAUGHS]

Geoff Belknap

And now she can have weekly episodes with you on tap.

David Spark

Talking about things that she doesn’t have a great interest in [LAUGHS].

Geoff Belknap

Ah, it’s the reason Rob and I are here. It’s just the sound of your voice. It’s so soothing.

Robert Wood

That’s right.

David Spark

Exactly. I’m sure she likes the sound of your voices as well.

Geoff Belknap

I’ll ask.

David Spark

I want to thank both Geoff and Robert for being awesome on this episode. Robert, I always ask our guests, “Are you hiring?” Are you hiring?

Robert Wood

I actually am, and hiring in the federal government is not a super common thing. So I have probably eight roles that are about to get posted over the next couple of weeks, which is exciting, so everything from contracting to pen testing to product security, all over our cybersecurity program here.

David Spark

Get in contact with you where? What’s the best way? LinkedIn?

Robert Wood

LinkedIn is probably best, plug Geoff. [LAUGHS]

Geoff Belknap

Thank you.

Robert Wood

@HolycyberBatman, I am on LinkedIn, you can track me down there.

David Spark

OK. That’s Robert Wood, the CISO at Centers for Medicare and Medicaid Services, and my co-host Geoff Belknap, CISO over at LinkedIn. You have been a phenomenal audience as always, thank you so much for contributing and listening to Defense in Depth.

Voiceover

We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cybersecurity. This show thrives on your contributions. Please write a review, leave a comment on LinkedIn or on our site: CISOSeries.com where you’ll also see plenty of ways to participate, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at David@Cisoseries.com. Thank you for listening to Defense in Depth.