Security That Accounts for Human Fallibility

We expect our users to be perfect security responders even when the adversaries are doing everything in their power to trick them. These scams are designed to make humans respond to them. Why aren’t we building our security programs to account for this exact behavior that is simply not going to go away?

Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark (@dspark), the producer of CISO Series, and Steve Zalewski. Our guest is Ken Athanasiou, CISO, VF Corporation.

Got feedback? Join the conversation on LinkedIn.

HUGE thanks to our sponsor, Code42

Code42 is focused on delivering solutions built with the modern-day collaborative culture in mind. Code42 Incydr tracks activity across computers, USB, email, file link sharing, AirDrop, the cloud and more. The SaaS-based solution surfaces and prioritizes file exposure and data exfiltration events. Learn more at

Full transcript

[David Spark] We expect our users to be perfect security responders even when the adversaries are doing everything in their power to trick them. These scams are designed to make humans respond to them. Why aren’t we building our security programs to account for this exact behavior that is simply not going to go away?

[Voiceover] You’re listening to Defense in Depth.

[David Spark] Welcome to Defense in Depth. My name is David Spark, I am the producer of the CISO Series. Joining me for this very episode is Steve Zalewski. Steve, say hello to our friendly audience.

[Steve Zalewski] Absolutely. Hello, everyone.

[David Spark] That’s Steve saying hello. Our sponsor for today’s episode, a regular sponsor of the CISO Series, love having them back on again, it’s Code42. They’re the insider risk management leader. More about that later in the show. Steve, on LinkedIn you asked, “Why do so many security practitioners treat our users as children to be managed instead of adults to be educated and assigned a level of accountability?” Mistakes happen, yet we don’t build that reality into the security programs we design. Why aren’t we creating a security program that accounts for behavior? And like I mentioned before, these individual attacks are designed to trick people. That is their job. And guess what? They succeed. So, we need a security program that understands that. Right, Steve?

[Steve Zalewski] Yes. And I put this out there when I got frustrated one day at Levi’s as we were talking about this, and I was getting ready to go up to the executive team and talk about phishing attacks and what we’re doing. And we were constantly having this conversation around how we hold our people accountable, to understand that this is kind of part of the job. But we always wanted to dumb it down, we always wanted to make excuses for them. And so part of this conversation today is the expectations we have on our people, right? How much do we expect them to understand about cybersecurity now relative to phishing attacks and everything, which is basically the first line of defense?

And then the second part of this I hope we will talk about today is since people always make mistakes, right? They will click. What are we doing to design our systems for failure, not success? So, assuming they’re going to click, what are we doing? Leaning more to not prevent the attack, but how do we manage the exploit of the attack, the containment of the attack? So, there was like two different perspectives I was bringing to this conversation, of which I think folks have done a great job teasing out.

[David Spark] I agree, and we’re going to get to some of those comments in just a moment. But first I’d like to introduce our guest today who I met in Chicago where I was doing a show and so thrilled that he’s joining us. He used to be a competitor because his company used to make Wrangler jeans, direct competitor to Levi’s, but no longer.

[Steve Zalewski] So now we’re friends.

[David Spark] And also, you’re no longer with Levi’s as well. So the two of you can talk calmly and patiently and not scream at each other.

[Steve Zalewski] [Laughter]

[David Spark] Which you don’t do. We’ve talked about this before, this is why there’s all these ISACs for the different industries because people in directly competitive industries still get along when it comes to cybersecurity. That’s a whole other subject altogether. Let me introduce our guest. He is the CISO of VF Corporation, Ken Athanasiou. Ken, thank you so much for joining us.

[Ken Athanasiou] Hey, thanks for having me. Good to see you, Steve, as well.

Where are we falling short?


[David Spark] Osama Salah of Cloud Security Alliance said, “Human failure is just a symptom of a broken system that needs to be changed.” Now, I think also human failure just happens inevitably, but too much human failure may be a symptom of a broken system. John T. of Quest Software said, “Seems to me we underestimate the depth, breadth, and dynamics of culture change and how people change and how we change people, so easiest thing to do is just blame.” Ah. That’s a key line right there. Steve, I’m going to throw that to you. It is a lot easier to just blame users rather than to get them up to speed, isn’t it?

[Steve Zalewski] Yes.

[David Spark] And it doesn’t make you more secure though, blaming users?

[Steve Zalewski] It doesn’t make you more secure, and here’s the other part of that: when they fail, is it now a punitive exercise? So, they don’t feel good about how they’re trying to change their behavior. We hit them with punitive training or extra testing, and so we make their life harder, when what we should be doing, again, is upping the expectation and changing it from punitive to supportive.

[David Spark] And just to add to that, and I’m going to bring you in, Ken, in just a second, I talked about this on another episode. A good friend of mine runs HR at a big company. They just let a mechanic go because this mechanic repeatedly kept failing phishing tests, over and over. And Andy Ellis said that’s horrible, because that’s a failure of the security department: not building a program for this mechanic, who can’t seem to deal with phishing and didn’t even need email for his core job, a system where he could continue to be a mechanic rather than have to worry about the security protocols. So, Ken, it is easier to just blame, yes?

[Ken Athanasiou] Of course it is. The problem with blaming people is, as Steve said, it doesn’t really solve the problem, right? And firing a mechanic because he can’t seem to deal with email is probably not a great idea either. Humans at their fundamental are fallible creatures. We have a tendency to just not do the right thing all the time.

[David Spark] And also, we have a tendency to want to trust.

[Ken Athanasiou] That’s correct. To your point earlier, that’s exactly why these scams and these phishing events work so well is because people want to be helpful, they want to trust, they want to respond quickly to an urgent message because they’re trying to be helpful, and that’s exactly what these folks prey on.

[Steve Zalewski] And I want to dovetail on that, because there’s another area where I think we’re falling short, which is that technology isn’t stepping up either. We are really good at trying to identify malware, but I think there’s a lot of opportunity to use natural language processing to understand the context of the communication that’s going through, to determine your current state of mind and how the attackers are taking advantage of you when you’re vulnerable, and we’re not doing a good job there. And so I’ve kind of pushed on the developers of new technology to say, “Where’s that natural language processing? Where’s that capability to account for the fact that humans are weak? We should be able to do better at understanding when they’re weak and make that a much better part of our exploit controls.”
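
A toy illustration of what Steve is asking for might look like the following. This is purely a hypothetical sketch, not any vendor’s implementation: real products would use trained language models rather than a keyword list, and the patterns and threshold here are invented for the example.

```python
import re

# Hypothetical sketch: score a message's "pressure" language before it
# reaches the user. The pattern list and threshold are made up for
# illustration; production tools would use trained language models.
URGENCY_PATTERNS = [
    r"\burgent\b", r"\bimmediately\b", r"\bwire transfer\b",
    r"\bgift cards?\b", r"\bverify your account\b", r"\bpassword expires?\b",
    r"\bact now\b", r"\bdo not tell\b",
]

def urgency_score(body: str) -> float:
    """Return the fraction of urgency patterns found in the message body."""
    body = body.lower()
    hits = sum(1 for p in URGENCY_PATTERNS if re.search(p, body))
    return hits / len(URGENCY_PATTERNS)

def flag_for_review(body: str, threshold: float = 0.25) -> bool:
    """Flag messages whose pressure language crosses a tunable threshold."""
    return urgency_score(body) >= threshold
```

A flagged message wouldn’t be blocked outright; it might get a warning banner or routed for a second look, which fits the "manage the exploit, not just prevent it" framing.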

[Ken Athanasiou] To take that a step further, I mean, one of the things we should be doing is actually looking at human behavior. How are we dealing with how people normally communicate through these channels and how can we detect anomalies in these channels to be able to alert and respond?

Why are we blaming users?


[David Spark] Ayoub Fandi of GitLab said, “It’s tough to understand the end user’s position and think about what their workday looks like and how to embed yourself instead of disrupting and antagonizing.” And Brennan O’Brien, CISO over at Genesis Financial Solutions, said, “We talk to them, or worse down to them, with our training instead of making them part of our defenses.” And one of the things I want to bring up is the need for a process. That no matter what email that comes in, the classic techniques of getting you to click on things or to send money or whatnot, that if there was a process in place that everyone did no matter what the situation, then it could deal with this stuff and, again, your people could be your line of defense. Right, Ken?

[Ken Athanasiou] Yeah, certainly. I’ll go back to what I said earlier though – humans are fallible – and when you have people in a system, you have to have controls to try and help those people do the right thing. So, when you talk about a broken system, or you talk about how people are being blamed, and how you communicate to those folks, it’s all well and good, but again, you have to put the right controls in place because you have to expect that people are going to do the wrong thing, they’re going to make these mistakes. And again, blaming the users for being human isn’t effective. You have to understand what their behavior is, you have to understand how you can protect them from themselves. We have smoke detectors in houses, we have railings on stairs, we have all of these other things that protect people from themselves because they will fail.

[David Spark] Steve, give me an example of a railing or a smoke detector that you can put in place.

[Steve Zalewski] It’s standard out there and we used it: the little button you can click in the email that says, “I think this is phishing,” or “There’s something wrong about this.” That is beautiful in my mind because the bad guys are going to make mistakes. If they have a phishing campaign and they hit three or four of my financial people, or three or four of my HR people, there’s a high likelihood that one of them is going to click that button and realize something isn’t right. So, it’s making it harder for them to just do those larger campaigns; now they have to target individuals. So, what I’ve done is made it harder for them, they’ll go somewhere else, and I’ve made everybody in my company part of my defense. When they click on that, they get a thank you, and sometimes they’ll get a call from us. Because if it was legitimate, we’ll tell them how much they saved us, and if it’s a false positive, we don’t care.

[David Spark] That’s going to reinforce behavior, and then you see that sort of cranking up over time, yes?

[Steve Zalewski] Yes. And my goal was I wanted to see more people reporting potential bad emails than I worried about the ones that were legit. Meaning I didn’t want people to be 10 out of 10, which is only report it if you’re sure. I was happy with 95% of it being false positives, but I wanted people to feel comfortable to click because that was their job was to always, if in doubt, click and let us take a look.

[David Spark] To click the Warning button, essentially?

[Steve Zalewski] Right.

[Ken Athanasiou] The other aspect of this, right, and this is directly in Ayoub’s quote, right, you have to understand the user’s position, understand what they’re doing on a daily basis. They’re not looking out for phish on a daily basis, they’re just not. They’re trying to get their job done, they’re moving quickly through their email, through their tasks. And when they accidentally click on a phish, you don’t really want to smack their hand. You want to have the right technical controls in place to protect them from that sort of activity. Going back to the mechanic earlier, if you do have someone that is just constantly clicking on that shiny red button over and over again, even after all the training, etc., you have to have a methodology and an approach to insulate that person from themselves.

[David Spark] Yeah. And this was the thing that I brought up with the other CISOs: there’s this concept of a virtual padded room that protects somebody from themselves. Because this guy who’s a mechanic had lots of security awareness training and just kept clicking on those darn phishing emails, but email actually wasn’t core to his job responsibilities; he still needed it for financial stuff, for HR stuff, things like that. We want to keep this mechanic, they’re just a danger to themselves in the digital world, what can we do? So, how could you create a virtual padded room for that kind of person, Ken?

[Ken Athanasiou] And you absolutely can. You can restrict who’s allowed to send to that particular person, you can disallow them from sending emails externally, you can disallow them from receiving emails externally, you can put in a known good type of list so they can only receive emails from these domains externally, etc. So, there’s a ton of things that you can do. Now, that requires overhead and cost. Every bit of complexity you put into the environment, you have to pay for, you have to support, etc.
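
Ken’s “known good” list could be sketched as a simple per-user delivery rule. This is a hypothetical illustration only: the addresses, domains, and policy table are invented, and in practice this would be enforced by mail gateway policy (transport rules, secure email gateway configuration) rather than custom code.

```python
# Hypothetical sketch of a "padded room" mail policy: restricted users
# receive external mail only from an approved list of domains.
# All names and domains here are invented for illustration.
RESTRICTED_USERS = {
    # recipient -> external domains they may receive from (payroll, benefits, etc.)
    "mechanic@example.com": {"payroll-provider.example", "benefits.example"},
}
INTERNAL_DOMAIN = "example.com"

def should_deliver(recipient: str, sender: str) -> bool:
    """Decide whether a message reaches a recipient under the policy."""
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain == INTERNAL_DOMAIN:
        return True  # internal mail always flows
    allowed = RESTRICTED_USERS.get(recipient.lower())
    if allowed is None:
        return True  # user is not in the padded-room policy
    return sender_domain in allowed  # external mail only from the known-good list
```

As Ken notes, every rule like this adds overhead to maintain, so the allowlist would need an owner and a review process.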

[David Spark] But you can weigh that against the cost of, well, this person’s a good mechanic, what would it cost to get another one.

[Ken Athanasiou] Exactly.

[David Spark] Like it’s really cheap to get another mechanic and this person’s so good that, all right, I guess we’ll let this person go. I guess it’s always a cost decision, yes?

[Ken Athanasiou] It is, it is, and it’s a risk decision. But there’s absolutely a number of things you can do to better protect your folks from themselves.

[Steve Zalewski] We’re going to go one more on that, which is it’s a profit decision. If I’ve got a mechanic who is doing a great job, getting jobs done 75% of the time, and I’m billing him out, okay, we’re obligated to find defense in depth, because he is the guy we’re here to protect from a business perspective, because he’s key to the business.

[Ken Athanasiou] That’s right.

[Steve Zalewski] So, it’s changing the conversation, again, from one of punishment because the guy just isn’t good to, “No, that’s our job: to put a bubble around him and to lower the bar of our ability to do these kinds of bubbles where it’s in the best interest of the business, not in the best interest of security.”

[Ken Athanasiou] 100%. Absolutely agree.

Sponsor – Code42


[David Spark] Before I go on any further, I do want to mention our sponsor Code42. They’ve been a phenomenal sponsor of the CISO Series and I’m very thrilled to tell you about what it is that they do and why you need to know more about them. Code42 is the insider risk management leader addressing the full spectrum of data loss, whether it’s malicious, negligent, or even accidental. Code42 delivers a SaaS solution built with the modern-day collaborative culture in mind. Did you know that there’s a one in three chance that your company will lose IP when an employee quits? Yikes! Economic uncertainty has created workforce volatility, and a lack of confidence in job security means that many employees are taking action to protect themselves, gaining a competitive edge by downloading IP, customer lists, or sales strategies. All of this makes data protection even more challenging. Yikes!

So, Code42 has a product. It’s called Code42 Incydr. It gives you the visibility, context, and control – good combination – that you need to stop data leaks and IP theft. With Code42 Incydr, you can see what data’s exfiltrated without setting up strict classifications, eliminate excess alerts for your security team, contain data leaks without disrupting employee productivity, and maintain compliance with security standards and corporate policies. All sounds pretty good. So, for more, go ahead and visit to learn more about Code42 Incydr, a new approach to data security.

What are we going to do now?


[David Spark] Simon Goldsmith at OVO said, “What we should be striving for is the psychological safety to make mistakes and own them.” Which very much leads to what you were saying about that button, Steve. Bryn Standley-Ossa of Segment said, “There needs to be greater visibility to inform people on what needs to be focused on. Taking pieces from a bunch of different places/platforms and aggregating them to understand how they fit into the bigger picture that is an employee’s risk level can be a great place to start.” So, I like this idea, Ken, I’ll start with you: the more they understand what’s going on, the more they’ll care about it. And the story we’ve heard – Mike Johnson brought this up a long time ago – is if you can make security risk personal, like understanding how to protect themselves personally, they’ll want to start to understand it for the business. Yes, or does more explanation need to be had?

[Ken Athanasiou] Yeah, there’s actually two aspects to this, right? So, when we talk about what Simon said about psychological safety, again, I go back – humans are fallible creatures so they’re going to make mistakes. We have to make sure that it’s okay for them to make mistakes and that we protect them from themselves. And we educate them that they made a mistake and try and get them not to make those mistakes again.

The other aspect though that Bryn is talking about is really around understanding – and it goes back to what I said earlier – understanding the employee’s behavior, how do they normally communicate within the business. Let’s take all these disparate pieces and platforms that we have that really, we can get a picture of their normal behavior, and then when we see an outlier for that behavior, we can raise a risk level on that particular employee, that particular transaction. We can say, “That one doesn’t look right. It’s not normal. Bryn doesn’t normally email the CEO at three o’clock in the morning, that’s unusual.” Now, it could be perfectly valid, it could be something that there’s nothing wrong with it, but you use that along with all the other pieces of telemetry that you can gain in the environment to determine if that particular communication is risky or not. And that helps you understand where you need to focus your efforts on response, and it also helps you understand how to educate folks.
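
The baselining Ken describes could be sketched roughly like this. It’s a minimal illustration using only one signal, the hour an employee sends email; the class name, history threshold, and z-score cutoff are invented for the example, and real user-behavior analytics platforms model many more signals than this.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical sketch of behavioral baselining: learn each employee's
# normal sending hours, then flag sends that fall far outside them.
class SendTimeBaseline:
    def __init__(self):
        self.hours = defaultdict(list)  # employee -> observed send hours (0-23)

    def observe(self, employee: str, hour: int) -> None:
        """Record one observed send hour for an employee."""
        self.hours[employee].append(hour)

    def is_anomalous(self, employee: str, hour: int, z: float = 2.5) -> bool:
        """Flag a send hour more than `z` standard deviations from the norm."""
        seen = self.hours[employee]
        if len(seen) < 10:
            return False  # not enough history to judge
        mu, sigma = mean(seen), pstdev(seen)
        if sigma == 0:
            return hour != mu
        return abs(hour - mu) / sigma > z
```

As Ken says, a flag like the 3 a.m. email could be perfectly valid; the anomaly is one piece of telemetry that raises a risk level, not a verdict on its own.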

[David Spark] Steve, you have repeatedly said, “Understand the business as much as possible.” Go, when you worked at Levi’s, physically go in the store, see how they’re selling things. How much have you worked with your team to have your team go to employees like, “Describe your day. What are you doing? What’s the communications you’re getting on a weekly?” How much of that sort of deep I guess minutiae do you start to understand of employees?

[Steve Zalewski] So, I was a huge advocate for that, I was always, “Get into the business,” and would use my own experiences to help them understand why that was so critical. I even went so far as I would generally once a month bring a vendor in that I thought had some interesting technology and actually spend 90 minutes and give him a tour from the inside out, so they actually understood what my job was. That was hugely valuable for them to actually see what it was that Levi’s had to do. I had to protect the creative process, not secure the creative process.

But there’s another part and it’s what I started with which was [Inaudible 00:19:31] accountability on the individual, have a certain amount of responsibility in protecting the company, and where they can’t, won’t, don’t, pick your adjective, our ability to have defense in depth includes adding additional friction for them to do their job. Because the larger responsibility to the company may dictate that we do MFA for them every time, or we dramatically restrict their email, or we give them a Chromebook, whereas everybody else has a regular laptop, Windows or Apple machine. Because we’ve had to take extraordinary defense in depth to be able to account for the needs of the business as well as the aptitude of the user. We need to find that balance, but we need to put that front and center.

[David Spark] And Ken, we’ve already touched on this before, but there’s a lot of variabilities here in that there’s the person who can’t seem to protect themselves, that mechanic I gave, and then the people who are the high targets – your C Level executives and the people working in Finance and Accounting – and so that’s usually a pretty darn good place to start, yes?

[Ken Athanasiou] Absolutely. It’s a great callout, great point. Whaling and spear phishing are real, and you have to be very cautious about your high-value targets within your environment. You need to layer in additional controls for those folks, so there’s absolutely some additional things that you do for those. And you have to be cautious with those repeat-clicker type of events. You want to make sure that you’re not, again, instilling punitive measures on them. You’ve got to make sure that you’re helping them.

What would a successful engagement look like?


[David Spark] John Scrimsher, CISO of Kontoor Brands, who I also met in Chicago with Ken as well, he said, “Great question,” to you, Steve, “I have found that human nature when faced with the option of doing what is ‘right’ versus ‘what is easy’ will always tend to do what is easiest for the task at hand.” And we talk about people just trying to get their job done. “While we try education and other tactics to address this, ultimately the best answer is to ensure that what is right is also the easiest option. We need to think about what we are doing from the user’s standpoint rather than trying to force the user to see it from ours.” Extremely good point here, and we’ve definitely addressed this.

And we close with Jonathan Waldrop of Insight Global who said, “I don’t think IT/InfoSec does a great job of simplifying things, so it feels overwhelming to the end-user/customer, and when you’re overwhelmed, things turn to, ‘Make it easy for me. Just tell me what I have to do.’ An individual’s responsibilities with technology can’t be distilled into a checklist – critical thinking is required!” So, let’s close with this discussion of simplicity. How do we make things simpler, Steve?

[Steve Zalewski] I would say you make things simpler not by dumbing it down and giving the user a get out of jail free card. You make it simpler by setting the expectation on the user, on the key business processes they’re accountable for, and then introducing friction when necessary to make sure you find that balance. So, that’s where I was struggling in the beginning is we don’t seem to want to do that. We always want to seem to just say, “It’s our job to make security easier,” and unfortunately, that only goes so far. There is an expectation on the user, and I think we need to be more upfront with making that expectation clear as part of the training and then introducing friction as necessary so that everybody realizes, yes, indeed, to drive a car I do have to learn how to do it. I can’t just go in, get in an accident, and say, “Oh, well, I didn’t know I had to learn how to drive.”

[David Spark] That’s a really good point. We keep talking about blaming us, blaming the user, but we all have a certain level of responsibility, self-responsibility. Right, Ken?

[Ken Athanasiou] Yeah, 100%. We are responsible for what we do, we’re supposed to know exactly how to handle all this stuff, but again, we will make mistakes. I really love John’s commentary here; I think it’s absolutely spot on. If the right thing to do is the easiest thing to do, then that’s what the users will do every time. It nails it. Our job is to make sure that our users certainly understand their responsibilities to be good custodians of our data and do the right things as they’re interacting with our systems, or at least attempt to do the right things as they interact with our systems.

However, I do think it is our job, not just security’s job but technology’s job, to make these users’ experience the best it possibly can be, the smoothest it possibly can be, and the easiest it possibly can be. It’s extremely important that we look at user experience in all of the things that we do, and we understand how these folks are going to interact with these systems. And again, we put the controls in place – those smoke detectors and the railings and etc. – to make sure that we keep them between the guardrails and don’t let them just go off the road.

[David Spark] And the smoke detector is a really good example, because I was thinking about this. They tell you to change the battery once a year. I don’t know if you’ve learned this trick, but they remind you around Daylight Saving Time, like, “Oh, here’s a once-a-year thing to remind you to change your smoke detector battery.” It could be any day, could be Halloween, whatever the heck you want to pick for that. But that becomes a thing. The smoke detector’s only going to work as long as that battery works. When that battery doesn’t work, then it’s on me.

[Ken Athanasiou] You’re absolutely correct, and smoke detectors are only so good, right? Yeah, you can detect the fire, but you need all those other controls in place to make sure that you can actually do something.

[David Spark] A fire extinguisher and an exit plan.

[Ken Athanasiou] A fire extinguisher, an exit plan. So, all of these controls that you layer around these things because humans make mistakes. You leave a pot on the stove, you could have a bad circuit breaker, you could plug the wrong thing into an extension outlet. All of these things cause fires. And guess what? It’s not necessarily always the user’s fault, right? And these are all examples where a user’s just perhaps doing a dumb thing, not where you also have a burglar throwing a Molotov cocktail through the window, right? So, you really have to make sure you understand, again, what’s the user’s experience. If they’re trying to do the right thing, how do you make it as easy as possible for them?

[David Spark] This was an analogy-laden episode. I love that. And I want our audience, hopefully, to take these analogies and use them as well. Because while we talk about security taking a lot of the responsibility, the example of the smoke detectors and the example of driving the car are great examples. You can’t just hit the gas, close your eyes, and say, “Let’s hope I get there.” No. [Laughter] There’s a certain level of responsibility that takes place here as well.



[David Spark] All right. We’ve come to the point of the show where I ask both of you which quote was your favorite and why. Ken, we’ll start with you.

[Ken Athanasiou] Yeah, I’m going to go back to John’s.

[David Spark] John Scrimsher’s quote?

[Ken Athanasiou] Yeah. Absolutely. And I 100% agree that if we understand the user experience, we make it as easy as possible for them, and we make it so that the easiest thing for them to do is to do the right thing, then we’re going to get much better results. We may have to introduce friction in certain places, we may have to do certain things to guide them down the right road, just like cops hand out speeding tickets. But at the end of the day, the easier it is, the more compliance you’re going to get.

[David Spark] All right, excellent, and I agree John’s quote is great. Steve?

[Steve Zalewski] So, I am going to pick Simon Goldsmith from OVO, who said, “What we should be striving for is the psychological safety to make mistakes and own them.” Because I think that best represents what I was trying to pose in my question: you’ve got to train them, you’ve got to make sure they own the problem, it doesn’t come for free, but it’s the psychological safety. And therefore, the people have a role to play, the process has a clear role to [Inaudible 00:28:19] for defense in depth and for the introduction of friction. But then I look at the technology and I say, “And this is also a case where the technology can get better.” Natural language processing, understanding emotional content. We have some opportunity there as well to introduce the, “We’re going to make mistakes. Let’s just get good at managing the mistakes, not try to prevent them.”

[David Spark] Very good point. All right. We’ve come to the end of the show, and Ken, I let you have the very last word. One question I ask all our guests is are you hiring, so make sure you have an answer for that one. And a huge thanks to our sponsor Code42. Thank you so much, Code42, for sponsoring this very episode of the CISO Series. Remember, they are the insider risk management leader. For more about Code42, just go to their website. It’s just the way it sounds, you can handle that. Learn more about dealing with insider risk management there. Steve, any last thoughts?

[Steve Zalewski] I want to thank Ken. David, this was a topic that I thought deserved a great conversation.

[David Spark] This was awesome.

[Steve Zalewski] I really felt like we did it justice today for Defense in Depth. We took the question everybody asks, which is how do you get better at phishing, and I really worked with you guys to drive it to where I think there are some underlying challenges in how we approach the problem and how we can get better, so thank you.

[David Spark] All right. Ken, any last thoughts and are you hiring?

[Ken Athanasiou] [Laughter] You know, we’re always hiring. The question is how soon we can get a position open, right? I mean, we’re always looking for good people, and if you find somebody that’s a top-notch candidate, you try and shoehorn them in if you can. It’s just that there’s such a dearth of good, solid cybersecurity folks across the industry, it’s hard to find really quality folks. So, I thought this was an excellent conversation. I think we covered a lot of different areas, covered a lot of ground. The concept of how do we actually protect our users from themselves, how do we hold them responsible for their activities, and how do we wrap the controls around them so that they are better corporate citizens, so to speak. I think it was excellent. Steve, really enjoyed the conversation. David, thank you for having me on.

[David Spark] All right, awesome. Well, if you want to be possibly working with Ken, Ken – who is, by the way, Ken Athanasiou who is the CISO of VF Corporation – we’ll have a link to his LinkedIn profile so you can reach out to him. Mention you heard him on the show, that might help get him to respond. Thank you very much, Ken. Thank you very much, Steve. And thank you, audience. We greatly appreciate your contributions. Send us any great discussions you see on LinkedIn, please. We can turn them into episodes, send them to us, we appreciate it. And thank you for listening to the show. Tell your friends to listen.

[Voiceover] We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cybersecurity. This show thrives on your contributions. Please write a review, leave a comment on LinkedIn or on our site, where you’ll also see plenty of ways to participate, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly. Thank you for listening to Defense in Depth.