By just doing their jobs, your employees are introducing risk to the business. They don't mean to cause issues, but their simple actions, and sometimes their mistakes, can cause great harm. Is it their fault, or is it security's fault for not creating the right systems?

Check out this post for the basis of our conversation on this week's episode, which features me, David Spark (@dspark), producer of CISO Series; my co-host, Steve Zalewski, CISO, Levi's; and our sponsored guest Mark Wojtasiak (@markwojtasiak), VP, portfolio strategy & product marketing, Code42, and co-author of Inside Jobs: Why Insider Risk is the Biggest Cyber Threat You Can't Ignore.

Thanks to this week’s podcast sponsor, Code42

Redefine data security standards for the hybrid workforce. Check out Code42.

Got feedback? Join the conversation on LinkedIn.

Full transcript

David Spark

By just doing their jobs, your employees are introducing risk to the business. They don’t mean to be causing issues, but their simple actions, and sometimes mistakes, can cause great harm! Is it their fault or is it security’s fault for not creating the right systems?

Voiceover

You’re listening to Defense in Depth.

David Spark

Welcome to Defense in Depth. My name is David Spark, I am the producer of the CISO Series, and joining me for today’s discussion, as my co-host, is Steve Zalewski, CISO over at Levi’s. Steve, thank you so much for joining us. Could you grace us with the sound of your voice?

Steve Zalewski

Absolutely. As always, it’s a pleasure to have the opportunity to chat.

David Spark

We are available at cisoseries.com, and we're on the subreddit r/CISOSeries. Every Friday we have a super fun CISO Series video chat, so join us over there. Just go to cisoseries.com and click "Register for video chat." We do it at 10:00 am Pacific every Friday, and at the end of the hour we have a fun one-on-one meetup. So join us. Our sponsor for today's episode is Code42. They are also responsible for bringing us our guest and our subject today, which is insider risk. But you introduced this discussion, Steve, on LinkedIn. What was the feedback, and what were you asking?

Steve Zalewski

I was doing some research internally around insider risk, and most of the conversations were around malicious insiders. Everybody was focused on malicious intent, and I thought, to be clear, let me look at non-malicious. And the theme that came back consistently was human error. Unfortunately, I can't get rid of humans, so it was really interesting to then see how we could address human error.

David Spark

We should also mention, and this is going to come up in the show: not only can you not get rid of humans, because you also won't have much of a business, and I believe it's illegal to "get rid of" them. I mean, there's a polite way of doing that. But humans also make mistakes, and you cannot eliminate mistakes, which is another interesting subject that will come up. Anyway, joining us in this discussion, Steve, is our sponsored guest for today's episode. He is the Vice President of Portfolio Strategy and Product Marketing at Code42, and he's also one of the authors of the book Inside Jobs: Why Insider Risk is the Biggest Cyber Threat You Can't Ignore. It is Mark Wojtasiak. Mark, thank you so much for joining us.

Mark Wojtasiak

Thank you, David and Steve. This is going to be fun. I’m looking forward to it.

Why are they behaving this way?

00:02:42:17

David Spark

Autumn Warnock of Egress Software Technologies said, "In our remote working environment, it's easier to be distracted and fatigued, causing split-second mistakes." And Murtaza Nisar over at Elanco said, "Categorize your insider types and think about the mistake scenarios and tailor the mitigations." That suggestion, that last one right there, Steve, seems kind of on target. Like, what type of people do you have working? What are the common mistakes they make, and what kind of protections could we put around them? Is it that simple?

Steve Zalewski

I actually have to agree. It is that simple. But what I get back to is, sure, if I could just have humans stop making mistakes, I could point out all the mistakes they're making and tell them to stop it. What I liked about his comment, and what I saw as a theme, was human error. But it's like being in the middle of a blast zone, because once you say that's what it is, and you look at all the ways that human error impacts your insider risk, AppDev, AppSec, phishing, supply chain compromise, that's where it gets really challenging: everybody now is simply saying, "Well, in my view, looking at it down this vector, here's what you have to do."

David Spark

Let me take this to you, Mark. Is just identifying common mistakes a good way to handle this, or are the mistakes people make both hard to predict and kind of infinite?

Mark Wojtasiak

Oh, that's a great question. I like both the comments, actually. They both resonate with me in a couple of different ways. We've categorized different types of insiders in our book, but we don't base it on the mistakes they make, we base it on how they get their jobs done. Different employees, different people, are going to work in different ways in order to get their jobs done. We've got people categorized as savers, planners. Their intent doesn't really matter, but you home in on: okay, how are people actually working, and is this work behavior introducing risk to corporate data? And then, what do we begin to look for to decide whether that risk is unacceptable or of material risk to the business? And then I love Murtaza's response, thinking about the mitigation tactics. We call that right-size response. I think, in the old days, you would say, "We don't allow this data moving to this place, therefore write a policy and block it from happening." Well, there are all kinds of workarounds for that, so it's more of, "This data is moving to this place. What's the right-size response to that, based on the level of risk or the level of severity it poses to the organization?" That kind of context is the thing you need to manage this insider risk problem: what's the context behind the activity, so that you can have a right-size response to that activity.
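
To make the "right-size response" idea concrete, here is a minimal sketch in Python of how a data-movement event might be scored from its context and mapped to a proportionate action, from simply allowing it, through a nudge, up to containment. The event fields, destinations, thresholds, and action names are illustrative assumptions for this sketch, not Code42's actual product logic.

```python
# Minimal sketch: score a data-movement event from its context and pick a
# proportionate ("right-size") response. All fields and thresholds here are
# hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class DataMovementEvent:
    user: str
    file_sensitivity: int    # 0 = public ... 3 = restricted
    destination: str         # e.g. "corporate_gdrive", "personal_dropbox", "usb"
    off_hours: bool          # outside the user's normal active hours?

UNTRUSTED_DESTINATIONS = {"personal_dropbox", "personal_gmail", "usb"}

def severity(event: DataMovementEvent) -> int:
    """Combine simple context signals into a coarse severity score."""
    score = event.file_sensitivity
    if event.destination in UNTRUSTED_DESTINATIONS:
        score += 2
    if event.off_hours:
        score += 1
    return score

def right_size_response(event: DataMovementEvent) -> str:
    """Map severity to an action instead of blocking everything outright."""
    s = severity(event)
    if s <= 1:
        return "allow"                 # normal work, add no friction
    if s <= 3:
        return "nudge"                 # e.g. a Slack reminder of policy
    if s <= 5:
        return "alert_security_team"   # a human reviews the context
    return "contain"                   # lock the device or account pending review

if __name__ == "__main__":
    event = DataMovementEvent("jane", file_sensitivity=3,
                              destination="personal_dropbox", off_hours=True)
    print(right_size_response(event))  # -> "contain" (3 + 2 + 1 = 6)
```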

David Spark

That’s a really good point. And, you know, I was thinking about this, you know, looking at the activities. Could sometimes, and maybe from your experience you’ve seen this, could simply changing an application, changing a process, greatly reduce risk? I’ve got to assume that it does. I think about the days when we had assembly lines and they look at every step in the assembly line and they go, “Well, people can hurt themselves less if they stop doing that, and we can save four steps here.” So it’s kind of a thing like, we can be both more efficient and more secure at the same time if we’re looking at this in a sort of a detailed way. Have you seen companies do that? Have you worked with companies like that, Mark?

Mark Wojtasiak

Yes. I mean, obviously, people work in unique ways, and each culture, each organization, is different. So insider risk is not a technology problem; it is a people, process, and technology problem that's largely a risk to data. So you have to think about whether there is a better way, and we don't necessarily think about changing processes. I think that's been attempted relative to, okay, this is a sanctioned application and this is an unsanctioned application, so let's just sanction and bless the applications that employees can use. And in some cases that doesn't work, because employees will continue to use whatever they want to use to get their jobs done. We see it all the time. We're a Google house, but these employees are moving stuff to Dropbox. Why is it going to Dropbox? Hey, we don't allow USB devices to be used, we even lock them out of the notebook, but somehow we write an exception and the employee is moving stuff to a USB device. So sometimes you have to think about it more holistically, and we've seen customers think about that. Again, I go back to that context, that visibility. We may have policies and processes in place that say, "Hey, you are allowed to use this cloud platform exclusively for working with external clients." So the corporation might be on Office 365, but clients may use Box, for example. That's a process change, but you've got to have visibility and context for that.

How do we go about measuring the risk?

00:07:59:00

David Spark

Robert Fly of Elevate Security mentioned that the insider risk equation includes the variables of the actions they take, the access they have, and how often they're attacked. "How we treat employees isn't a one size fits all problem," kind of referring to what you said, Mark, just a moment ago. "Each individual inherently is more or less of an unintentional risk, based on the buckets above." And Bal Aditya of Right-Hand Cybersecurity said, "How about tailored education and training? It would involve understanding a user's profile, including the data entries." So I'm going to throw this one to you, Steve, first. We're all sort of special flowers is the way they're describing it here. You know, we do want to have processes, but at what point do we have to stop with the processes and look at the individuals?

Steve Zalewski

So, how about another way, which is: how about if we understand that humans make mistakes, it is what it is. That is the brutal truth. We can try to prevent them from making mistakes by making it so hard to do their job that they then circumvent what we're doing: multifactor authentication, defense in depth type things. The other way to look at this, and there are a couple of companies out there doing it, is: how about if we look at the behaviors of the individual, and can we map their behaviors against how the mistakes are being made? So that we recognize the contexts in which mistakes are made: being under high pressure for deadlines, being remote with your kids in the room with you while you're trying to be on a call, so you're not paying attention. Is there a way that the context of how human mistakes are made can be incorporated, so that we can be more in tune to you making a mistake, regardless of your role? Because I don't want to try to define a set of rules for an app developer and a set of rules for an administrator and a set of rules for a designer of jeans or for the CEO. What I'd like to do is be able to characterize the indicator of attack, or the indicator of mistake, and then take action.

David Spark

Good point. Mark, it seems like going individual by individual would be just a ludicrous task, and it's impossible to know each individual that well, but those sort of basic understandings that Steve points out seem like kind of a logical way to go. Is there more to the equation here?

Mark Wojtasiak

We've actually thought about this in a couple of different ways. I think it is hard to look at each individual and manage the risk of each individual, so first and foremost, and Steve mentioned this again, the word is "context". If you have the ability to look across every file in the organization, every user, every potential vector where that data or that file can flow, and you begin to look for behaviors or activities, you begin to see patterns of work, right? So, for example, someone may have certain active hours that they work; no one works nine to five anymore, not in this era. Are their active hours changing, and how does that signal risk? When you see those patterns, we call them insider risk indicators, and there are severity levels of these indicators. So, based on severity level, you begin to see trends in behavior and activity, and some of those things require you to go back to right-size response. I go back to, hey, this person may need a nudge, a reminder, via Slack, via whatever the communication channel: "Hey, you have an open file share on Google Drive. We have a policy against open file shares. Can you please lock that down to our organization?" Just a simple nudge. You may see trends across the organization, or within departments, that tell you, "You know what, we need to give finance more security awareness training around phishing attacks." You can look at it at the department level. One approach that's interesting is security going to each line of business and trying to understand what their risk tolerances are. So you mention R&D or DevOps; they might have a different risk tolerance, they might have different processes for data, and knowing that, you're better equipped to understand data activity and potential insider risk.
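
As one concrete example of the "insider risk indicator" idea Mark describes, the sketch below flags a shift in a user's active hours by comparing recent activity against a historical baseline and mapping the size of the shift to a coarse severity level. The thresholds and the "rare hour" cutoff are arbitrary assumptions for illustration, not any vendor's detection logic.

```python
# Minimal sketch of one insider risk indicator: a change in a user's normal
# active hours, graded into coarse severity levels. Thresholds are illustrative.

from collections import Counter
from datetime import datetime
from typing import Iterable

def hour_histogram(timestamps: Iterable[datetime]) -> Counter:
    """Count activity events per hour of day (0-23)."""
    return Counter(ts.hour for ts in timestamps)

def off_baseline_fraction(baseline: Counter, recent: Counter) -> float:
    """Fraction of recent activity falling in hours rarely seen in the baseline."""
    total_baseline = sum(baseline.values()) or 1
    rare_hours = {h for h in range(24) if baseline[h] / total_baseline < 0.02}
    total_recent = sum(recent.values()) or 1
    return sum(recent[h] for h in rare_hours) / total_recent

def active_hours_indicator(baseline: Counter, recent: Counter) -> str:
    """Map the size of the working-hours shift to a severity level."""
    shift = off_baseline_fraction(baseline, recent)
    if shift < 0.10:
        return "none"
    if shift < 0.30:
        return "low"       # perhaps nothing, or a gentle nudge
    if shift < 0.60:
        return "medium"    # worth correlating with other indicators
    return "high"          # escalate for review

if __name__ == "__main__":
    baseline = Counter({9: 40, 10: 55, 11: 50, 14: 45, 15: 40, 16: 30})
    recent = Counter({22: 12, 23: 10, 1: 8, 10: 5, 11: 5})
    print(active_hours_indicator(baseline, recent))  # -> "high"
```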

Well, I guess that’s one way to solve it.

00:12:34:07

David Spark

David Moratti of Leviathan Security Group said, "Your goal should be to use technology to make it as easy as possible for people to make the right choice." And Keyaan Williams of Cyber Leadership and Strategy Solutions said, "Technical controls that enforce least privilege and least functionality help to reduce the frequency and the impact of mistakes." And Philip Winstanley of AWS said, and I like this one, "How do we make it safe for users to be phished, and how do we make it safe for them to actually give away their credentials, but still protect their accounts and company assets?" So I'm going to start with that last one from Philip. That's a doozy of a request. The way I envision it, it's like this virtual padded room. How can you make it so that they could just bang themselves against the walls as much as possible and still not hurt themselves or anybody else? Steve?

Steve Zalewski

Well, the upside for Philip is that it is possible. Those are things that I think a lot of companies are doing: the technologies of multifactor authentication, and then the ability to detonate malicious attachments and everything, so we've done a lot there. What I can't do anything about are the people who get an email that says, "Please enter your user name and password so I can do blahdy-blah," and they do it repetitively. They just trust so much that they can't imagine that somebody would be malicious. And so when you talk about user error or mistakes, again, it gets back to the, okay, in this case, is this a user who just has a behavioral trait of trusting everybody, because they can't imagine somebody would be trying to do the wrong thing? And we have some of those. And so you talk with them, and then they do it again, and then again, and you're like, this is the fourth time they've done that.

David Spark

Have you ever canned someone, or has your company ever canned someone, because they just kept repeatedly making mistakes like that?

Steve Zalewski

Let’s just put it this way. That topic has come up recently.

David Spark

Okay, alright. Let me throw this to Mark. Mark, to what level can we protect people from themselves? I think that’s what it really boils down to. To what level can we do this?

Mark Wojtasiak

I don’t know if we can.

David Spark

I’m not saying 100%, but is there a level we can do this?

Mark Wojtasiak

Yeah, there’s a level of technical controls we put in place. We’ve been putting in place technical controls for everything and that’s about accidents.

David Spark

Right, but let's address the two things that were mentioned by Philip here. To what level can we protect people when they click on a phishing link, and to what level when they give away their credentials? I know there are other mitigations we can put in place here, because credential theft is the most common thing, and then you've got to watch the behavior, you know, is this anomalous compared to how this person normally behaves?

Mark Wojtasiak

Yeah, one thing that we've pondered, and we've brought this up in some conversations, is the idea of an individual's risk posture, for example. Obviously we're very aggressive in our security culture at Code42. We have a ton of security awareness training, and security is embedded into the culture. We're a security company, therefore it has to be. And it isn't necessarily a "three strikes, you're out" type of thing, but it is this idea of: how risk aware are you as an individual? And we celebrate the employees who are risk aware, the ones who pass every phishing test that we do. We call it the Ninja program; we developed it at Code42. There are belts, like white belt, black belt, brown belt, and you take a number of different trainings and courses and have to pass tests and curriculum, and it's fun to make the employees part of security, make them security aware, risk aware, have them second-guess. Those are, obviously, technical controls, but there are also emotional controls that we can put in place, so that they're part of the solution, not necessarily something that we're trying to manage the problem out of, or manage the risk out of.

Steve Zalewski

So I want to jump on that, which is: we've also got to talk about "so what, now what?" How about, instead of trying to train them not to make the mistakes, acknowledge that they are making them and contain it. It's perfectly reasonable to disable people's accounts or to stop them from doing business, and so the other part we have to realize is, you can't fix people, and so, therefore, let's get back to containment. Let's acknowledge that they're going to make the mistakes and contain them, even if they lose some efficiency.

David Spark

I don’t think it has to be an either/or situation. I think both can very much coexist, can’t they? Can’t we train them and contain? Training, containing?

Mark Wojtasiak

Yeah, that is the right-size response. That is, what is the mitigation, right? So in some cases, to your point, Steve, we have to contain it. There has to be a heavy hand. You have to lock the device, you have to freeze their identity, you have to do something, whether it's through our identity management program or system or what have you. There is the heavy hand. There are cases where we're going to have to pull the rip cord. But in a majority of the cases, it is a different type of response, right? It's tied to severity, it's tied to frequency, it's tied to what level of employee they are and what access they have. It depends on the organization. But you're right, Steve, containment has to be one of the types of response.

Steve Zalewski

I will push back and simply say, look, it's a carrot and a stick. You're an adult. We pay you, we train you, you get the job done, okay? I'm not always going to give you candy and pat you on the head and say, "It's okay." There is a consequence, and if you can't do your job and your manager says, "How come you can't do your job?", sometimes, to your point, it has to be heavy. But I think it also has to be fair, and that's what we're trying to get to, which is: active containment means it will be easy to start with, but if you're a consistent repeater, then it's going to impact you directly and there are going to be consequences.

Mark Wojtasiak

It's interesting. We threw out just one more thing, David: the idea of, what if security was part of your compensation model?

David Spark

We’ve brought this up on the show before, yes.

Mark Wojtasiak

What if you don't get your bonus? The more risk you introduce to the organization, and the more we've tried to remedy it but you continue to do it, there's a price to pay. There are all kinds of ideas on how to handle this from a cultural standpoint. We don't do that today, but it does require some out-of-the-box thinking sometimes.

What are the best practices?

00:19:20:22

David Spark

So, before I read these two quotes, I'll just mention something I stumbled on today. I was doing a money transfer today, from my bank to another bank, and for the first time ever I saw this: a pop-up window came up that alerted me, "Make sure you're sending this to the right person," and then a bulleted list of, "These are the ways they get you." You shouldn't be sending money if someone just called you out of the blue, or you don't know the person, or you haven't verified it, all this kind of stuff. It was the first time I ever saw something like that which, in my mind, is a way to, hopefully, stop people from making stupid mistakes. And it was forced; I had to look at it before I could make it vanish. But I did know who I was sending the money to and, God willing, it's going to get to them. But let me read the quotes here. Heather Hinton of RingCentral said, "Make it real, understandable and clearly relatable so that the stakeholders understand how easily something bad could happen, based on your products/services/data and an overworked person trying really hard to get something done/take a short-cut." And Erik Bloch here has, I think, a really great quote: "Sit down with your admins and developers and ask them, if they wanted to cause damage, how would they do it? It's amazing how many everyday tools and processes have gaps that those who use them every day can point out easily," and this goes back to the assembly line model I was talking about. So, Mark, this just seems like a good idea. Like, "Hey, you're using this, what would make your life easier and help you make fewer mistakes?" Does it just come down to that?

Mark Wojtasiak

Yeah, I think it does come down to conversations like that. One of the things that we talk about often, and I think Erik brings it up relative to admins and developers, is to sit down with them at a department level. How do you get your jobs done? How do you work? And this is where we introduce the idea of risk tolerance. The risk tolerance of an admin or a software developer, and of that group and the leader of that group, is probably different from the risk tolerance of the head of marketing or the head of sales. Begin to think like an insider, begin to think like an employee. How do you get your job done on a daily basis? Put them into scenarios. "Okay, what if you're working from home and you have a deadline that's a day away and you're only ten percent done with something, but your child needs your computer? What do you do?" "Well, I put everything on a thumb drive and I run to my parents' house and I put it on their computer and work." "Well, that introduces risk. What happens–" And they have those types of conversations. And then you begin to understand data movement, where data's going, how employees are using files, what vectors they're using, and where potential risks exist, so that you can tailor training, awareness, tools, processes, what have you, to how those departments work or how those specific employees work.

David Spark

Steve, I like the scenario that Mark threw up. Let me ask you, have you or anyone on your team, actually sat down and had that kind of conversation with a non-cybersecurity employee?

Steve Zalewski

Yes, more times than I care to imagine.

David Spark

Really?

Steve Zalewski

Not always with a positive outcome. And now I’m going to explain that. This whole conversation, for the most part is, hey, there are cultural norms and there are company policies. I work in an international company. I do business in 110 countries; we have a physical footprint in 65. Well, I got to tell you, in large parts of the world, the cultural norms in the countries do not align with our security practices and policies as a company.

David Spark

Is there an example you can actually give us here?

Steve Zalewski

Let’s just say there are certain Asian countries where the people consider it okay to use the corporate assets for third party revenue generating opportunities.

David Spark

Okay, got it.

Steve Zalewski

And you have to explain to them, "That is not a good idea," and then, "What were you thinking?" And they look at you and go, "Yeah, but that's pretty common around here. It's a corporate asset, but when we've got time to spare at lunch, we want to be cutting third party movies and selling them on the side," and you're like, "What!?" So that's where I bring in the mistakes and cultural norms. Again, human error doesn't just mean they made a mistake. It means cultural norms introduce human error in ways that you didn't normally think about, and that's the part of the best practices, the rock I needed to throw into this pond: everything we're talking about assumes the two are aligned, and we didn't talk about that in the LinkedIn thread. A lot of people have to realize, again, human error and cultural norms, it is a continuous conversation. There is no one single answer, but again, you have to think about the carrot and the stick. I always say, there is consequence. People can claim, "We do this out here." We can say, "You do, and you'll be doing it somewhere else, because we cannot accept that risk." That's my story and I'm sticking to it.

Wrap

00:24:31:05

David Spark

And that’s where we’re going to wrap up this show. That was great and, by the way, a good thing for people to ponder is the cultural norm hook, because if you don’t live there, you don’t see it, you have no visibility as to what’s going on.

Steve Zalewski

And that was a true story, about the guy that was cutting CDs and making movies on the side on corporate time.

David Spark

You know, he's finding another revenue stream. Alright, that brings us to the part of the show where I ask for your favorite quote, and I will start with you, Mark. What was your favorite quote in the show, and why?

Mark Wojtasiak

Oh, my favorite quote, probably, I'm going to go back to Murtaza. I think that when you think about categorizing the types of insider risks and then mapping those to what we call "right-size response", what he calls "tailoring the mitigation", that's critical, and it's, to Steve's point, carrot and stick. There are going to be times when you have to use the stick, and times when you're going to have to use more of a carrot, or somewhere in between. So it's not a one size fits all problem. That sums up Code42's perspective on it in one sentence.

David Spark

Excellent. Alright, Steve, your favorite quote and why?

Steve Zalewski

I am going with Heather Hinton from RingCentral, because it dovetails with my last comment, which was: you've got to make it "real, understandable and relatable", and that's just it. You can't rely on English, right? I've got 18 or 19 languages. I've got culture, and so you've got to make it understandable and relatable to the individual in the instance, which is not one size fits all. It's also not that everybody's a snowflake, but you have to realize, again, it's a balanced risk assessment of countries, processes, and what you do. And so it always comes back to: I can't stop it all, but what I can try to do is address the ones that are most important to the company. So I leave you with this: I can't protect everybody equally, but I can protect the key assets, which are my humans and key business processes, adequately.

David Spark

Alright. I want to thank your company, Mark, Code42, and plug Mark's book, Inside Jobs, which I know you co-wrote with two other people, one of them being a guest we've had on our other show, and who's going to be a guest on, well, actually two of our shows, Jadee Hanson. And what is in Inside Jobs that is different from, or more than, what we've talked about so far?

Mark Wojtasiak

Yeah. Thanks, David. Inside Jobs is, yes, a security book, but it's more of a business book. It's everything that we've learned over the last five years around the insider risk problem, and we talked about a lot of it today: the cultural catalysts behind it, the fact that it's not one size fits all, the need for partnerships across lines of business, IT, legal, and HR, and the right-size responses. It's got a lot of stories in it, there's a lot of storytelling, there's a lot of practical advice, and then we finish up the book with some actual frameworks that we've deployed and used at Code42 around managing the problem.

David Spark

And any last advice you’d like to give to our audience, or what they can do, if there’s any offer from Code42 or anything they should check out? What’s your recommendation?

Mark Wojtasiak

Yeah, we recently launched a framework around insider risk management that coincides with our product, Incydr. It's a practical approach to managing this problem, a five-step approach. I encourage listeners to visit code42.com and check out our new insider risk management framework.

David Spark

Thank you very much. And thank you, Steve Zalewski. And thanks to Code42 as well for sponsoring this very episode. And, as I always say, thanks to the audience. Thank you for all your awesome contributions and listening to Defense in Depth.

Voiceover

We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cybersecurity. This show thrives on your contributions. Please, write a review, leave a comment on LinkedIn or on our site, cisoseries.com, where you’ll also see plenty of ways to participate, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at david@cisoseries.com. Thank you for listening to Defense in Depth.