Security professionals should turn in the cyber hero mentality for the “sidekick” role. Many cybersecurity leaders believe they need to save the company from all the stupid users who can’t protect themselves. The reality is security professionals should lose the savior mentality for a supporting role where they’re running alongside different business units, trying to find ways to make their processes run more smoothly and securely.
This week’s episode is hosted by me, David Spark (@dspark), producer of CISO Series, and Andy Ellis (@csoandy), operating partner, YL Ventures. Our sponsored guest is Clyde Williamson, product management, innovations, Protegrity.
Got feedback? Join the conversation on LinkedIn.
Huge thanks to our sponsor, Protegrity
[Voiceover] Best advice for a CISO. Go!
[Clyde Williamson] Don’t base your budget on fear. Stop thinking about information security as, “How do I comply with the regulation?” or “How do I make sure that the bad thing doesn’t happen?” and instead focus on how you can help the company do whatever it is they need to do in a way that keeps the bad thing from happening. It’s really got to be about enabling the business rather than worrying about what could go wrong with the business.
[Voiceover] It’s time to begin the CISO Series Podcast.
[David Spark] Welcome to the CISO Series Podcast. My name is David Spark, I am the producer of the CISO Series. Joining me on this very episode is the one and only, the magically talented Andy Ellis, currently operating partner at YL Ventures. Andy, thank you so much for joining us. What do you normally sound like?
[Andy Ellis] Good morning, good afternoon, good evening, or perhaps good night.
[David Spark] Our listeners are experiencing one or the other.
[Andy Ellis] Hopefully.
[David Spark] Or they could be listening because they can’t fall asleep, and they need our dulcet tones to put them to sleep.
[Andy Ellis] Absolutely. There are days that I listen to myself and fall right asleep.
[David Spark] Do you, by the way – my wife does this – do you ever listen to podcasts to try to fall asleep?
[Andy Ellis] I generally don’t, but I will do like meditative things, guided meditation to fall asleep.
[David Spark] All right. You know, we’re available at CISOSeries.com. I know you know that, Andy. I hope our audience knows that as well. We have lots more programs. Go check them out. Hey, guess what? Our sponsor today – it’s Protegrity. Data fuels your business, Protegrity protects your data. And guess what? We’re going to be talking about data protection today because our very guest comes from Protegrity. But before I get to that, I have a huge announcement to make, Andy.
[Andy Ellis] Whoo.
[David Spark] As some listeners here know, I’m a big fan of the pinball, I like playing the pinball. And I did something that I’ve never done before in my life. I played a new pinball game, the Toy Story 4 game from Jersey Jack Pinball, which is a spectacular game. The very first game I played on this game, I got the high score.
[Andy Ellis] Whoo.
[David Spark] I was way impressed with myself.
[Andy Ellis] Well, if it was the very first game you played on it, and nobody else had played on it, you would have the high score.
[David Spark] No, others had played on it.
[Andy Ellis] Oh, okay.
[David Spark] Honestly, it hadn’t been there for too long. My guess, I think the game physically, it’s a really spanky new machine, and I think it was only in this bar for like a few days.
[Andy Ellis] Okay.
[David Spark] So, it only had a few days of play, but still, first drop, high score.
[Andy Ellis] Congratulations.
[David Spark] I was pretty darn proud of myself.
[Andy Ellis] We should always celebrate even the minor achievements.
[David Spark] I am big about these. And by the way, every time I get a high score on a pinball machine, I take a photo, I post it to the Facebook. I care. A few of my friends care as well. That’s what I’m all about.
[Andy Ellis] That’s all that matters is sometimes people really need to just stop for a moment and celebrate the wins that happen every single day. It doesn’t have to be this gargantuan world-changing win but just recognize awesome stuff is always happening.
[David Spark] I can’t begin to tell you how much thrill I get out of getting a high score on a pinball machine.
[Andy Ellis] Oh, I’m pretty sure I know how much thrill you get.
[David Spark] I do! [Laughter]
[Andy Ellis] Because I can see you on the video right here how much thrill you’re getting telling me about getting the thrill of the high score.
[David Spark] Oh, yeah, yeah, I do. And by the way, I am trying to buy my second pinball machine. I currently own a Star Trek 2013, and I’m trying to buy Jurassic Park. If any listeners are also into pinball, the marketplace has spun out of control. They’re very expensive, extremely hard to get, demand is far outpacing supply right now.
[Andy Ellis] Welcome to supply chain issues of 2022.
[David Spark] Yeah, and it’s hitting pinball, I’ll tell you that much.
[Andy Ellis] That is a tragedy.
[David Spark] Mm-hmm.
[Andy Ellis] That is like not even First World problems, that’s like Zeroth World problems.
[David Spark] I know. Well, that’s the kind of struggle I’m dealing with. Feel my pain, Andy.
[Andy Ellis] I feel your pain.
[David Spark] I appreciate that. With that being said, let’s bring on our guest. Very excited to have him onboard. We were chatting just moments ago, and I know he’s going to be awesome. Not too much pressure on you. This gentleman’s our sponsored guest, in product management and innovations with Protegrity. It’s the one and only Clyde Williamson. Clyde, thank you so much for joining us.
[Clyde Williamson] Thank you for having me, David.
Sit down everybody! It’s cyber community circle time.
[David Spark] We are suffering from an enormous sensitivity around sharing information. Many vendors catalogue and anonymize threat information in an effort to disseminate more intelligent information about threats, but it never seems global. There are too many concerns, often rightfully so, around privacy, legal issues, and corporate survivability that prevent many businesses from being open about sharing information. This is one of the core reasons I believe we have a cybersecurity staffing shortage issue. What could be done by one to service many is being done by many to service many. We don’t need that. So, I ask you, Andy, what effort in sharing information, in any format that you’ve seen, has made a demonstrable impact on your security posture, and what are you striving to achieve next?
[Andy Ellis] So, I actually think the problem’s pointed in a different direction. I think we try to share too much information.
[David Spark] Really?
[Andy Ellis] I actually think that most threat intelligence – and I put air quotes around that – needs so much context to be useful to anyone and is so specific to certain environments that this idea that, “Oh, everybody has their own feeds, and you can integrate all the feeds,” like, the feeds of low-level data is not interesting. What’s interesting is highly enriched data, yeah, things that are about TTPs. Somebody has done analysis, and that analysis gets shared which says, “Hey. Here’s what we’re seeing about the ways in which attacks are being done.” Like, I don’t need to know, “Oh, here’s an IP address that was a problem yesterday,” because today it’s not a problem anymore. It was a rental IP address in some fashion, and I think there’s been this over-focus on automated information sharing rather than a focus on…
[David Spark] Sharing TTPs.
[Andy Ellis] Sharing the TTPs in a meaningful fashion, and I do think we see a lot of that. Granted, a lot of vendors want to do it for competitive purposes, want to be the only one who knows the TTPs that they know about, at least for some period of time. It’s like, “Oh, I discovered this. I have the signature for it. I’m going to deploy the signature first, and then I will announce the TTPs to the rest of the world so that my competition is a month behind me.”
[David Spark] Clyde, I throw this one to you. Do you agree? I think this is an interesting tactic. We don’t just need more raw data. We just need more analysis of the raw data that brings more context.
[Clyde Williamson] I agree completely. I think that the trick is really in getting the knowledge rather than the information shared. I was doing some research as part of what I do in my day job around some machine learning stuff, and I came across some really interesting white papers. I was doing research on federated learning, which is an idea where you deploy machine learning models down to, for example, endpoints. The machine learning models learn from the data at that endpoint, and then the machine learning models take the trained models and return those back so they can be integrated into an overall view. So, the data never leaves the endpoint.
The first paper I read on this, they were doing this with cellphones. So, they could monitor behavior on cellphones, train models on what was happening on that cellphone, and send that information back to a central location where that could all then be aggregated, and they could actually get alerts out quickly and updates out quickly to detect when there were things happening on the phone that shouldn’t. There’s several other papers like this, most of them over the past two or three years, where they’ve been focused rather than on, “How do I share the data?” it’s around, “How do I learn what I need to learn from the data you’ve got over there, and then take that education and apply it with everybody else?” And I think that’s a really interesting way to try and tackle the problem.
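For readers curious what that federated pattern looks like concretely, here is a minimal sketch, not drawn from any of the papers Clyde mentions: a toy one-parameter model is trained locally on each “endpoint,” and only the trained weights, never the raw data, are returned to the server for averaging. The function names, data, and learning rate are all illustrative assumptions.

```python
# Toy federated averaging (FedAvg) sketch: local training, central averaging.
# The data itself never leaves the client; only model weights are shared.

def local_update(weights, data, lr=0.02, epochs=5):
    """Train a 1-parameter model y = w*x on local data; return the new weight."""
    w = weights
    for _ in range(epochs):
        # One gradient-descent step on squared error, computed locally.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains on its own private data; the server averages results."""
    updates = [local_update(global_w, data) for data in clients]
    return sum(updates) / len(updates)  # FedAvg: simple mean of client weights

# Three "endpoints", each holding private samples of roughly y = 2x.
clients = [
    [(1.0, 2.1), (2.0, 4.0)],
    [(1.0, 1.9), (3.0, 6.2)],
    [(2.0, 3.9), (4.0, 8.1)],
]
w = 0.0
for _ in range(20):          # 20 aggregation rounds
    w = federated_round(w, clients)
print(round(w, 1))           # converges near 2.0
```

Production systems add secure aggregation and differential privacy on top of this loop, but the privacy-preserving shape is the same: knowledge moves, data stays put.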
[David Spark] I don’t think this is going to solve the staffing issue, which is what I was arguing about, but what is the next level then that we need to achieve? Is it just more of this sharing of knowledge?
[Andy Ellis] So, this isn’t even a key contributor to the staffing issue. The single biggest contributor to the staffing issue is that corporate HR departments have become risk avoidance departments and have learned a lot of wrong lessons from a bunch of legal activity over the last 20 years, and the job postings that go up don’t actually fit the skills that people need or have, and so we’re not trying to hire the humans that exist. That’s the staffing shortage in a nutshell.
[David Spark] You get the last word, Clyde.
[Clyde Williamson] I honestly wish we were still back where we were in the early ’90s when I started, and Bill Cheswick was writing papers like An Evening With Berferd where the way that you learned about what was happening was people just shared that data freely all the time. I wish we could go back to that.
You couldn’t have done better than that?
[David Spark] Security professionals should turn in the “hero” mentality for the “sidekick” role, as you said, Andy, in your opinion piece on CSO Online.
[Andy Ellis] What a genius writer that person was.
[David Spark] I’m taking it back. You outlined the stereotypical opinion many cybersecurity leaders have of themselves that they need to save the company from all those stupid users who can’t protect themselves. The reality is security professionals should lose the savior mentality for a supporting role, where they’re running alongside different business units trying to find a way to make their process run smoother and more secure. As much as I support this theory, many users view their security department as protectors and the solver of all their problems. Now, that’s the users. I mean, heck, it’s in their name, the security department. So, is it okay if users see security as heroes, but security professionals shouldn’t see themselves that way? What do you think, Andy?
[Andy Ellis] Oh, absolutely. When I’m driving my kids to school, which I’m about to be out of the business of doing because my daughter’s about to have her driver’s license, and then I drive back, I have to drive past a school where there is one of the most amazing crossing guards I’ve ever interacted with. She is engaged, she’s happy, she’s dealing with everything. The kids come by; they give her a high five. Like, amazing human being, and I am certain that those kids see her as a hero.
But I’m pretty sure she just sees herself as somebody whose job is to facilitate getting these kids to school safely. She’s trying to help keep them safe, but there’s nothing antagonistic about it; she’s not a superhero, and she absolutely does not want to have any heroics because that means something went really wrong. Her job is to stop traffic in a way that causes nobody to have any heartbreak, and everybody moves in and out. She’s also juggling the cars that are trying to get out of the school at the same time. I couldn’t imagine dealing with what she deals with. But a smile on her face, she very clearly has this affect that she is just helping to facilitate everything smoothly. She’s not in charge, she’s not like, “Aha! I get to hold you up, and this is fun.”
[David Spark] Essentially she’s not letting this go to her head is what it is.
[Andy Ellis] It’s not going to her head. There’s no ego trip here.
[David Spark] All right. Let me throw this to you, Clyde. Do you believe that’s okay? Because that’s not easy. If this dynamic is true, the users believe the security department is heroes, but the security department does not see themselves that way – and God knows if this happens or not, whether the users lavish praise on the security department, God willing we would all love that – but they wouldn’t let something like that go to their head. What do you think?
[Clyde Williamson] Well, yeah. I think the security team certainly has a reputation inside most organizations. I don’t know that hero is usually the reputation that they get though. Usually it’s the jerks that won’t let me do the thing I want to do.
[David Spark] Valid point, Clyde.
[Andy Ellis] Right. Because security sees themselves as the hero…
[Clyde Williamson] Exactly!
[Andy Ellis] …but nobody else does.
[Clyde Williamson] So, I think I can say this because Stranger Things has normalized it, I’m a Dungeons and Dragons nerd. I have been for decades. I have run games, I have been a dungeon master, I have done live action games. And one of the most awesome things is when a player does something that you did not prepare for. You’ve spent hours building this dungeon, and they do something to get around a puzzle or something like that just it wasn’t planned for. Or the rules don’t cover how the thing they just described that they wanted to do would work.
And so in storytelling, we have this kind of maxim that we use which is “yes and.” So, whenever the player does something completely crazy, you say, “Yes, and here are the controls that we’re going to use,” you’ve got to roll these dice or whatever, this is the challenge rating, in order to make it fit in the bounds of the game. So it still works within the rules, whatever the crazy thing is you just came up with, “Yes, and here’s how we’re going to make that work.”
And I think that’s what information security needs to be. We’re not the heroes, that’s the people who are doing stuff with the data to make money. We’re not even the sidekicks. We’re really kind of like those dungeon masters who are there saying, “What is the crazy thing you’re about ready to do? Great. Let me help you figure out how to do that safely.” And I think that the more that information security becomes that, the more that information security becomes the people we go to when we want to do something right, and they help us get it done. I think that makes it work a lot better.
[Andy Ellis] I don’t know that I completely agree with Clyde’s model, but I’m going to run with it anyway. As a long time gamer myself, both live action and desktop.
[David Spark] And by the way, we’ve got D&D nerds across the board here.
[Andy Ellis] Yeah, I’ve written 10-day live action games, and one of the best game writers I ever worked with at MIT, Jamie Morris, once said, “GM coolness is inversely proportional to player coolness,” and I think it applies here. That when you have the player, the business unit who’s doing something cool, your coolness as the GM, or in this case the security person, gets in their way and takes away their ability to do awesome things. So, you want to get out of the way of their coolness and support it, the “yes and.” Okay, so you want to do this crazy thing, and how am I going to fit that into the world that we’ve got going on here in a way that doesn’t blow everything up? But it’s not that I’m taking away your ability to do this.
It’s time to play “What’s Worse?”
[David Spark] Clyde, you’re familiar with this game, correct?
[Clyde Williamson] Barely, yes.
[David Spark] Barely. All you need to know – it’s a risk management exercise.
[Clyde Williamson] Right.
[David Spark] I’ll give you two scenarios. They are usually brought to me by listeners. [Inaudible 00:15:14] and I will say to our audience – please send in more. This one I took from the Twitter feed called “Bad Things Daily,” which is great if you are creating tabletop scenarios or need some topics for your tabletop scenarios. I recommend it, it’s excellent. So, the Twitter feed “Bad Things Daily,” and I just picked two items that I thought were, hopefully, equally painful. And I always have Andy answer first. I love it when people disagree with Andy, no pressure on that.
[Andy Ellis] I win if you don’t disagree with me, just to be very clear.
[David Spark] Supposedly Andy wins, but I don’t think Andy ever wins.
[Clyde Williamson] Well, let’s see what’s worse then.
[David Spark] Here we go. Andy, here we go. From “Bad Things Daily,” scenario number one. An employee’s active session to your enterprise chat was just purchased by an adversary. Or an employee has taken their device to a local repair shop instead of your Help Desk, and all credentials needed for access are provided. Enterprise chat is not available though via the phone, I should mention. Which situation is worse?
[Andy Ellis] Oh, the first one. Because it’s an active adversary who is attempting to gain access to what we’re doing. The second one, not necessarily an active adversary, although we should try to recover that laptop and pay for it, so we don’t have a giant political scandal when the shop owner goes and sells it to The New York Post. Not that that happened in the last couple years.
[David Spark] It’s interesting. You’re saying the known one is purchased by an adversary, so it’s a known bad. The second one, the thing is they’re handing over credentials needed for access on the second scenario.
[Andy Ellis] Absolutely. In that second scenario, they’re handing over the credentials. Now, if the repair shop owner is good, and I’ve interacted with some who’ve actually reached out to us and been like, “By the way, I have this device right here.” I’ve had it both from former and from current employees though, former employees who claimed they lost their laptop and then took it to a repair shop and said, “Hey, could you wipe this for me?” Repair shop called us.
[David Spark] Really?
[Andy Ellis] Oh, no. Absolutely, happens all the time. So, this one, it’s not a good thing, but it’s a routine thing. It just happens. You hope that you can clean it up. But generally, like at that point, here’s the deal the repair shop owner has. They basically have three choices. So, one is they do their job, and they move on, and they forget about everything. Which honestly, that’s kind of the norm. The second is they do something nefarious, in which case they have committed a crime from a physical point of presence that is going to put them out of business when they get caught.
[David Spark] Yeah. And the thing is you’ve seen that person too.
[Andy Ellis] Right. That’s a pretty rare crime. Or the third thing is they actually reach out, and they say, “Hey, I got a problem here because I didn’t want to have access to all this stuff but now I do. Can you please take it all away?” So, that one, I’m not actually that stressed about because it’s going to happen. The first one, that one really sucks.
[David Spark] All right. Well, the first one could be innocuous too.
[Andy Ellis] An adversary bought something of mine.
[David Spark] There’s something there to spend money on.
[Andy Ellis] There’s something to spend money on.
[David Spark] That’s a good point. All right. Clyde, which one’s worse?
[Clyde Williamson] I’m going to agree with Andy for almost the same reasons except that for me, the person taking the laptop somewhere they shouldn’t. That’s a human behavior, it’s one that, like Andy said, we expect will happen from time to time, we can plan for that. But somebody selling off a chat to a competitor or to an adversary of some kind, that’s probably not something anybody planned for or had any kind of a control in place to keep an eye on, which means that it’s something new. And the new thing is always worse than the other thing that we’ve already got at least some answers for.
[David Spark] Excellent point.
Please. Enough. No more.
[David Spark] Today’s topic is data protection, and that is a very grand topic. It’s huge! It’s kind of everything in cybersecurity, isn’t it? So, it’s being handled many different ways. But Andy, I will start with you, and you can take this from any angle you want. What have you heard enough about when it comes to data protection, and what would you like to hear a lot more?
[Andy Ellis] I have heard enough about data classification. And I don’t mean data categorization of like, “Oh, this is obviously PII,” things that are very clear structured data. I mean, the “Oh, which things in here are corporate confidential?” versus corporate secret versus corporate top secret. And “Oh, that’s why our data protection solution didn’t work because you were supposed to categorize every single piece of data in your environment, and because you didn’t, we couldn’t protect it.” I’m tired of hearing that. That’s the old DLP thing, you buy DLP service, and what it does is tell you that, “Oh, yeah. You have a couple of spreadsheets of Social Security numbers floating around, but yeah, we got nothing else for anything.” What I want to hear some more about because this is a big open problem, is this problem’s about to get a lot worse. In the pre-cloud era, data was expensive.
[David Spark] By the way, it’s not painful enough?
[Andy Ellis] It’s not painful enough, it’s about to get way worse. Because pre-cloud era, data sort of protected itself from movement because it was expensive for you to make more copies of it. So, if somebody was like, “Oh, hey. I’m a data scientist, and I want to run a bunch of queries against our massive database.” Everybody was like, “Yeah, have a nice day because we don’t trust you on production, and so you need to get a copy.” And the cost to get a copy of it was like, “Oh, you have to go through procurement, and we have to get a whole new data center, we got to do all…” blah-blah-blah.
Well, now we’re in the cloud era, so when someone says, “Oh, I want a copy,” it’s like go into AWS and push a button, and now you have a copy of all the data for that person to go run all their queries on, and they’ll try something new, and 9 times out of 10 it’ll be fine. But 1 out of 10 times, they’re just going to leave that data sitting there. And by the way, I think it’s actually the opposite ratio, but I don’t want anybody to argue with me that I was being too paranoid. I’ll say only 10% of the time, that data gets left somewhere and nobody’s keeping an eye on it. Now, writ large, that means it’s going to be everywhere.
[David Spark] All right. I’m throwing this one to you, Clyde. What have you heard enough about with data protection and what would you like to hear a lot more?
[Clyde Williamson] The thing that I’m really tired of hearing when it comes to data protection is this idea that the minimum required is enough. It kind of goes to the idea that information security is very dynamic. In so many industries, stuff is constantly changing, but it’s kind of constantly just evolving in a vertical direction. Data storage kind of evolves vertically, data analytics kind of evolve vertically. Cybersecurity, you kind of go up, and you go sideways, and you go in non-Euclidean angles, and you never know what’s going to come about next. And with data protection, it’s the same way. What was useful data protection 10 years ago doesn’t work today. And I’m so tired of somebody saying, “Well, I turned on the disc encryption, so everything should be fine.” I mean, the last time somebody lost information because someone stole a hard drive is so long ago that I don’t even remember when that kind of attack happened.
[David Spark] Going back to Andy’s comment – it’s all about stealing it on the cloud.
[Clyde Williamson] That’s right, that’s right. Or via the application that’s got that loaded in memory. Now, what I want to hear more about though, and I think that this goes to Andy’s point as well, is how do you protect the data so that it doesn’t matter where it goes or who makes copies of it? And this has been something that in information security has been talked about a long time as the data-centric security model.
The data-centric security model argues that you protect the actual individual data, right? So, you use an encryption or tokenization, some kind of pseudonymization that replaces the sensitive data with non-sensitive data so that if there is a copy of that data that’s sitting over there, it doesn’t matter because nobody’s got the keys to figure out what it means.
And that may not be every value inside that spreadsheet, right? It may only be the columns of PII data. But if you can control the actual data itself so that no matter where it sits, it’s protected by default – sort of that privacy by default that you hear about with GDPR – the data is always protected and then only unprotected specifically as needed for that user, it addresses a lot of the data protection problems that we see right now.
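As a rough illustration of that data-centric idea – emphatically not Protegrity’s implementation – one could pseudonymize only the direct-identifier columns with a keyed, deterministic token, leaving the analytic columns in the clear. The key, column list, and token scheme below are all assumptions for the sketch.

```python
# Column-level pseudonymization sketch: protect the sensitive values
# themselves, so any copy of the dataset stays protected by default.
import hmac
import hashlib

SECRET_KEY = b"demo-key-held-by-the-policy-service"  # illustrative key
PII_COLUMNS = {"name", "ssn"}                        # direct identifiers

def tokenize(value: str) -> str:
    """Deterministic keyed token: same input -> same token, so joins and
    group-bys still work, but reversal requires the key holder."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def protect(record: dict) -> dict:
    """Replace direct identifiers; leave analytic fields in the clear."""
    return {k: (tokenize(v) if k in PII_COLUMNS else v) for k, v in record.items()}

row = {"name": "Bob Jones", "ssn": "078-05-1120", "state": "OH", "age_band": "45-49"}
safe = protect(row)
print(safe["state"], safe["age_band"])   # non-sensitive fields survive for analytics
print(safe["name"] == row["name"])       # False: the identifier does not
```

Because the token is deterministic, a leaked copy of `safe` is still joinable for analytics yet reveals no identifiers without the key, which is the “protected by default” property Clyde describes.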
[David Spark] Let’s dig down, and I’m going to ask you specifically about Protegrity because I want to hear how you’re dealing with this. In this issue, are you, I don’t know, creating wrappers around data? Like, what is it that you’re doing to essentially tell this data protection story?
[Clyde Williamson] Sure. So, Protegrity’s data protection story started out with something pretty basic, which was the idea that we wanted to be able to have a centrally managed encryption policy that you could deploy across whatever platforms that organization had. Back then, that was mainframe, AS/400, DB2, but you could deploy the same keys everywhere, so that data that was encrypted on platform A, when it migrated to platform B, was still secure and still able to be accessed by the right people.
And since then, that’s grown – now we’ve got cloud native solutions and so forth – but all of it is wrapped around this idea of getting as close as possible to where the data is first captured or ingested, protecting it there using some kind of pseudonymization like AES encryption or tokenization. And then that encrypted or tokenized value floats through the rest of the enterprise, and it’s only at points where that value needs to be recovered that you integrate with Protegrity via a system call or whatever that will allow you to de-tokenize the data or decrypt the data. And we’ve done that with everything from integrating with applications…
[David Spark] Can you give me a real world example of this? Say I’m [Inaudible 00:24:52] integrating with applications, I’m a user using some type of SaaS application.
[Clyde Williamson] Sure, I’ll give you a real simple example. Let’s say that we have a website, commercial website that’s got some kind of ecommerce and they’re going to accept credit cards. You could put a data security gateway in front of that website, and it watches the traffic that’s flowing from the customer across. And when it gets to that form where they put in their credit card number, the gateway will actually capture the packet, transform the credit card number inside the packet into a token so that what actually hits the application, the website and the application, is tokenized data. And then that data remains tokenized as it goes back into wherever it’s getting stored, as it goes through whatever authorization process. And when it’s going to be called out to the bank for authorization and settlement, at that point, we call Protegrity again, it’s decrypted as it’s being sent out to that bank, encrypted on whatever channel they need. So that way the entire process from end to end, the data itself is protected.
Now that’s a credit card example, that’s pretty simple. But we do the same thing with some of the largest banks, with governments where we’re dealing with citizen data, with healthcare. We’ve got some major healthcare providers where we’re doing the same kind of stuff, whether that’s integrating with something at the cloud level or integrating with something at a database level, the goal is to really protect the data as soon as possible, leave it in that protected state everywhere in the enterprise, and then have specific interfaces that you can use to access that data as needed.
[David Spark] Correct me if I’m wrong. Is data being protected in use too? Because you were saying, “We encrypt it at one point so people can use it.” Because sometimes the data, like during queries, has to become visible at some point. So, is there a period of time in use that it is still encrypted? What’s going on there?
[Clyde Williamson] Yeah. What we’ve found is a lot of data can be used without being unprotected. For example, machine learning analytics. Machine learning analytics do not care if your first name is Bob or your last name is Jones, they don’t care if your birthday was July 1st, 1971. They care that you’re a guy and you’re somewhere in your late 40s, and you live in this general area of Ohio. By tokenizing or encrypting what we call the direct identifiers, anything that can specifically identify a person, and then by maybe partially tokenizing some of the indirect identifiers like post code, maybe only leaving the first three digits of the real post code, for example. Or the last four digits of a Social Security number as part of the encrypted value. Then you can actually use the protected data to do whatever it is you need to do for that use case without exposing all of the sensitive data.
We’ve got a solution that we designed for a government where they not only needed that sort of protection at the field level, so this type of data needs to be protected, but it needed to be protected so that depending on the sensitivity of the citizen, are you a member of the parliament or maybe you’re a celebrity of some kind. And so they needed separate protections based on the type of sensitive data they needed to protect and who that sensitive data belonged to. And in many of those cases, we were able to do all of the analytics work they needed to do without unprotecting the data at all.
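A minimal sketch of the partial protection of indirect identifiers Clyde mentions – keeping the first three digits of a post code and the last four of a Social Security number in the clear – might look like the following. The masking rules and function names are illustrative, not a product API.

```python
# Partial masking sketch: retain just enough of an indirect identifier
# for analytics, hide the rest.

def mask_postcode(postcode: str) -> str:
    """Keep the first three digits (coarse region), mask the remainder."""
    return postcode[:3] + "*" * (len(postcode) - 3)

def mask_ssn(ssn: str) -> str:
    """Keep only the last four digits, as on a printed statement."""
    digits = ssn.replace("-", "")
    return "***-**-" + digits[-4:]

print(mask_postcode("43215"))   # 432**
print(mask_ssn("078-05-1120"))  # ***-**-1120
```

Real deployments would fold the retained digits into a reversible token under policy control rather than a one-way mask, but the analytic trade-off is the same: coarse attributes stay usable while the full identifier does not.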
Ok. What’s the risk?
[David Spark] Boards understand the importance of cybersecurity but do they “understand the economic drivers and impact of cyber risk”? This awareness is critical according to a report “Principles for Board Governance and Cyber Risk” by the World Economic Forum. Corporations are already thinking about risk but often struggle to align it with cyber hygiene. In a related article by Joshua Jaffe of Dell and Nisha Almoula of PwC, they recommended creating a cyber risk balance sheet – I’d never heard this before, sounds interesting – where you quantify the likelihood and impact of a certain cyber incident, and you align that with the risk mitigation and its associated cost.
So, Andy, I’m going to start with you. This sounds interesting, but it also sounds like, “Really? Can you make this?” Can you give some specifics of how you could actually create a cyber risk balance sheet? And it seems it would be an ultimate decision-making tool, but how do you get some validity around your risk predictions and mitigations?
[Andy Ellis] Well, I think you just asked the right question, which is this is not going to be valid. The challenge is…
[David Spark] It sounds awesome though. Let’s just start with that.
[Andy Ellis] This is just annualized loss exposure and return on security investment. There is nothing new here. When I got my CISSP 20 years ago, this was on the CISSP exam, just used different words for it. So, first of all, let’s just start from this isn’t novel, it’s just new words that unfortunately will be very eye-catching and there’s going to be a lot of boards that are going to say, “Whoo, cyber risk balance sheet, I want to see that.”
So, here’s the problem. There’s two types of risks that you deal with. One is actuarial risk. Things that happen on a regular basis and where the past is a good predictor of the future. Shoplifting, for instance. Fantastic predictor like, “Oh, look. Every day we lose $50,000 of goods across our entire chain. Pretty sure tomorrow we’re losing $50,000 worth of goods.” And then when the DA’s office in San Francisco says, “Oh, by the way. We’re not going to prosecute shoplifters anymore,” you should throw your actuarial model out the door because it isn’t a good predictor anymore. The world has changed, you have to build a new model. But it’s still going to be actuarial data, you can quickly train that up, and that’s actually really… You could do this for those things, and that’s why people get attracted to this because they’re like, “Oh, we do this all the time in the insurance world.”
Well, the problem is there’s some things that you really don’t get insured for because they’re just gambles. And you can get insurance, but it is somebody who’s literally gambling on bad things happening to you. And those are the outlier risks, things that don’t happen on a predictable cycle, that happen very, very infrequently, once in a hundred years. We talk about the hundred-year floods, except then it turns out the hundred-year floods were actually happening every 60 years, and so all the math was wrong that everybody had done.
So, the idea that you could put a number that said like, “Here’s our annualized loss expectancy for this specific architectural defect in our enterprise,” it’s basically made-up data. You’re going to pick some numbers at random, you’re going to put them down, you’re going to say, “Oh, this feels good to me,” and you’re going to hope nobody argues with you. It’s like CVSS on steroids.
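[Editor’s note] The CISSP-era arithmetic Andy is referring to is annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence) and return on security investment. The dollar figures below are illustrative only; as Andy argues, for non-actuarial risks the inputs are usually guesses.

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    # Annualized Loss Expectancy: expected loss per incident times
    # expected incidents per year.
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    # Return on Security Investment: risk reduction net of the control's cost,
    # expressed as a fraction of that cost.
    return (ale_before - ale_after - control_cost) / control_cost

# Illustrative inputs: a $175,000 loss expected once every five years,
# reduced by a $20,000/year control to a $60,000 loss at the same rate.
before = ale(175_000, 0.2)   # $35,000/year
after = ale(60_000, 0.2)     # $12,000/year
print(before, after, rosi(before, after, 20_000))
```

This is exactly the “$35,000 risk, spend $20,000, end up at $12,000” granularity Andy questions later in the segment: the formula is trivial, and the hard part, the inputs, is where the made-up numbers hide.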
[David Spark] I think that’s the key line – you got to hope nobody argues with you. If you come off confident, you make a good-looking spreadsheet, and no one argues with you…
[Andy Ellis] I am using FAIR which is this industry standard way of estimating risk in which we put in 80 variables that we declare from low to high, and each one has some random number associated with it, you multiply them all together like it’s some giant [Inaudible 00:32:18] problem, and out of the end you get like, “Oh, this is a $75 million risk.”
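[Editor’s note] The estimation pattern Andy is parodying, multiplying many loosely bounded factors together, can be shown with a tiny Monte Carlo sketch. The factor names and ranges below are invented for illustration and are not the actual FAIR taxonomy or anyone’s real model; the point is how wide the output spread gets.

```python
import random

random.seed(0)  # reproducible illustration

# Made-up factors, each estimated only as a (low, high) range.
factors = {
    "threat_event_frequency": (0.1, 2.0),    # events per year
    "vulnerability": (0.05, 0.6),            # probability an attempt succeeds
    "loss_per_event": (50_000, 2_000_000),   # dollars per successful event
}

def one_trial() -> float:
    # Draw each factor uniformly from its range and multiply them together.
    loss = 1.0
    for low, high in factors.values():
        loss *= random.uniform(low, high)
    return loss

trials = sorted(one_trial() for _ in range(10_000))
p10, p50, p90 = trials[1_000], trials[5_000], trials[9_000]
print(f"10th pct: ${p10:,.0f}  median: ${p50:,.0f}  90th pct: ${p90:,.0f}")
```

Even with only three vaguely estimated factors, the 90th-percentile loss is many times the 10th-percentile loss, which is why a single headline number like “a $75 million risk” conceals far more than it reveals.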
[David Spark] I also recommend using the colors green, yellow, and red.
[Andy Ellis] No! No! I hate those colors. Stop that! Yellow, orange, and red.
[David Spark] Clyde, what do you think? Can this actually be pulled off?
[Clyde Williamson] Well…
[David Spark] You don’t have to listen to Andy on this.
[Clyde Williamson] No, no, no.
[David Spark] Let me see if you can pull this off.
[Clyde Williamson] So as I was preparing for this, and you sent me the link to that article, I read through that. And the first thing that came to my mind is when I was doing my CISSP 20 years ago, this was on the test.
[Clyde Williamson] I mean, literally. At one of the previous organizations I worked for where I was the security architect for this big retail organization, we had standard formatted spreadsheets that every time a new application was going to come up, I would send somebody out there and they would do a risk assessment. What are the possible vulnerabilities, what are the attack vectors, what are the controls that we have in place to stop that from happening, and we would do the math with it. And it’s a great thing to do if you’re the IT security people figuring out what IT security controls you need to put on that system.
But if you’re at the board level, I really think that you’ve got a lot more areas of concern to be worried about. I mean, what’s the risk related to consumers beginning to ask questions about privacy? If we think about the recent Apple ad on data privacy where they’re talking directly to consumers about privacy, the changes in our laws here in the United States mean that certain apps that maybe were healthcare-related now may have some serious privacy implications. Or even the kind of apps that we used with COVID for contact tracing. All of those have massive implications, and I think that that’s something that a board should be more focused on. Rather than what are the specific risks that are going on in my IT security space, it should really be about what are the existential threats to my whole business. Let your CISO figure out which of those threats needs to be addressed in his area, that’s his expertise. But at the board level, I mean, trying to work things at this sort of decision level I feel is an effort in futility.
[Andy Ellis] And I think that the idea that you have a risk and you talk about a control is fantastic, right? Your CISO should absolutely, she should be able to get up in front of the board and say, “Here is the narrative of a risk.” It’s not about, “There’s a $75 million risk.” It’s, “Here is ransomware, here’s what it looks like, here’s the sorts of things that it exploits, and here’s the problem it causes for companies,” and it’s a narrative, and then, “Here’s the set of controls that we have that protect us against it.” Because Clyde’s right. You should be talking about existential risks with the board. If you’re up saying, “We have $35,000 risk and we are spending $20,000 a year to reduce it down to a $12,000 risk.” Like, really? First of all, we all know those numbers are made up, but why is the board seeing that level of weird granularity?
[David Spark] And let’s close on weird granularity. Why not?
[David Spark] Thank you very much, Andy. Thank you very much, Clyde. This was a great conversation. I loved it. And by the way, Clyde, talk about you getting in the weeds for us, I love that, that was awesome. Thank you. I greatly appreciate it. All right. Let’s wrap this show up. Clyde, I let you have the last word so hold tight. The question I ask everybody is are you hiring, so make sure you have an answer to that question as well. And I want to thank your company Protegrity for sponsoring us, being a brand-new sponsor with the CISO Series. We greatly appreciate it. Welcome onboard! We love it! Andy, any last thoughts?
[Andy Ellis] Well, in eight months, my book will be on the bookshelves, so I’m just starting your plugs early.
[David Spark] Just get those plugs in, early and often. And you’re willing to sell more than one copy to an individual, is that correct?
[Andy Ellis] Oh, absolutely. I expect Clyde’s going to buy like 50 of them.
[Clyde Williamson] Fifty, got it.
[David Spark] You got to get at least one book for each eye, right?
[Andy Ellis] Well, one book for each eye, and one book for each colleague that he might run into because it should be in the office, and he can be like, “Oh, yeah. I read that. That’s Andy’s book.”
[David Spark] Many of the intelligent things that Andy has said on the show will appear or are appearing in that book, correct?
[Andy Ellis] That is correct. I mean, all three of them.
[David Spark] All three intelligent things he’s said. It’s a children’s book, isn’t it?
[Andy Ellis] Oh, no. Well, yes, it’s a leadership book.
[David Spark] All right. Clyde, I throw it to you. My first question – are you hiring, yes?
[Clyde Williamson] Yeah, we just had another internal reshaping in R&D where we’ve taken a team that was doing AI research and it’s now starting to productize some really cool new privacy technology, so we’re constantly looking for bright new minds to play in the data security and privacy space.
[David Spark] So, if you want to be a bright new mind and potentially play Dungeons and Dragons with Clyde, Protegrity’s the place to go. All right. Any final pitch for our audience? By the way, protegrity.com, I have that correct? You’re a dot-com?
[Clyde Williamson] That’s correct.
[David Spark] Pitch for audience? Anything you want to say? Any offer, how to get in contact with you, let’s hear it all.
[Clyde Williamson] Sure. So Protegrity is focused on helping organizations solve their business problems while protecting their data. And we’re doing that with all kinds of industries, with governments, like I said earlier in the show, and we can do it with you as well. We’re able to massively reduce a lot of cost and I think that it’s really just a great way to solve the cybersecurity and data privacy issue. So, that’s what we do, and we’ve done it for a long time, and we’ve got some really great people at Protegrity who are focused on that every day.
[David Spark] Awesome to hear. Well, we have links to you, and to Protegrity, on the blog post for this very episode. Thank you very much, Clyde. Thank you very much, Protegrity. Thank you very much, Andy Ellis as well. Thank you, audience. Thank-yous go all the way around. Again, it’s not a pat thing I say when I say I greatly appreciate your contributions and listening to the show. It is literally the lifeblood of the show. We really love it. And so the more that you give us, the more we want to recognize you, and the more that we want to share back with you. Agreed, Andy?
[Andy Ellis] Absolutely.
[David Spark] That’s what I like to hear. Thank you, everybody, for contributing and listening to the CISO Series Podcast.
[Voiceover] That wraps up another episode. If you haven’t subscribed to the podcast, please do. We have lots more shows on our website, CISOSeries.com. Please join us on Fridays for our live shows – Super Cyber Friday, our Virtual Meetup, and Cybersecurity Headlines Week in Review. This show thrives on your input. Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at David@CISOSeries.com. Thank you for listening to the CISO Series Podcast.