Is It a Promotion or a Red Flag Telling You To Get Out?

A young woman is killing it in her first cybersecurity job out of college. Management is so thrilled with her that they want to give her a promotion. The problem is that the promotion reveals a lot of inner workings that don’t speak well of the company’s culture.

This week’s episode is hosted by me, David Spark (@dspark), producer of CISO Series, and Mike Johnson. Our guest is Davi Ottenheimer (@daviottenheimer), VP of trust and digital ethics, Inrupt.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, Code42

As the Insider Risk Management leader, Code42 helps security professionals protect corporate data and reduce insider risk while fostering an open and collaborative culture for employees. For security practitioners, it means speed to detection and response. For companies, it means a collaborative workforce that is productive and a business that is secure. Visit to learn more.

Full transcript

[Voiceover] Biggest mistake I ever made in security. Go!

[Davi Ottenheimer] I didn’t know the line, and I let it fall. You got to know where your line is, and you got to have integrity. So, when a vice president, or a CEO, or a CTO says, “We’re going to walk this back. We’re not going to make this change,” and you’re in security, it’s better to die fighting for that thing that you believe in than to let them walk over you.

[Voiceover] It’s time to begin the CISO Series Podcast.

[David Spark] Welcome to the CISO Series Podcast. My name is David Spark. I am the producer of the CISO Series. Joining me for this very episode, my cohost, Mike Johnson. Mike, what do you normally sound like?

[Mike Johnson] I normally sound like someone who’s complaining about the fact that Friday is still not over yet. That’s what I’m sounding like right now.

[David Spark] Oh, so you feel like… Because we’re recording late on a Friday.

[Mike Johnson] Yes.

[David Spark] You feel like after this you’re going to be doing a lot more work.

[Mike Johnson] I am. I have work waiting for me, so that’s exciting. I can’t wait to get back to it.

[David Spark] We’re available at, where we have a bevy of other programs, not just the CISO Series Podcast. I want to mention our sponsor for today, who has been a spectacular sponsor of the CISO Series. That would be Code42 – reimagined data protection for insider risk. That is their bailiwick – insider risk management and how they tackle that. If you’re interested in that, which, heck, why shouldn’t you be, we’ll be talking about them later in the show. But first, Mike, this episode is going to drop at the end of April, and I will have just been in Dallas, Texas for a pinball festival.

[Mike Johnson] Oh, nice.

[David Spark] Yes. I will be…

[Mike Johnson] How many machines are going to be there?

[David Spark] Hundreds. Hundreds of machines. I don’t know. I’ll let you know. I have no idea. I signed up. I’m going to Texas just for a pinball festival, and I will meet up with my ex-cohost, Allan Alford, while I’m out there because it’s going to be in Frisco, Texas.

[Mike Johnson] Awesome.

[David Spark] Where he lives. So, it’s literally going to be in his backyard.

[Mike Johnson] Well, he has no excuse.

[David Spark] No, he has no excuse not to show up.

[Mike Johnson] Is he joining you at the pinball thing as well?

[David Spark] I’m sure we’ll play pinball together. I’m sure it’ll happen.

[Mike Johnson] There you go. That’s awesome. Pinball fest.

[David Spark] Pinball fest. There should be more of these, I hope.

[Mike Johnson] Yes.

[David Spark] Anyways, that’s my little pointless banter. We don’t want to have any more of it because I always get annoyed when other podcasts have too long banter. That’s it. We’re done. We’re going to get into the show now.

[Mike Johnson] Done. No more banter.

[David Spark] No. No more banter. We’re done with the banter.

[Mike Johnson] No, we can’t have any more banter.

[David Spark] You’ve gone on too long, Mike.

[Mike Johnson] [Laughs]

[David Spark] I want to introduce our guest, who I adore having on the show. He’s one of my favorite guests, mostly because he introduces things that none of our other guests talk about. And he is not afraid to step on the toes of people who may get upset about the things he has to say. So, if you have not heard our guest before, you’re in for a treat. He is the VP of trust and digital ethics for Inrupt. It is Davi Ottenheimer. Davi, thank you so much for joining us.

[Davi Ottenheimer] Great to be here. Thanks for having me again.

How scared should we be?


[David Spark] A student has serious privacy concerns after their college said that for school internet access, “All access is being monitored and anonymously collected.” Now, the student wants to know what this really means. Does it mean everything over HTTPS? Does it also include usernames and passwords? If they said anonymously collected, my guess is not. But also the cybersecurity subreddit where this was posted said that HTTPS is encrypted, and chances are they’re not pulling that kind of data. Now, the student is mostly concerned about what would happen to their personal data should the school get breached. So, Mike, what could possibly happen? What should this student be fearing?

[Mike Johnson] Well, I think first of all the thing to keep in mind is the student is using someone else’s network. They have to abide by their rules. These rules aren’t that different from most others. You can argue as to whether or not they’re good rules, but the reality is it’s not abnormal. That school has the right to protect their own network, and these are probably just canned statements in order to do that. I do think as long as the school isn’t doing something very unnatural, like pushing certificates or pushing configurations to the students’ machines, it is unlikely that they’re able to see anything. So, they’re getting some information around what sites someone is visiting. They can see that. They can’t see passwords. They can’t see usernames, as long as the student is making sure that they’re visiting the HTTPS websites. That said, what they are able to gather – all of that intelligence, knowledge, stuff about what they’re visiting – that’s information that ISPs collect a lot and use almost against you. They’re selling it off to ad companies who can then turn around and target stuff, and what the student should expect is going to happen is suddenly their ad targeting is going to get a whole lot better. I don’t think anything is going to get stolen though.

[David Spark] All right. Davi, Mike has a little bit glass half full attitude towards it. Should this student be concerned by a statement like this? Do you think it’s a canned statement? What do you think?

[Davi Ottenheimer] This is actually a complicated problem because you have schools essentially set up as a protected learning environment. It’s a sandbox where you can do anything, and you have the confusing aspect of people researching controversial subjects. So, if I’m researching child abuse, then I’m scanning child abuse sites. Is that implicating me as a child abuser sort of thing? I’ve actually dealt with this as an investigator, and I got into a sticky situation where the school would not… We had a duty of care, a duty to warn. The school would not release us to warn the authorities even though we were required as professionals to do so, because they consider themselves a separate authoritative entity from the environment that they were part of. In other words, the police should step into this, or even more the military in some cases, and the school would say, “No, we are in a place where they are not allowed. We have our own forms of enforcement.” So, I think it gets into… It’s actually better for the students to have… And I’ve run university security. I’ve been the head of security for a number of different institutions, and it’s better for the school to have the ability to see into what the students are doing because they have a better sense of context. They’ll have a relative sense of danger and risk. It’s worse if you hand it over to authorities outside, whether it be telcos, or the police, or the government, because they’re going to have a much more absolute sense of what’s right and wrong, not at all taking into account what the student is studying, or what the course work is, or what they were asked to do by others. So, students should sort of back off the idea that people cannot see into their data and think about how good it is that someone can actually work with them on seeing their data to help them protect themselves, because they have a relationship with the institution that they’re in.

[David Spark] Mike, Davi’s glass is even more full than yours.

[Mike Johnson] We’re just full of full glasses, I guess.

What we’ve got here is failure to communicate.


[David Spark] “Can we break from the internet oligarchs who appear to be consuming, selling, and using too much of our data?” noted J. Paul Duplantis in an article on Medium. Tim Berners-Lee’s company, Inrupt, that’s where Davi works, has been working on an effort to decentralize information and allow individuals to control pods of their personal data that they choose to release to organizations or not. Now, Google, Apple, and Mozilla filed objections with the World Wide Web Consortium, that is the internet standards body, on specifications for decentralized identifiers, or DIDs. Is it pronounced that way or DIDs? How do you pronounce them, Davi?

[Davi Ottenheimer] DIDs.

[David Spark] DIDs. I got it right. Okay.

[Crosstalk 00:08:13]

[Davi Ottenheimer] …say DID. [Laughs]

[David Spark] All right, so most others approved, and it appears that those who disapprove don’t even understand the specifications, said Drummond in an article on Evernym. So, Davi, what are these DIDs, and what are the concerns? What would change if these actually become standard? Because I know that’s what you’re driving towards.

[Davi Ottenheimer] Yeah, decentralization is actually a really interesting topic because it gets into political science, and I think every computer science person, any engineer, should have to study political science for at least one semester to understand the dynamics of power. Decentralization means moving away from a central authority, but it doesn’t mean that you don’t have any centralization at all. In that sense, a DID is meant to say, “We don’t have to be part of a centralized system. We don’t have to roll up into something. We can establish our own ledger, if you will.” So, if I took a spreadsheet, for example, and I put a bunch of IDs in it, and I said, “This is our master repository of our identities and how we’ve managed to do exchanges of some kind,” that would be a way of decentralizing identity exchanges because you don’t have to roll that spreadsheet up to some other authority that reviews it, like the US government, or your state government, or the UN, or whatever people are afraid of. So, DIDs are essentially a way of taking decentralization and using it for the expression of what your identity is and proving it in a much smaller sense because it’s an exchange between people.
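[Editor’s note: Davi’s spreadsheet analogy maps onto the W3C DID data model, where a DID is just a URI that resolves to a small “DID document” listing keys the subject controls. A minimal sketch of that shape, assuming the spec’s illustrative `did:example` method and a placeholder key value, neither of which is a real registered identifier:

```python
import json

# A DID is a URI: the scheme "did", a method name, and a method-specific id.
# "example" is the placeholder method used in the W3C spec's own examples.
did = "did:example:123456789abcdefghi"

# The DID resolves to a DID document: a small, self-describing record of
# public keys and how they may be used -- no central registry required.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [
        {
            "id": did + "#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": did,  # the subject controls its own keys
            "publicKeyMultibase": "zPlaceholderPublicKeyValue",  # illustrative only
        }
    ],
    # Which key proves "this is me" when authenticating:
    "authentication": [did + "#key-1"],
}

print(json.dumps(did_document, indent=2))
```

The point of the structure is Davi’s point: anyone can mint and hold such a record, and verification is an exchange between the two parties rather than a lookup against one central authority.]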

[David Spark] All right, I’m going to come back to you on this. Mike, how much exposure have you had to decentralized identities? Because, by the way, in general, managing identity… And, again, not from the sort of security perspective but more from the privacy perspective, has been an ongoing issue for which there have been many, many suggestions, of which many have kind of fallen by the wayside. So, where are you in your acceptance, and interest, and “let’s go team” attitude towards decentralized identifiers?

[Mike Johnson] I’m at the I’m-trying-to-learn-more stage. In one respect, it’s not by intention, but we have the concept of decentralized identifiers today: every website out there, you create an identifier when you log in, unless it’s using login with Google, or login with Facebook, or what have you. So, on the one side we have that today just by default. And so what I’m trying to understand on the flipside is there’s a lot of convenience and, to some extent, security with the centralized identification that we see today. When you’re using login with Google, as an example, you’re able to log in in a normal way as that end user. You go to a new website, you’re not having to log in again. You’re not having to re-register, and so on, and so on, and so on. So, there is a convenience factor of centralized authentication. And so what I’m trying to understand… I’m completely new to this. How are DIDs trying to, I guess, get the best of both worlds and still allow for convenience? But at the same time, I don’t want Google knowing every time I log into every website I go to, and so what are the advantages and disadvantages of DIDs? So, I’m in the learning stage.

[David Spark] Davi?

[Davi Ottenheimer] Well, I feel like there’s so much overthinking in some of these topics. TCP/IP might be a good example of way, way, way back, when you might remember IBM and AT&T were like, “We’re going to do X.25.” And MCI was like, “But TCP/IP is so much better.” And there was a lot of hand-wringing about how we could allow people to have something that was so open as a standard to just… Everybody would run their own TCP/IP stack, and every vendor would have their own interpretation of protocols. The internet runs better when it’s decentralized in the sense that there’s much more innovation and much more market. Way more market for ideas to express themselves in a relative sense for their own smaller markets. And when we assign IPs, for example, we have a centralized registry of all the IPs, but we also have NATing, which has been for some people a pain but for other people a reality. They always live in a NAT world.

They never get a real IP. And so how to express one’s identity, if you look at TCP/IP as a form of identity, has to always have some centralization where it’s really good for you and some decentralization where it’s really good for you. So, the point is to have choice. In a world without DID or some similar decentralization effort like Solid, or what I’m working on, you don’t get choice in the sense that Facebook assigns you an identity because it has a relationship with the US government, and that’s your identity run by Facebook as an extension of an entity that you’d think you would want to get away from. You can’t, because it’s part of the government that you are a citizen of. And we get into these issues of too-tight couplings that don’t have the kind of choice that reflects the human condition. Or an even more extreme example – I wash up on a beach, and I’m a refugee. I have no identity, so I need a way of establishing who I am in a very relative sense for the refugee camp that I’ve just entered. I need a way of proving who I am so every day I can get my meal. And then eventually I’ll be assigned a citizenship, or a passport, or membership in some society that accepts me. But in the meantime, I just have to prove who I am in this relative sense. And there are all these life conditions where immigrants and refugees move that you have to sort of represent in the digital world, and that’s what DID is trying to do. And verifiable credentials are probably a better way of looking at this than even DID.

Sponsor – Code42


[Steve Prentice] In the infosec community there is a lot of conversation around insider risk versus insider threat. Michelle Killian, director of information security at Code42, says it is important to stay clear on the differences between these two concepts in order to shape the appropriate protections and responses around the key priority, which is data.

[Michelle Killian] Insider risk occurs when any data exposure, regardless of intent, jeopardizes the well-being of the organization, its employees, customers, or partners. And when we’re looking at insider risk, we consider the impact of the data exposure, whether it’s harmful or not; the intent, whether it was accidental or intentional; and then whether it was a willful violation of policy or not – AKA, could the user have known better.

[Steve Prentice] The choice of words becomes extremely important. For example…

[Michelle Killian] Most data breaches are due to negligence and not maliciousness. So, when we think about threat versus risk, we’re really missing the bulk of what we need to care about. Focus needs to be on the data as opposed to the user, which is a different mindset.

[Steve Prentice] Then when it comes to external threats, it’s mostly about bad actors, but it’s still the data that comes first.

[Michelle Killian] We need to focus more on the data. Because at the end of the day, that’s what we’re most concerned about. We’re not looking to paint good guys or bad guys within our organization. Intent isn’t always bad when data gets exfiltrated, so it’s really about protecting the data and making that the heart of the program around insider risk.

[Steve Prentice] For more information visit

It’s time to play, “What’s worse?”


[David Spark] All right. Mike, we’ve got another great “what’s worse” scenario from Jason Dance, who I was saying…

[Mike Johnson] Oh, Jason.

[David Spark] …he and Nir Rothenberg have been the heaviest suppliers of “what’s worse” scenarios.

[Mike Johnson] [Laughs] Of difficult ones, too.

[David Spark] Yes, they supply difficult ones. So, I try to go for the difficult one. By the way, sometimes I think they’re difficult, and you’re like, “Oh my God, this is easy.” And I’m like, “All right.”

[Mike Johnson] [Laughs] We’ll see which path this one goes today.

[David Spark] All right, this comes from Jason Dance, who is from Greenwich Associates. And he asked, “What’s worse? You have an open Amazon S3 bucket that is empty, is world readable, and anyone in the company can write to it.” Okay, that’s scenario number one. Scenario number two, “You have an S3 bucket that allows only company people to read and write. There are no password complexity requirements configured, and the minimum length is set to nine characters for the password.” Which one is worse?

[Mike Johnson] Okay, so on the one hand you’ve got strong authentication but weak authorization. So, you’re basically saying, “We know who can write to this, and we trust the way that they’re getting in. But we don’t know what they’re doing, and anyone in the world can read it if something happens to show up there.”

[David Spark] By the way, the first one is an open bucket that is empty, too, I should mention. The second is not necessarily empty.

[Mike Johnson] Right. And I assume the risk that we’re trying to look at there is the fact that something could be written there by accident. That seems to be the risk of that world. And if someone does, it’s automatically exposed. The other one you have weak authentication but strong authorization. You have people who are not necessarily asserting their identity in a strong way. But you have this concept of only the people who are authorized can see it. So, if you’re able to assert your identity, you get access. So, that’s kind of how I break these two down is…

[David Spark] It’s a little complex here.

[Mike Johnson] Yeah. And it’s such weird scenarios, and I was trying to yammer through that to buy myself time for the answer.

[David Spark] I know. This is how it works. I know exactly how this works.

[Davi Ottenheimer] Can I simplify?

[David Spark] Yeah, go ahead.


[David Spark] Davi, simplify.

[Davi Ottenheimer] We have one that has no presumption of any value of data in it, so you can write a script that just erases it every five seconds, and that one is way easier and better than one where you have people putting things in that might be of value that you would be in trouble if you tried to do something with it. So, I would just wipe the one that was public all the time because there’s no guarantee that that data would ever be allowed or useful. And it reduces the risk of somebody putting something in there that would actually get you in trouble.

[Mike Johnson] Well, there we go. This one is easy.


[Davi Ottenheimer] For me.


[Davi Ottenheimer] I work on this problem constantly. This is access grants and public access. So, if I had a public web server, and I put public…

[Crosstalk 00:18:18]

[David Spark] See, Davi, you should have waited until Mike answered, and then you could have done that and make Mike look foolish. Now Mike is like, “Yeah, whatever Davi just said.”


[Mike Johnson] That’s my answer to this is I call up Davi, and I ask him the answer. And this one is solved. Clearly. Clearly the second one is the worst.

[David Spark] So, I’m sorry. The second one is the worst because the first one is empty? You could just wipe it?

[Mike Johnson] Yes. Yeah, just wipe it. We don’t care. It’s valueless, whatever.

[David Spark] Whatever. There you go. All right, an easy decision that was made thanks to Davi’s…

[Mike Johnson] Thank you, Davi. Thank you for answering and rescuing me from the squirming that people were having to listen to.


[Davi Ottenheimer] I’m sorry I couldn’t wait.

[David Spark] Davi to the rescue.

[Mike Johnson] I now have a new tactic on this, David. I just hem and haw until our guest eventually…

[David Spark] …is just frustrated with you.

[Mike Johnson] They’re either frustrated, or they’re uncomfortable watching me squirm, and they answer. So, I have to keep that one in mind.

[David Spark] Do you know the old radio comedians Bob and Ray?

[Mike Johnson] No.

[David Spark] Davi knows. Their most famous routine, or one of their most famous routines, was Slow Talkers of America. And the whole schtick is the reporter is interviewing someone who’s from the Slow Talkers of America, and he’s responding incredibly slowly the whole time. And the reporter is trying to fill in all the blanks because he’s frustrated with the fact that he’s not getting to it. I get the sense that that was the game you were playing. Even though not talking slowly, you’re just like, “Maybe it’s this. Maybe it’s this. [Inaudible 00:19:42] with this.” Davi is like, “Oh, geez. This guy, he’s not getting to the point.”


[Davi Ottenheimer] I can’t take it. Please stop.

What’s the best time to do this?


[David Spark] With the push of GDPR, more regulations, and egregious abuses of user privacy, there are now arguably new rules for data privacy. Over on Harvard Business Review, Hossein Rahnama and Alex “Sandy” Pentland said that as organizations are collecting data, they must adhere to the following: trust over transactions, so get consent first; insight over identity, so glean information from the data without harvesting personally identifiable markers; and flows over silos, so location and behavior of data should be mutually understandable, with no conflicting or duplicate data sources. So, I’m going to start with you, Davi. What are organizations giving up to adhere to this behavior, and how can it actually benefit them, especially if they get out in front of it before their competition?

[Davi Ottenheimer] I feel like this misses the point entirely. I feel like what we need is real-time consent, and that comes from personal data stores where people can, in real time, say whether they want something to be accessed or not. They can also work on the integrity of the data by saying, “Hey, whatever you’re finding isn’t an authoritative source. This is the authoritative source.” And so you get less of this how-do-we-regulate-data where it’s been pulled into some place that you don’t even know, and it’s used for far longer than you want it to be used. Duplicate data sources and things like that would all go away if we allowed people to put them into a personal data store where you knew there was one authoritative store. That’s sort of post-GDPR, but that’s what we need – to move away from that model.

[David Spark] Davi, play this out for me. What does this look like where I’m improving things in real time? Because I’m imagining people just dealing with dialogue boxes constantly and not actually understanding the thing that they’re accepting or not accepting. How does this manifest itself?

[Davi Ottenheimer] Well, that’s the W3C protocol, Solid, or the specification that actually says if you can put the data in a place…in a machine [Inaudible 00:21:49] format you can be asked for access. Then you can consent to that access. And every time somebody wants access, they would ask you for it, and you could set a timer. Like, “You get this amount of time to access this data.” Or you can set a cookie or a token type modifier. In other words until I pull this thing, you get access. But as long as you have it, you have access to the data. That’s the way the consent could work in a really time based way that’s more relevant to your life where you say, “I no longer want access to be given or granted.” That’s very different than what happens now where you actually… What we’ve been calling it lately is the graveyard of past consents where you make a decision, and then it moves off into some data store. And people are accessing it years after you knew it was even there, and you don’t know they’re accessing it.

And that’s sort of what this study or what these people are getting towards in regulations. We shouldn’t be regulating how people take your data, and move it somewhere else, and do whatever they want with it, trying to keep them honest. Because in a way, that to me is the digital twin, which is a form of slavery. And this is where I get people often upset, or they get lost. Because to me, the digital twin concept is a form of dehumanization that shouldn’t occur. You should always be augmenting using technology, not being split off and used in ways that you have no control over. That’s the fundamental aspect of the exploitation that we see in the current platforms is that they’re doing things to you, but they’re saying it doesn’t really affect you or that they don’t care because it’s this other version of you, and you don’t even know about it. That shouldn’t happen. So, real time consent is you have the data in a place that you can actually say in real time, “Oh, I got this request. I’m going to accept,” or, “I’m going to deny.”
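[Editor’s note: the timer-based grant Davi describes can be sketched as a toy data structure. This is an illustration of the idea only, not Solid’s actual access-grant API; the class and field names are invented for the example:

```python
from datetime import datetime, timedelta, timezone

class ConsentGrant:
    """A toy time-limited access grant: the data owner issues it, and it
    silently expires -- or is revoked in real time -- instead of living
    forever in a 'graveyard of past consents'."""

    def __init__(self, subject, resource, ttl_seconds):
        self.subject = subject    # who gets access
        self.resource = resource  # what they may access
        self.revoked = False
        self.expires_at = datetime.now(timezone.utc) + timedelta(seconds=ttl_seconds)

    def revoke(self):
        # Real-time withdrawal of consent by the data owner.
        self.revoked = True

    def is_valid(self):
        # Access holds only while unrevoked and before the timer runs out.
        return not self.revoked and datetime.now(timezone.utc) < self.expires_at

# The plumber gets in for this job, for this purpose -- then the door closes.
grant = ConsentGrant("plumber", "/home/kitchen-sink", ttl_seconds=3600)
print(grant.is_valid())   # True while the hour lasts
grant.revoke()
print(grant.is_valid())   # False once consent is withdrawn
```

The design point matches the plumber analogy: access is checked at use time against a live grant the owner still controls, rather than inferred from a consent checkbox clicked years earlier.]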

[David Spark] Again… And I’m hearing you all the way through, but let me ask this question – do you have any data…and you may not at all…on how people are responding to the accept-all-cookies-or-not alerts that we get on websites now? Which is essentially for GDPR consent.

[Davi Ottenheimer] This goes way back. In the book that I published in 2012, we talked about the operator consent where we were pushing please accept this…not the cookie. We were saying, “Please accept this certificate.” And we were able to bypass a lot of security controls and pop boxes because operators got fatigued with certificates. And that’s in an even more security aware environment – administrators trying to manage systems, and you can… So, of course the average user isn’t going to be resilient to that kind of fatigue, and of course it’s going to fail. But we don’t want that. We want something where it’s actually relevant to their experience. So, it’s like when the plumber says, “I’m going to come over to fix your sink.” You say, “Okay, I’m going to let you in for this job, for this purpose.” And you don’t get fatigued by that because you ask the plumber to do some work for you. And then you say, “That’s it. You’re done, and you can’t come back in and use my plumbing for anything else or use my bathroom for whatever purposes you deem fit.” And in the sense right now with the platform model, they pick everything up, and they take it somewhere else. That’s the twin aspect of this that’s so dangerous. And that shouldn’t be occurring. You shouldn’t be decoupled from your humanity.

[David Spark] Mike, I throw this to you. What are your concerns here?

[Mike Johnson] First of all, to your point of the cookie tracking, and people clicking and accepting, I think a better thing to look at is what happened when Apple made their changes around the advertising identifier and how much that impacted Facebook. Because people were not opting in for Facebook to have their information, and that I think almost alone resulted in wiping off a massive valuation of Facebook. They lost a lot off of just that one change. So, it’s clear that people are thinking about this when they’re given the option, and it’s convenient to do so. So, I do think that people care. I like the idea of the time based consent model. That you can use this data for a period of time. What we definitely see today is, as Davi described, you give consent once, and then it’s essentially you’ve given it forever. You’ve given them that data to then…

Even if they only use it the way that they say that they’re going to. There’s not a time component to it. They get to use it that way forever. And there’s some amount of control with GDPR where they’re trying to restrict where that can be transferred to. But that company itself, they can already do what they want with it. I do think GDPR is heading us down a direction where something fundamentally does have to shift because they’re making it more and more difficult, to the point where it’s almost impossible to meet the requirements of GDPR. And that’s going to then force us to head in a different direction. Maybe it’s in the direction that Davi is describing, which I think would be awesome. Maybe it’s a completely different direction. But where we are heading right now is not sustainable with regulations like GDPR, CCPA, with the decisions that we’re seeing coming out of European courts. Something has to change.

[Davi Ottenheimer] I’ll give you two quick examples. One, everything that’s said here is interesting, and I think it’s sort of on the right path, but they missed the point. And so the point should be that we have a right to be understood. The right to delete is GDPR, which is a broken solution to a problem that we have. We don’t need the right to delete. We need the right to be understood so that we don’t have to go around deleting all the information. That’s a shift that we need. The second… And the right to be understood is a consent-based, time-based, real-time interaction with people. Understand who I am right now because I control that, and I tell you who I am, with some link probably to some authority. Second, if you’re regulating, like, the smell of horses for cities, that’s one thing. That’s GDPR. It’s like, this is really bad, and we really have got to take care of this problem. But that’s different from what the future holds, which is electric cars. And we could totally screw this up and end up with gasoline cars instead. But there’s an opportunity here for us to move from the problems of GDPR – which is that Facebook is basically turning our cities into these terrible, filthy environments that nobody wants to live in – and move them towards a clean environment using electricity. But we can’t let it be hijacked by another centralized interest group, like gasoline or petroleum, which will just take us down the wrong road again. And that’s why decentralization fits into this, and that’s why the right to be understood is really consent-based in real time, so hopefully that explains it.

They’re young, eager, and want in on cyber security.


[David Spark] A 23-year-old woman is being promoted to SOC lead at her organization. According to her post on the cybersecurity subreddit, she’s been performing exceptionally, and her management wants her to replace the person who is performing poorly. The young woman has only been doing this for ten months and has serious imposter syndrome, as there are three other men on the team who are ten years older than her. She’s afraid she’s not cut out for the job, and I should note that the bump in salary is from $65K to $80K, which actually seems below industry average. She’s worried about the additional responsibilities the new job will require, as her old job doesn’t require her to monitor nearly as many environments. So, Mike, I’ll start with you. Many people said no one is fully comfortable with a promotion, but this is obviously very quick. And I think that’s why she’s shocked. What’s your advice?

[Mike Johnson] First of all, I think there’s a few ways of thinking about promotions. This person probably hasn’t been exposed to the office environment, to industry, for very long.

[David Spark] Right, first real job it seems out of college.

[Mike Johnson] Right, so first real job. And one of the ways of looking at promotions is potential. Somebody is then promoted into something with new responsibility. That’s doing a new job. That is something different. Another is the person is already doing the job, and it’s recognition that they’re already there. That kind of promotion… That’s something that people are comfortable with. It’s just simple recognition of what you’re doing. But a new job is something new. And that can be something that someone has to get used to, and it is totally normal to be nervous. That’s what I would expect for your first job, to be nervous about that. It’s okay. On the other hand, the job that they’re in is measurable. Their performance is something that can be defined.

[David Spark] And she’s doing extremely well. She’s like top of the class.

[Mike Johnson] That’s something that she should feel good about. Performance is being measured and can be compared against others. Can be compared against those people who have been in their job for ten years. And if their performance is at that same level, cool. There’s nothing to be nervous about. You’re showing that you’re able to execute. So, I think it’s very easy for me to sit back and say, “Yeah, don’t be nervous about it.”

[David Spark] Yeah. But again, you’re speaking as someone who’s not ten months out of college.

[Mike Johnson] And so for them, what I would say is just come back to the fact that the job is very measurable, and your performance is very well understood. And if you’re hitting those numbers, you should try and be comfortable with it because you’re going to continue to be measured in that way. And as long as you’re hitting them, you’re performing.

[David Spark] Davi, what’s your advice to this young woman?

[Davi Ottenheimer] There are a bunch of red flags I pulled out of this. One, she said…or at least the way I read this was there are people who are ten years older, and age is irrelevant. Age should not come into this at all. It doesn’t matter if they’re ten years older or ten years younger or whatever. If they have ten…

[David Spark] But I understand. When you’re out of college, you think… You’re looking at the seasoned people differently.

[Davi Ottenheimer] But even that, you can come out of college being 65 years old. So, the point is what’s their experience. After ten years of work, you’re an expert. Ten years is the amount of time it takes to really become an expert. But it doesn’t take ten months to become good at something. And so I think she may be getting confused as to, “I’ve been working ten months. That doesn’t seem like very long.” It’s actually a very long time to be working in security. You can learn a lot very, very quickly. So, I think what she needs to think about is why that person is performing poorly, and she doesn’t want to be the next person performing poorly. You don’t want to step into a trap. That person may be performing poorly because they didn’t get support they need, they didn’t get the education they need, they didn’t get whatever it was that was going to make them successful. So, I would suggest first identify can you make that person successful given why they’re performing poorly. That’s stepping up. That’s becoming a better leader to begin with.

[David Spark] One of the things I want to say that I didn’t mention in the intro was that person… One of the reasons they wanted to take that person off… And I think this is pretty severe. Although supposedly they had a track record of performing poorly. There was one alert that was very critical that they missed, and it seemed to upset some other people. Again, the person that was being taken off, the person she’s replacing. That seemed like a big red flag to me. Did you see that?

[Davi Ottenheimer] I did not see that, but I agree. That is a big red flag. You can’t have an environment where you can’t make a mistake and expect people to perform at their ultimate. Because making mistakes should be what you’re all focused on. You want to be able to learn from your mistakes. And so the question is… A perfect example – what did they learn from that mistake. Well, it’s to fire the person that made the mistake…

[Crosstalk 00:32:38]

[David Spark] They didn’t get fired. I was told all but fired, so pretty darn close to it.


[Davi Ottenheimer] The question is are you in over your head after ten months. So, will you be set up for failure? Are you given the runway you need to sort of take off so you can make some mistakes like that and still be successful? There’s a lot of questions here that for me were red flags, and I would emphasize can you make the person who is performing poorly perform better. That would be a better goal first. Two, can you orient around time in the saddle, doing the work, as expertise, as opposed to the actual age of people or any other personal attribute of them. And so then she can orient around people who, as opposed to being older, are really good at their jobs and figure out how they got there. So, you can figure out why people are doing poorly, figure out why people are doing well. And then she should decide if she wants to take that big leap. Because she’d be in a place where she could recover from mistakes, and she can align with people who are going to help her take big steps ahead that she might be uncomfortable with. But if she doesn’t have that kind of mentality, then this is just whack people who don’t perform well because it’s easy and bring in newer people at low pay rates, and it just sounds like a body shop churn that… Yuck. No thank you.

[David Spark] Well, the other red flag is how she reacted to the fact that there were men ten years older. I think she fears, and maybe rightfully so, that she’s going to be resented in the new role. That’s very… [Inaudible 00:34:04]

[Mike Johnson] That’s a fear that she has no control over.

[Davi Ottenheimer] Yeah, the question is can she prove herself in a way that resonates. Is this an environment that would recognize someone who makes mistakes but recovers from them? If we’ve already identified that it doesn’t allow anyone to make mistakes then it doesn’t sound like you should take a big leap of faith if you don’t feel comfortable. You should play the conservative game and avoid mistakes because they’re terminal for anyone, and there’s no real safety of the learning environment. You’re not supposed to be whacked on a mistake anyway. Anyway, there’s a bunch of red flags here that I would dig in deep, and I would say you should orient around ten years is the level that you need of experience to get to expertise. Ten months is actually a very long time, so you should be ready to step up after ten months to all sorts of things that you’re not… You’re doing something for ten months? Get ready to move on. You’re ready.

[David Spark] There’s Davi’s advice – if you haven’t figured out cyber security in ten months you’re doing something wrong. Is that what I’m…?

[Davi Ottenheimer] Exactly.

[David Spark] …am I echoing your…?

[Crosstalk 00:35:02]


[Davi Ottenheimer] But you’re not going to be an expert until ten years.



[David Spark] Okay, there you go. There you go. All right, thank you very much, Davi. Thank you very much, Mike. Davi, I’ll let you have the very last word, and I also want to know if you’re hiring as well over at Inrupt. Because what you guys are doing over there is quite impressive, and I know that you’re working with Bruce Schneier, who we’re going to be interviewing next week actually, too. So, I’m going to have Bruce on as well. Sorry, Mike. I’m going to do it with Andy this time. I know we had Bruce last time with you, but I’m going to have him with Andy.

[Mike Johnson] I figure Andy can have one conversation with him. That’s fine.

[David Spark] You get to talk with Bruce all the time, Davi, right?

[Davi Ottenheimer] I do. It’s phenomenal. Working with Sir Tim and Bruce Schneier has been an amazing experience for my career. I just love it so much.

[David Spark] Tim Berners-Lee, who runs Inrupt. And essentially Davi, Tim Berners-Lee, and Bruce Schneier… And let me just ask, are you hiring right now?

[Davi Ottenheimer] We are. We’re hiring a lot of positions, yes. Yes, we are.

[David Spark] You get an opportunity to be around that kind of brain trust, and I’m sure there’s plenty more, too.

[Mike Johnson] Why would you not want to?

[David Spark] On top of it.

[Mike Johnson] Go do it.

[Davi Ottenheimer] We’re hiring all kinds of positions. Yeah, check on our website.

[David Spark] Let me thank our sponsor, Code42. Simple as that. The way I said it, that’s where you should go. It isn’t one of those bizarrely spelled company names. It’s the word code and the number 42, and then you would add a .com after it. If you’re interested in insider risk, which is what they deal with, they’ve got some very intriguing ways of handling that very issue. So, thank you again, Code42, for that. All right, Mike, any last thoughts?

[Mike Johnson] Davi, thank you for joining us. As David said in the very beginning, you bring different perspectives, and I’m glad you came back so we could have another conversation because I learned so much. I’m sure our audience has learned a lot as well. So, thank you for bringing those perspectives. One of the things you said that really set something in my mind that I hadn’t quite put my finger on was your mention of the graveyard of past consent. And that was just such a brilliant way of explaining these decisions we are making on a regular basis and not realizing it. So, thank you specifically for that mention. That’s something I’m going to carry with me, and I’m sure our audience will gather so much experience and knowledge from this episode. So, thank you for joining us and sharing with our audience.

[David Spark] All right, thanks.

[Davi Ottenheimer] Thank you for having me.

[David Spark] All right, and so give us a pitch for Inrupt. And I know you’re hiring like crazy, and I’m assuming there’s job listings there, yes?

[Davi Ottenheimer] Yeah, you can definitely check out all the things we have open. We’re hiring tons of engineers, of course, to build the next web. It’s going to be amazing. And really help people center their information around their true identity, take control back from the dragons – Sir Tim, the knight of the British Empire.


[David Spark] Awesome. Well, thank you so much. And by the way, we’ll have links to Davi on Twitter. Which I notice you’re not too active on Twitter these days.

[Davi Ottenheimer] I’m not on much social media anymore, and I was obviously kicked off of LinkedIn, which is a whole story in itself.

[David Spark] Don’t understand that quite yet. But anyways, just reach out to Inrupt in general, and maybe you’ll have an opportunity to work with Davi. Anyways.

[Davi Ottenheimer] You can look at my blog if you want to see the social aspect of my life. Yeah, I blog quite a bit. I get a lot of readers there, yeah.

[David Spark] We will link to that – Flying Penguin. Again, there are many controversial things he likes to stir up. He was showing me something recently. Maybe I’ll point to that, too.


[David Spark] All right, have I teased our audience enough? Good. Thank you very much, Davi. Thank you, Mike. Thank you to our sponsor, Code42. And thank you to the audience. As always, we greatly appreciate your contributions as well. By the way, if you want to start submitting some “what’s worse” scenarios and try to beat out Jason Dance and Nir Rothenberg, start sending them in. We want to hear from you. Thank you so much for contributing and listening to the CISO Series Podcast.

[Voiceover] That wraps up another episode. If you haven’t subscribed to the podcast, please do. We have lots more shows on our website. Please join us on Fridays for our live shows – Super Cyber Friday, our virtual meet up, and Cyber Security Headlines – Week in Review. This show thrives on your input. Go to the participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly. Thanks for listening to the CISO Series Podcast.