Giving Slack Slack Will Lead Your Teams to Discord

Even before the pandemic, we’ve been increasingly living in online collaboration apps. So why are organizations still making basic security mistakes with them? Is this a case of shadow IT or do these apps present unique challenges?

This week’s episode is hosted by me, David Spark (@dspark), producer of CISO Series and Mike Johnson, CISO, Rivian. Joining us is our sponsored guest, Rich Dandliker, chief strategist, Veza.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, Veza

75% of breaches happen because of bad permissions. The problem is that you don’t know exactly WHO has access to WHAT data in your environment. For example, roles labeled as “read-only” can often edit and delete sensitive data. Veza automatically finds and fixes every bad permission—in every app—across your environment. Learn more at

Full transcript

[Voiceover] 10-second security tip. Go!

[Rich Dandliker] Tell your grandma to protect her email as the number one thing. Because of account recovery, you can get to everything else from email. That’s the key.

[Voiceover] It’s time to begin the CISO Series Podcast.

[David Spark] Welcome to the CISO Series Podcast. My name is David Spark, I am the producer of the CISO Series. And joining me, my co-host, you know him, unless this is your first time listening, maybe you don’t know him, but it’s Mike Johnson. He is the CISO of Rivian and the co-host of this very show.

By the way, I know that you’ve been CISOs for three different companies since [Laughter] you started here, Mike.

[Mike Johnson] [Laughter] It’s been quite the journey, David.

[David Spark] Yeah.

[Mike Johnson] But really, that tells a lot about how long we’ve been doing this.

[David Spark] Yes. Now, you and I have been doing the podcast for five years, more than five years because it was June of 2018. But – what I do want to mention is that we’re going to be hitting the five-year anniversary of, the address, which by the way, if you have not gone there you should go there right now.

We will hit the five-year anniversary of that come October – just teasing that. First though, I do want to mention our sponsor Veza, who is, by the way, responsible for bringing our guest here today, who I will introduce in a second as well. Secure your identity access – more about that very topic, and identity generally, a little bit later in the show.

But I did want to just do a tease for our anniversary, which will be coming up less than a month from the day that this episode drops. While we were at Black Hat, I hired a camera crew, we were filming some Man on the Street videos, and hopefully actually one will drop before this episode airs, but I got some testimonials from people.

[Mike Johnson] Oh, cool!

[David Spark] And the question I asked them was – how has the CISO Series impacted your career in security? You’ll be very pleased to know we got some spectacular responses to that question.

[Mike Johnson] That’s awesome!

[David Spark] Yes.

[Mike Johnson] That is so cool! That’s why we do this, right? We want to help people out. So that’s really cool to hear that feedback that really has impacted folks.

[David Spark] We’re going to use every one of them!

[Mike Johnson] [Laughter]

[David Spark] Every single one of them! We’re somehow going to squeeze them in. Now, if they don’t fall here in that one, we’ll do kind of a compilation. If they don’t appear in that, they’re going to appear somewhere else. You’re going to hear all the nice things people say about us!

[Mike Johnson] Well, what is your tolerance for compliments, David?

[David Spark] Extremely high. You wouldn’t believe what I could take while I was there. It was spectacular. All right. I want to bring our guest in right now. Very thrilled to have him, and also thrilled to have his company sponsoring us. It is our sponsor guest who is the chief strategist over at Veza – Rich Dandliker.

Rich, thank you so much for joining us.

[Rich Dandliker] It’s fantastic to be here, David.

They didn’t think that through all the way, did they?


[David Spark] Startups are by nature a risky business. Most fail, but why do they? Ross Haleliuk who’s the head of product at LimaCharlie posted on LinkedIn 12 reasons. Some of our favorites were building a product rather than building a business and having unrealistic expectations about distribution channels and customer adoption.

Mike, what did you think of this list and what has been your experience watching startups succeed and fail?

[Mike Johnson] I think a lot of these really come down to two categories. One is product market fit or growing too fast. If I really were to just summarize the majority of Ross’s list, it’s really those two things. I would argue though that these are not specific to cybersecurity. These are reasons why startups fail, full stop.

[David Spark] But you know this show is about cybersecurity, just so you know, so you can stay on that topic.

[Mike Johnson] I will stay there but I guess what I’m really trying to remind folks is there’s some things that are unique about cybersecurity and there’s a lot that isn’t.

[David Spark] Mm-hmm.

[Mike Johnson] Some areas we are treading new ground and some, you can actually look into the broader market and understand what’s going on. One of the things that I think that was missing from his list that I do think might be a cybersecurity-specific thing is he makes a point of they’re building a product not a company.

Some of these companies are building a feature. That’s all. They’re building a feature of a product of a company.

[David Spark] Yes.

[Mike Johnson] That’s not going to go very far. So I have certainly seen that.

[David Spark] In a world where a lot of security leaders are looking for platform plays rather than point solutions, to be the point of a point solution, that’s a tough call. [Laughter]

[Mike Johnson] Not going to go well.

[David Spark] All right. Rich, I’m throwing this to you. What did you think of this list? Were there any you disagreed with and what were the ones that stood out to you?

[Rich Dandliker] Yeah, I totally agree with what Mike was saying about being a feature versus a product or versus a platform. But I think one of the challenges is that everybody knows – and you read this in any sort of advice you get as a startup – you need to have maniacal focus.

You’ve got to be really, really targeted and you can’t try and do too much. And yet, you need to be not just a feature. You need to be a product or a platform. And so how do you sort of bridge that gap?

I think the key is you’ve got to start small but have a plan for how you’re going to dominate the world, right? Why does being great and being adopted for even that feature, something small, take you to something bigger? I think the best example of this I’ve seen, even though it’s not very popular these days to reference Elon, is if you look back at his master plan for Tesla from 2006.

Hey, we’re going to start with a super-fancy expensive roadster but that’s going to give us scale. That’s going to give us the ability to produce at lower and lower unit costs. That’s a great example I think of having thought that through of how you go to something small and focused and niche into something larger that can really take you to the next level.

[David Spark] Was there anything that you disagreed with though?

[Rich Dandliker] There was. I think you can worry too much about competitors. And so just looking at the market, reading one too many analyst reports, I tend to think personally that could be a mistake. The center of gravity has to be talking to customers, has to be talking to the people who are actually operating, who are going to buy your product.

I think that is the key to success and sometimes you have to ignore whatever that Gartner report was that talked about all the different things and you have to ignore what all the competitors are doing and even ignore like, “Hey. What if this competitor decides this is a really good idea and comes to stomp us?” Just do your thing, keep your head down, and understand the customer problem.

That’s the recipe for success, I think.

[David Spark] Yeah, worrying about the competition, it’s actually not going to make your product any better.

[Rich Dandliker] Exactly.

How can we secure new technology without creating new risks?


[David Spark] For all the potential we’ve seen from generative AI tools, there’s also a lot of fear about how these tools will impact cybersecurity. Sravish Sridhar explained some of the common fears, things like data leaks and generic information, in a recent piece for SC Media. Now, a recent Reuters/Ipsos poll of US workers found that only 22% of employers explicitly allowed using tools like ChatGPT at work, with 10% saying the tech was banned.

Now we’re seeing some big names putting out blanket bans, like Samsung and Procter & Gamble, but as Matthew Sullivan of Instacart noted on LinkedIn, we have a long history of companies wanting to ban new technologies but the users keep pushing for it because it’s so desirable. So, I’ll ask you – why does this keep repeating itself with the attitude “but this time it’s going to work”?

“This time the ban’s going to work! They’re not going to be able to use the product they want to use!” What do you think, Rich?

[Rich Dandliker] Yeah. David, I think you answered your own question here. [Laughter] I totally agree. You do not put the genie back in the bottle. I’m actually a huge fan of all the really fantastic things I think that LLMs, AI is going to bring to workers and to companies. It’s going to be a tremendous source of value.

And so I think it is that. It’s that initial fear of like, “How do we handle this? What are we going to do?” There’s a knee-jerk reaction to say, “Let’s ban it.”

That said, I think those bans are really about public large language models, like the public version of ChatGPT. I was recently at an event with a ton of CISOs and this came up in conversation just down the hallway. Every single one of them was doing a test of the enterprise version, something like Azure for OpenAI or some sort of internally focused large language model trained on internal company data.

I think that is going to be where tremendous adoption occurs because you can get the benefits without worrying as much about leaking data, mixing company data with public data. I think there’s going to be a huge uptick in that. But I agree. You really have to distinguish between the public versions of ChatGPT and the internal enterprise versions of these internally trained large language models.

[David Spark] But I will argue, and I just posted this video of Sounil Yu, and I’m going to bring this Sounil Yu video up many times because I love what he said. He referred to CISOs being the CFOs of IP. That they need to learn how to spend it to get value back, and it may be putting it into a public space.

Mike, what do you think?

[Mike Johnson] Well, one of the things I wanted to call out is I really want to highlight what Rich said there about public ChatGPT versus enterprise, and it relates to what Sounil had to say, which is this market is moving so quickly.

[David Spark] Yes. Crazy fast.

[Mike Johnson] I actually wonder when this poll was taken if the enterprise version of ChatGPT even existed. That’s how quickly things are moving. And so the question that was asked might not really be what we’re facing today. Are people looking at banning posting intellectual property into public models?

Probably. That’s probably a pretty safe thing for people to be looking at. But I do think it really does come down to how do you get the value out of your intellectual property. A lot of that is with your internal usage where you can have a sense of looking and leveraging that data safely. But maybe there are some external uses for maybe some of your intellectual property.

There might be value of actually training public models on that.

Sounil’s point is interesting. Initially, in the age of search engines, it was, well, just put your content out there. Google will find it and that will be great. Then search engine optimization became a thing. And then there was the concept of black-hat search engine optimization – that was the term that was used – of posting your data in ways to get it indexed that were not necessarily in keeping with Google’s terms of service.

And so kind of thinking about what Rich was saying and what Sounil was saying, how do we see people inserting their own intellectual property into these public models so that it does get added into the answers, so that you now have someone is asking, “Hey, what is the best recipe for macaroni and cheese?” and the next thing you know, you’re getting recommendations for which brand of cheese to buy, which noodles to go buy, not just the actual recipe.

So, I think there’s an interesting thing there of the confluence of feeding your own information into these public models.

[David Spark] Right. And this was this whole idea that a CFO spends money to get value back, and the idea that a CISO can spend IP to get value back. Same concept here.

[Mike Johnson] I would argue that it’s not the CISO’s choice and the CISO’s decision to actually decide what intellectual property should go into the public.

[David Spark] Good point.

[Mike Johnson] But that’s certainly a conversation that a CISO should be a part of and maybe even leading internally into the company.

Sponsor – Veza


[David Spark] Seventy-five percent of breaches happen because of bad permissions that cannot be detected by traditional identity, governance, and administration, or IGA, tools. For example, traditional IGA tools fail to detect roles labeled as “read-only” that in fact grant permissions to edit PII data, or users and admins created locally within a SaaS app, bypassing the IGA system.

That’s because traditional IGA tools cannot track granular permissions across enterprise data and applications.

Veza – that’s our sponsor – is the next-generation IGA platform that manages individual permissions across all cloud, on-premise, and hybrid enterprise systems and applications. Veza supports the full life cycle of identity management from creation to monitoring to reviews, and offers over 100 integrations – ah, our audience likes that – with platforms like AWS, GitHub, Salesforce, SharePoint, and Snowflake.

The Veza Open Authorization API makes it quick to connect to any cloud, on-premise, and hybrid system. Now companies like Expedia, Intuit, and Blackstone use Veza to streamline audit prep, entitlement certifications, and user access reviews, as well as to find and fix bad permissions, enforce security policies, and to continuously update every permission to maintain least privilege.

That’s what we like. Head to to learn more.

It’s time to play “What’s Worse?”


[David Spark] All right, staying in the theme of ChatGPT, that is where today’s topic is, and I think you guys have both brushed upon it. Rich, are you familiar with this game that we play, “What’s Worse?”

[Rich Dandliker] Only a little but hit me.

[David Spark] All right. It’s not going to be difficult for you to understand. Two scenarios from our audience submitted, they both stink, but you have to tell me which one is the worse scenario. It’s a risk management exercise. Mike, are you ready?

[Mike Johnson] Well, first, I have to ask did ChatGPT write these?

[David Spark] No. But we have toyed with that before.

[Mike Johnson] [Laughter] Great.

[David Spark] We have toyed with that. This comes from Neil Saltman who looks a little bit like ChatGPT, he works for Armis. Here’s the scenario. What’s worse – finding out users have shared sensitive data in LLMs like ChatGPT, violating GDPR and other data sharing guidelines, or sensitive data was shared publicly on a website but then removed within an hour.

Someone definitely saw it because it was reported, but it’s not clear if it was copied or distributed anywhere else. Which one is worse, Mike?

[Mike Johnson] What’s interesting, I think what Neil’s trying to get at is the fact that you share something with a public LLM model doesn’t necessarily mean that someone is going to receive that.

[David Spark] Right. You don’t know. It could stay invisible forever.

[Mike Johnson] Right, right.

[David Spark] Or not.

[Mike Johnson] And then the second one, the scenario is it was definitely exposed just for a very short period of time.

[David Spark] Mm-hmm.

[Mike Johnson] That’s kind of how he’s getting at it.

[David Spark] And you don’t know if it went anywhere else.

[Mike Johnson] The issue is it doesn’t matter if somebody saw it if it was shared outside of your own promises that you’re making to your customers.

[David Spark] Oh. Then you got your legal troubles. But you have legal troubles in both cases, by the way.

[Mike Johnson] But again, the point here is from a legal troubles’ perspective they’re actually identical.

[David Spark] Yes.

[Mike Johnson] Even if you’re trying to say, “Well, it was seen versus not seen,” that’s not the issue. So, the reality is in terms of what you’re going to then share with your customers and try and have those conversations. You look at how long was it exposed and can you actually delete it, can you remove it.

Removing data that’s actually been shared into a public LLM, pretty close to impossible.

[David Spark] I don’t know how it’s done.

[Mike Johnson] I think you’d have a really hard time convincing OpenAI to delete something that had been shared.

[David Spark] But in the second scenario, it’s very possible it was deleted because someone saw it, you saw it, you took it down, but you don’t know for sure.

[Mike Johnson] Right. And what you’re having to communicate to your customers is in the second case it was shared, it’s been removed, it could have been viewed, we’re not sure, we’re terribly sorry, so on and so forth.

[David Spark] And then would you please add, “Security and privacy are very important to us.”

[Mike Johnson] I was trying very hard not to say that.

[David Spark] I said it for you. [Laughter]

[Mike Johnson] Thank you, thank you. In the first case, you can’t actually go to a customer and say, “Hey, the data’s been deleted.” You actually cannot say that. And so I really think in these two scenarios, the public LLM model is the worse of the two because you’re actually having to go to customers and say, “I don’t know.

Might get out there. Don’t know. Not sure. Uncomfortable.”

[David Spark] Well, you’re kind of saying that in both scenarios. All right. I’m throwing this. So, you say the LLM is far worse. All right, Rich, I want you to parse this one out. Which one is worse of the two?

[Rich Dandliker] I’ll take the other approach. I actually say that posting up on the public site is worse because having to back it out and even say, “Oh, but it was only up for an hour,” you just look like a complete weasel.

[David Spark] [Laughter]

[Rich Dandliker] There’s no defense for that. Plus I was actually at a conference around InfoSec for AI, and some of the frontier AI labs were there, it was actually the day after Black Hat, and I was having this conversation with someone who runs the AI infrastructure for one of the very, very large AI companies.

She mentioned that they actually are – we talked about this exact topic – she mentioned they’re actually doing hashes of every single piece of data that comes in. They’re recording that hash and linking it to the training of the model. So, some of these labs actually already do have the capability to pull individual pieces of data, under the expectation that they’re going to start getting these kinds of requests for copyright, for GDPR.

They’re ahead of the game. It may be easier than we give it credit for to actually pull out certain types of data because of this hash table lookup. There actually is a path forward – at least an N=1 example of someone at a large lab.
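A sketch of the kind of provenance ledger Rich describes – hashing each piece of training data on the way in and recording where it was used, so an individual item can later be located for a copyright or GDPR removal request. All class and field names here are hypothetical, for illustration only:

```python
import hashlib

class TrainingDataLedger:
    """Hypothetical sketch: record a hash for every training input so it
    can be located later if a removal request arrives."""

    def __init__(self):
        # hash -> metadata about where the data was used
        self.ledger = {}

    def ingest(self, doc_id: str, text: str, training_run: str) -> str:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        self.ledger[digest] = {"doc_id": doc_id, "training_run": training_run}
        return digest

    def lookup(self, text: str):
        """Return ingestion metadata for this exact text, or None."""
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        return self.ledger.get(digest)

ledger = TrainingDataLedger()
ledger.ingest("doc-001", "some customer record", "run-2023-08")
hit = ledger.lookup("some customer record")
# hit identifies the document and the training run it fed into
```

Note the exact-match limitation: a hash lookup only finds data that comes back byte-for-byte identical, which is why recording the hash at ingestion time, before any preprocessing, matters.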

[David Spark] Well, let’s hope what you’re describing is moving at the speed that the rest of ChatGPT is moving at. One of the things, I was just at Black Hat, and when I was at Black Hat last year, the person who introduced me to these AI image generators, specifically Midjourney, was my cameraman from last year who I hired again this year.

That was last year. I mentioned that to him – I go, “You’re the one who pointed it out to me.” A year has passed, and that one program – again, it’s AI imaging – has drastically changed. I just don’t know if I’ve seen the speed of any other technology move at this rate. Rich, I mean, have you seen anything move this fast?

[Rich Dandliker] No. I think it’s not only the rate but it also seems to be, at least from the outside, completely unpredictable. It goes almost as a step function. You see this as you’re following it – some of the things from DeepMind, and reading about AlphaGo, well-informed people were thinking it was going to be 30 years before AI could actually win at these games, and then all of a sudden it’s here.

I think that’s been the thing that’s been most surprising is that it’s very unpredictable and you have these spikes of rapid advances of capability. So it’s a little bit fascinating and wonderful, and a little bit terrifying at the same time.

[David Spark] What is it for you, Mike, quick, fascinating and wonderful, or terrifying?

[Mike Johnson] Oh, I think it’s both simultaneously. The capabilities and what we can potentially do with it are both, “Hey, this could be awesome,” or “Hey, this could be really, really difficult for us to deal with.” So, I think it is fast moving in a way that I can’t recall seeing before.

[David Spark] Yeah. It is fast. And it’s exciting that something changes every two weeks.

[Mike Johnson] Yeah, and a lot of it comes down to the fact that it is so approachable.

[David Spark] Even my mom could use it.

[Mike Johnson] Exactly. Rich had mentioned DeepMind. That’s deep within Google, nobody else could ever use it. That was a thing that they were doing themselves, and this is something that anybody can go to a chat interface and start actually gaining value from one of these generative AI platforms.

Please, enough! No, more!


[David Spark] Today’s topic is least privilege. Oh, my God. We’ve talked about this I think a couple of times on this show. Have we talked about this on this show?

[Mike Johnson] More than once.

[David Spark] More than once. I’ll search our site to see if we’ve done it more than once.

[Mike Johnson] Mm-hmm.

[David Spark] Okay. So, Mike, I’m going to ask you, what have you heard enough about with least privilege, and maybe it’s something we said on this show, and what would you like to hear a lot more?

[Mike Johnson] It’s one of those interesting things where I’ve heard enough of both “It’s too hard so don’t bother” and “It’s easy – just figure out what everyone needs.” It’s this weird thing where I hear both too much.

[David Spark] So, some people have figured it all out, and others are like, “You can’t do it.”

[Mike Johnson] I think the reality is no one has figured it out. They’re claiming that they have, and they’re saying “Oh, well, it’s just easy. Just do this.” And so I really think what I would like to hear more of is genuinely how do we make least privilege happen in an existing running environment…

[David Spark] That’s key.

[Mike Johnson] …without a ton of manual effort. And make it work sustainably, make it something that we can scale. I’ve already got 14,000 employees to deal with. I can’t just throw everything out and build a new system – that doesn’t work. I’m flying a plane.

[David Spark] You got to do this all in mid-air.

[Mike Johnson] Yes, yes. So, that’s what I’d like to hear more of is realistically real-world how do you get there.

[David Spark] All right. Rich, I’m going to ask you the same question. What have you heard enough about and what would you like to hear a lot more, and can you solve this in mid-air?

[Rich Dandliker] Absolutely. I think it is absolutely one of these things that you hear everybody agrees with in principle. Everyone who’s listening to this may have actually checked the box to say, “Do you follow the principle of least privilege in your compliance framework?” Of course, “Yes.” But when you get down to the brass tacks of how do you operationalize it, how do you actually put that into practice, it is incredibly hard.

And I think there’s some fundamental reasons for that.

One is that when you look at the foundation of role-based access control, of really just doing that, what is actually a role? When you’re asking people out in the business to do things like access reviews and to go out and say, “Hey, is this right? Here are all the people on your team. Here are the roles they have.

What the hell does Super-Secret Admin Number Two mean?” Right? And it’s like you’re asking a director of HR to make that decision. The tools and the process just do not fit the problem, and I think that’s one of the things – we’ve relied a bit too much on this simplifying assumption of, “Hey, just make roles, make roles that everybody needs,” and then leveraging that and it’s all based on…

Actually, it’s even worse than that. It’s just the name of the role, and you’re expecting everyone in the business to manage based on a naming convention. It’s no surprise that there is terror and gnashing of teeth when you go through these things, and access reviews is a universally hated business process.

[David Spark] Yeah. Why would anyone like that at all? So, this is something that Veza plays in – identity. Where are you tackling this in ways that others just aren’t, I will say?

[Rich Dandliker] Yeah. The first thing is that we really are going and integrating into these systems that already exist. So, if you look into the problem of authorization…

[David Spark] So, this is in-flight reference that Mike was making.

[Rich Dandliker] Exactly right, exactly right. Because, yeah, going and trying to rearchitect something… I’ve seen plenty of vendors that are going and saying, “Hey! We have this inline approach. Just go and deploy our thing in the middle between users and the data.” Oh, my God! [Laughter] Who in their right mind would do that in production mission-critical systems?

It’s just not going to happen.

And so we’ve taken a much different approach where we essentially embrace that complexity and we help our customers manage the policies and authorization for the systems and the authorization systems they’re already using. We’re going into those native systems. And you can think about Veza essentially at its fundamental layer, it’s a way to translate and rationalize all these different systems of authorization.

Because when you go system by system by system, the authorization scheme – the resources and all the objects that exist in these systems – is different every time, so it’s like each one speaks its own language, and then you’re trying to apply a universal policy on top.

Or we have CISOs that want to do things like, “Hey. Make sure no contractor in China ever gets access to my customer data,” and customer data is across 30 different systems. And so how do you actually implement a policy and implement those tactical controls that make that very simple-sounding statement come to a reality?

It’s incredibly hard but that’s exactly what we’re trying to do at Veza.
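The “simple-sounding statement” above can be sketched as a single policy evaluated over permission records normalized out of many systems. This is an illustrative sketch, not Veza’s actual implementation; the record shape and field names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    """One normalized access record, regardless of which system it came from."""
    user: str
    user_type: str   # e.g. "employee" or "contractor"
    region: str      # e.g. "US", "CN"
    system: str      # e.g. "snowflake", "salesforce", "s3"
    data_class: str  # e.g. "customer_data", "public"

def violates_policy(g: Grant) -> bool:
    # The policy from the conversation, stated once, applied everywhere:
    # no contractor in China may access customer data.
    return (g.user_type == "contractor"
            and g.region == "CN"
            and g.data_class == "customer_data")

# Records translated from three different systems into one shape.
grants = [
    Grant("alice", "employee",   "US", "snowflake",  "customer_data"),
    Grant("bob",   "contractor", "CN", "salesforce", "customer_data"),
    Grant("carol", "contractor", "CN", "s3",         "public"),
]
violations = [g.user for g in grants if violates_policy(g)]
# violations == ["bob"]
```

The hard part the conversation points at is the normalization step itself – getting 30 systems’ native authorization models into that one record shape – not the policy check, which becomes trivial once the translation is done.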

[David Spark] You make a really good point because I just think about just this whole concept of one company dealing with all the integrations so you don’t have to. I liken this to the payroll system I use where I don’t have to know any of the tax laws in all of the 50 states, they just deal with it for me.

[Rich Dandliker] That’s right.

[David Spark] The sense of relief I get from that is huge, and the sense of relief that’s saying, “Oh, someone whose full-time job is dealing with these integrations and they’re doing it for me? How wonderful.”

[Rich Dandliker] Absolutely. One of the great examples I’ve seen is even just go to one system and you go into AWS, and you go into AWS IAM which is sort of their universal policy system for managing all these things across the 200 different services that live in AWS. The user guide for that thing is 1200 pages long, just the user guide.

That’s not technical documentation. You can imagine the complexity of this stuff and that’s just one piece of the overall puzzle. That’s not enough. You’ve got to link it together with identity systems, you got to link it into ACLs and local permissions, and you got to do it on every single system that you have.

It’s really tough.

[David Spark] Do you want to know all the integrations for every single system, Mike?

[Mike Johnson] No.


[David Spark] I think it’s as simple as that – no, not at all. So, is there anything else – I’m sorry, I want to give you the floor to explain a little bit more about Veza – but is there anything more besides the integrations, the integrations being the key thing for Veza?

[Rich Dandliker] Yeah. I think it really is that we started with this fundamental data model building a graph that connects everything together around permissions. So it goes all the way from user to group to role to policy into each system down to the resource level, and we hook that all up, right?

It’s that fundamental data model that really is the foundational piece of Veza. And then we built a bunch of products on top of that, but that’s the thing that I’ve never really seen any other company put together – all the way from user down to resource and permissions, like user, create, read, update, delete on that particular resource.
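A toy version of the graph Rich describes – edges linking user to group to role down to the resource, with the role’s effective permissions (not its name) collected along the walk. This is a hypothetical sketch for illustration, not Veza’s actual model:

```python
# Directed edges: user -> group -> role -> resource.
edges = {
    ("user:dana", "group:finance"),
    ("group:finance", "role:ledger-reader"),
    ("role:ledger-reader", "resource:gl-table"),
}

# Effective CRUD permissions a role grants on the resources it reaches.
# Keyed by role, not by the role's display name - the point of the model.
role_perms = {"role:ledger-reader": {"read"}}

def effective_access(user: str, resource: str) -> set:
    """Walk the graph from user toward resource, collecting permissions
    from any role node that has a direct edge onto the resource."""
    frontier, visited, perms = {user}, set(), set()
    while frontier:
        node = frontier.pop()
        visited.add(node)
        for src, dst in edges:
            if src == node:
                if dst == resource:
                    perms |= role_perms.get(node, set())
                if dst not in visited:
                    frontier.add(dst)
    return perms

access = effective_access("user:dana", "resource:gl-table")
# access == {"read"} - the answer is computed from the graph,
# not inferred from a role naming convention
```

The design point is that an access review can then ask “what can Dana actually do to this table?” and get a computed answer, instead of asking a manager to interpret a role name.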

[David Spark] And then just quickly closing – the scale issue, which is very, very big. How are you dealing with scale?

[Rich Dandliker] It is an issue, but it’s one we’ve handled. You mentioned in the beginning customers like Intuit, Blackstone, AMD, Wynn Resorts – with all our customers we’ve been able to operate at scale. It can be done. It’s not easy but it’s possible.

[David Spark] Actually we spoke with the CISO at Wynn Resorts and he said, “This is not something that happens overnight. It’s something that does take time. You can’t think that there’s a magic solution here.” But because he’s been doing this for a year, he’s in a much better place today than he was a year prior – because he started that journey.

[Rich Dandliker] Absolutely. You just can’t ignore least privilege. I mean, it would be great if we could just say, “Oh, it’s too hard, let’s move on,” but the reality is there’s no other way to really get yourself ready for the inevitable breach. You always hear it’s not a question of if, it’s only a matter of when.

[David Spark] You’re speaking the language of our audience.

What we’ve got here is failure to communicate.


[David Spark] Even before the pandemic, we’ve been increasingly living in online collaboration apps. So why are organizations still making basic security mistakes with them? On Computerworld, Linda Rosencrance wrote about some common problems. The biggest is that organizations don’t provide central governance on these tools, leading business units to make their own choices, often with little oversight.

Actually, I’m going to start with you, Mike, on this. Is this a case of shadow IT or do collaboration apps provide more unique challenges?

[Mike Johnson] So, first the term “shadow IT.” I was listening to a podcast the other day and it reminded me that…

[David Spark] Oh, do you listen to others besides this one?

[Mike Johnson] I do, I do.

[David Spark] Mmm, Mike.

[Mike Johnson] We only put out what, 20 hours of content a week, David?

[David Spark] Yes. [Laughter]

[Mike Johnson] I’ve got some other time.

[David Spark] Don’t you have a job as a CISO as well?

[Mike Johnson] Yeah, yeah. That takes the time. I don’t sleep so…

[David Spark] Well, you’re flying planes midflight fixing security issues, you must have tons of extra time.

[Mike Johnson] There’s no time, there’s no time to sleep.

[David Spark] All right, go ahead.

[Mike Johnson] So, the comment was made, and I totally agree with it, that there’s no such thing as shadow IT anymore. It’s just IT, and we need to figure out how to empower the business. I think a really good example and a good point here was about the need for centralized governance. That really goes a long way to saying, “This is our collaboration app.

These are the ones that we use. We’re going to make these very easy to use, but we’re going to also provide the guardrails. We’re going to make it difficult for you to make a mistake, for you to accidentally share something to the world. We’re going to make that really hard, but we’re going to make the collaboration actually really easy.”

And that’s the problem I think a lot of people deal with. Employees see these restrictions as problems to be worked around. They need to share a file with a company they work with, a key supplier, a key vendor, a long-term relationship critical to the company, and when they go to share a simple file, they run into walls and can’t use the normal tool.

They can’t use whatever the base collaboration tool is, so they go and find others. And this is really the opportunity for CISOs to embrace the business, to help out, to make sure that the anointed collaboration tools we have today are easy to use and are meeting the needs of the business. Then you won’t have people trying to work around them, and you won’t have a lot of these problems that are called out.

[David Spark] Rich, what’s your say?

[Rich Dandliker] Mike, I think you hit the nail on the head here, because you really have to get out in front of users. You’ve got to make it easy. The benefit of collaboration apps is that they come with built-in network effects, so once you get to a tipping point and most of your people are on one collaboration app, your problems with so-called shadow IT go away.

It becomes so much more valuable to get on the same universal standard. So I think that’s absolutely right. The days of finding things out of compliance and shutting them down feel long gone. It’s an indication of, “Hey, my users need something else. Let’s go find the most secure solution that can meet those needs and get out in front of it.” But the days of shadow IT, that was a decade ago.

I think [Laughter] worrying about that stuff is a thing of the past.

[David Spark] Well, it’s just the way IT operates. I mean, this goes back a number of years, but I went to the AWS re:Invent conference and was interviewing people, and I asked them a question about their IT department. At least half of them, and again, this was at AWS re:Invent, said, “What IT department?” It didn’t even exist for them because they were fully cloud based.

[Rich Dandliker] Exactly.

[David Spark] But the thing is that we have a long, long history of using collaboration tools that keep evolving, and I’ll quote something that Clay Shirky says. The reason we run into these problems of information overload, too much email, or whatever communication tool you’re using, is because the filters break down. They stop working, and it may just be the way the communication’s coming in.

He also pointed out that we’ve had the information overload issue since the Gutenberg press, so it’s not new to the internet. It’s been around a while.



[David Spark] All right. Well, I want to thank both of you for joining me today. Rich, thank you so much for our discussion, specifically around least privilege. It was a really good discussion that we haven’t had before on this topic, mostly around the integration issue, which I greatly, greatly appreciated.

I want to thank your company, Veza: secure your identity access. Rich, I’ll let you have the last word. Mike, any last thoughts?

[Mike Johnson] Rich, thank you for joining us. It was a great conversation back and forth. I learned a lot, including more about generative AI, which I’m happy to keep learning about as every day goes by. What I really wanted to call folks’ attention to, though, was back at the beginning of the show, when we were talking about why startups fail and you made the point that it’s critical to understand the customer problem.

And I really think that’s the one thing you can take away from this show: understand the customer problem, either as a vendor selling things to CISOs or as a CISO working with your own company. Understand what your company’s problems are and help solve those. So thank you for that particular insight, and thank you for joining us.

It was wonderful having you on the show, Rich.

[David Spark] Rich, any last thoughts? And we always ask – are you hiring?

[Rich Dandliker] Thanks. David and Mike, this has been a real joy here, I’ve had a great time. I think my takeaway here is just to encourage everybody – embrace the large language model, start looking into it. If you don’t think your company’s doing it, you probably should look harder because someone’s probably doing it in some corner of the organization.

It’s not something that’s going to go away. It’s not something any CISO is going to be able to keep out of an organization; it’s coming. Get ready and be ready. And absolutely, we are hiring. Come to the website. We are in growth mode, absolutely.

[David Spark] Excellent. Well, thank you to Rich, thank you to Mike, and thank you to our audience. We greatly appreciate your contributions and listening to the CISO Series Podcast.

[Voiceover] That wraps up another episode. If you haven’t subscribed to the podcast, please do. We have lots more shows on our website. Please join us on Fridays for our live shows – Super Cyber Friday, our virtual meetup, and Cybersecurity Headlines Week in Review. This show thrives on your input.

Go to the Participate menu on our site for plenty of ways to get involved, including recording a question or a comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly. Thank you for listening to the CISO Series Podcast.

David Spark is the founder of CISO Series, where he produces and co-hosts many of the shows. Spark is a veteran tech journalist who has appeared in dozens of media outlets over almost three decades.