Mitigating Generative AI Risks

As with any new technology, generative AI comes with a set of risks. So how can we address these risks to take advantage of its benefits?

Check out this post for the discussion that is the basis of our conversation on this week’s episode co-hosted by me, David Spark (@dspark), the producer of CISO Series, and Geoff Belknap (@geoffbelknap), CISO, LinkedIn. Joining us is our guest, Jerich Beason, CISO, WM.

Jerich has just launched a LinkedIn Learning course on securing generative AI, and it’s now available. Check out this promotional video.

Got feedback? Join the conversation on LinkedIn.

Huge thanks to our sponsor, SpyCloud

Get ahead of ransomware attacks by acting on a common precursor: infostealer malware. SpyCloud recaptures what’s stolen from infostealer-infected systems, and alerts your team to take action before compromised authentication data can be used by criminals to target your business. Get our latest research and check your malware exposure at spycloud.com/ciso.

Full Transcript

Intro

[David Spark] Generative AI, like any technology, comes with a set of risks. Problem is we’re not so clear as to what those risks are. How do we approach a much desired technology we’re not so sure how we should secure?

[Voiceover] You’re listening to Defense in Depth.

[David Spark] Welcome to Defense in Depth. My name is David Spark. I am the producer of the CISO Series. And joining me for this very episode, you’ve heard him before, and whether you like it or not, you’re going to hear him again. It’s Geoff Belknap, the CISO of LinkedIn.

[Geoff Belknap] Oh, hey, that’s me. Yes, hello. Welcome.

[David Spark] Do you think there’s anyone listening to the show that does not like the sound of your voice, and every time they hear it they go, “Ugh, it’s Geoff again”?

[Geoff Belknap] I listen to the show, and I can assure you that is exactly my reaction every time.

[David Spark] Most people hate the sound of their own voice. That is true. People squirm. They’re like, “Oh, do I sound like that? Ew, that sounds gross.” Our sponsor for today’s episode is SpyCloud, the new way to disrupt cybercrime. We’ll tell you exactly how they’re doing it and how they can help you later on in the show.

Geoff, our topic of discussion today – understanding and managing risk is one of the primary jobs of a CISO, right? We talk about it all the time. So, this can be challenging enough with an established technology, but with the rapid rise of generative AI, we’re only starting…and I mean really only starting…to understand the risk it can present.

No one wants to fall behind by not using a potentially disruptive technology, so people do want to use generative AI. So, how can we take what we’ve learned from past technical advances and actually apply it to mitigate risk with generative AI, and is that the tactic to take, or is this a whole new ball of wax?

What do you think, Geoff?

[Geoff Belknap] I think in most cases, you’re just applying lessons that you’ve learned in other situations against new technology. And at the beginning, that’s always the case. You don’t have a point of reference on something brand new. And this, in most cases, is just SaaS.

It’s just software, and you’re talking about data you’re putting into it and how to manage that risk. And you know what? I think our guest today is going to be a great asset to fleshing out this conversation.

[David Spark] I’m going to introduce our guest in a second, but I just want to sort of qualify and say that I’ve gone to a lot of conferences lately, and there’s always a generative AI session. And everyone enters that session with high hopes of, “Oh, well, this is going to be the session where the person knows everything and will give me all the answers.”

[Geoff Belknap] Hmm, the Konami Code, the cheat code to get you through it.

[David Spark] Does not exist. And as awesome as our guest is, I don’t think he’s that person either. He does not have the code for us. You don’t either, do you?

[Geoff Belknap] I feel like the code is hard work, but I’m going to wait to hear what our guest says.

[David Spark] He may surprise us and say, “You’re wrong. I do have the code.”

[Geoff Belknap] I’ll take it.

[David Spark] “But I’m keeping it. I’m not telling you.”

[Laughter]

[David Spark] We had this gentleman on earlier. I’m going to give him a little bit of buildup. We had this guest on earlier. I’m going to take some credit for his success because he’s actually made quite a brand name for himself in media and now doing LinkedIn Learning sessions as well about becoming a security leader.

I was the one who actually recommended his very first microphone, though he uses a different one now. But I don’t know if we were his first podcast. We can find out that later. But he has done really well, and he’s been an excellent guest. I’m thrilled to have him back on again.

He’s just the brand new CISO over at WM, Jerich Beason. Jerich, thank you so much for joining us.

[Jerich Beason] Happy to be back. Thank you for setting the bar low for expectations, and also thank you for my first mic. At this point, I just write posts on LinkedIn in hopes for enough engagement to make it on one of your shows.

[David Spark] Well, let me ask you a question. We’ve quoted you umpteen times, Jerich. Jerich, were we your first podcast?

[Jerich Beason] You were my second podcast. You wouldn’t have let me on if you didn’t see me on a previous podcast.

[David Spark] Aw, there you go. That is a good point. Yeah, that does happen sometimes. [Laughs] I’m glad I waited, because, God…

[Geoff Belknap] David, you got to take a risk on some people.

[Crosstalk 00:04:08]

[David Spark] Yeah, I can’t take a risk on you, Jerich.

[Geoff Belknap] Yeah. Well, this is a mistake you’re still paying for.

How do I start?

4:14.450

[David Spark] Sandesh Mysore Anand of Razorpay said, “The first step is to understand how LLMs are already being used, and they definitely are even if the security team does not know about it. And prioritize risks that are critical for your use cases.” Adam Dennis of AntiguaRecon said, “We should also consider creating an adversarial AI model which would work in conjunction with any other AI to enforce a simple set of moral/ethical standards for AI.

Let’s give old Isaac Asimov a try with his 3 Laws of Robotics and go from there.” And I added the three laws as a reminder for everybody who’s listening – a robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law. And lastly, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Isaac Asimov.

Very prophetic, and we still listen to them today. All right. Have you been adhering to the Isaac Asimov Laws of Robotics yourself, Geoff?

[Geoff Belknap] I can’t comment on that on advice of counsel, but what I can say…

[David Spark] What? You have killer robots over at LinkedIn?

[Geoff Belknap] Again, can’t comment one way or the other. What I do in my spare time is my business, David. But I’d also just point out as a fan of…

[David Spark] Just invite me to your garage. I want to see the projects you’re working on.

[Laughter]

[Geoff Belknap] As a fan of Asimov, every one of these stories that relies on the three rules ends up going awry and brings into question whether the three rules are actually very good. Which I think is sort of a metaphor for this kind of discussion.

These are great, and we should absolutely find a way so that everybody who’s building LLMs or other AI technology can agree, “Hey, this probably shouldn’t hurt somebody either physically or impugn their copyright licenses, or their technology, or their likenesses, or damage people’s democracies and thinking.” But it’s harder than that as it turns out.

I think where we’re at right now we should probably start a little more simply in just, “Hey, how do we provide a paved path in our organization for people to use and experiment just a little bit with AI so they can learn what’s going to be useful for the business and not?” And I think that is on people like Jerich and I to help figure out how do we enable experimentation in a safe way so that nobody is going to get harmed up front, and we can figure out how this will disrupt our business in a positive or negative way before we start really clamping down on the controls.

[David Spark] So, Jerich, I know you’re very brand new where you are, but I don’t know if you did in your past job or it’s even been discussed in the few days you’ve been at your new job, how are you looking at this in terms of letting the employees have a safe space?

[Jerich Beason] When it comes to AI and having a safe space, it really just comes down to how you want people to use it, to the first point that was made. I can’t speak to how my current company is doing on the advice of counsel. I like your phrasing, Geoff.

But what I really find is that we usually lump large language models into one category, but I see them in at least two. You have the one that we talk about, the public models like ChatGPT, DALL-E, and Bard. But there’s also the private models that organizations are exploring, usually using their own data in the public cloud, and they each have their own risk profile and associated mitigations.

And so Sandesh’s point… I hope I pronounce your name right, Sandesh. Somebody in your organization is already using it, but that’s usually in bucket one. The real question I don’t hear enough about is who is in bucket two. That’s where the real innovation is and where the real transformation is for organizations.

[David Spark] I’m sorry, bucket two being…?

[Jerich Beason] Building it yourself, private AI models. That’s where you have complete control of the algorithms as well as the data that the model is trained on. And that is going to now put the burden of ethical, and reputational, and explainability on you, and those are risks that cyber people typically aren’t used to talking about.

[David Spark] And I think there’s no answers right now, but let’s just talk about how would you just begin a discussion of how do we explore this? I guess it’s the question of how do you explore this to understand what you’re dealing with, right? Where would you begin that sort of exploration?

[Jerich Beason] Yeah, I think the first gentleman put it pretty clearly – what are the use cases, how do we plan on leveraging generative AI. Of course once again you’re going to have your public AI model approaches. But what are we doing to level up our organization, and that’s usually going to be something that we’re building privately for ourselves.

What are the complaints?

9:01.466

[David Spark] Kristen Bianchi of Threatrix said, “Today GitHub Copilot is behind an average of 46% of a developer’s code across all programming languages.” I don’t know where that comes from, but all right. “Where is this open source code derived, and what licenses are attached?

I have CISOs reaching out to me each week with concerns because their devs are using these tools but are completely blindsided by their inability to locate the original author within these code snippets.” Yuri Soldatenkov of Kemper Development Company said, “A big issue is being able to verify human versus AI speech similar to how we have phishing resistant MFA today.

We need to have AI resistant verification.” Well, that, I’m going to tell you…that last one is the $64,000 question, isn’t it, Geoff? AI resistant verification.

[Geoff Belknap] Kind of. I’m really kind of unsure of how I think about this. I’m going to tell you right off the bat I agree, it would be great especially at this phase of AI discovery that we’re on, this journey, to have something that can detect whether I’m interacting with an AI or not.

But the security practitioner in me just says it shouldn’t matter. My process or my infrastructure should be robust enough that regardless of whether it’s AI or a human trying to pull one over on my helpdesk technician or my login services, I should be robust enough to resist if it’s fraud.

I think this is a reminder that AI accelerates all these timelines of where we thought, “Oh, we don’t have to think about that technology…protecting that technology a certain way because it’s going to be years before anybody can successfully attack us at scale.” And I think AI is the thing going, “Nope, we can do at scale attacks cheaply and easily now.” So, it’s time to go think about all those things you sort of wrote off in your risk assessments.

[David Spark] Is there…? Maybe, Jerich – it seems like kind of the fun way to deal with this, and what a lot of people in security suggest, is red teaming. Maybe a little AI red teaming effort would be a valuable effort. Yes?

[Jerich Beason] I can see that in the future. And if you really look at AI, at the end of the day it’s an efficiency and quality accelerator. We use it for ourselves. But our adversaries, they’re going to do the same thing. We’ve already seen it with WormGPT.

Just like our organizations, the bad guys are using it to their advantage, and the scale of their attacks is going to grow. The accuracy and the agility of their attacks are going to grow. But I actually predict we’re going to start seeing some startups combating AI-based attacks with AI-based attack disruption.

And the real question is who’s going to win that innovation battle. As an eternal optimist, my money is on the good guys.

[David Spark] As it should be. And yes, we at CISO Series are…because we’re looking at AI for production models. But we don’t know what we don’t know yet, so we are just doing this sort of slow education process, which we have found to be the best tool for us.

But at the same time…and one of my producers actually brought it up…says we can’t just sit here and educate ourselves and not question how we’re going to use it. So, there is this balance, Geoff, I’ll throw it to you, of, yes, educate yourself but keep questioning yourself how to use it.

Have you tried that?

[Geoff Belknap] I think every new technology is like this. If I go back and think about things that were very transformational at the time like Napster, and BitTorrent, and Bitcoin, they all kind of went through this process where up front we went, “Wow, this is amazing.

Let’s think of all the promise.” And then there was sort of the gradual descent into like, “Well, there are tradeoffs in these things.” But in the meantime, people didn’t stop and wait for it to be perfect before they thought about how this could really create a lot of value for them as a business.

Now, there was some… Like with Napster, for example, there were some downsides, and regulation and legal people got involved. But the whole idea of peer-to-peer sharing, that exists in almost everything we use today in one way or another, just not in a way that violates copyright law.

So, I think we are at that phase with AI right now where we should be leaning into it. We should be thinking about how this will transform things for us as organizations that need to do work where this can add value. But I think you’re exactly right.

You always got to keep in the back of your mind like, “Where’s the line on this, and where is the risk?”

[David Spark] I think the Napster example was a great example to use because I think about Napster… And in fact I was at ZDTV. I was the first one to actually break the Napster story when I was working at ZDTV at the time. And what that showed is there is severe demand for digital distributed content, specifically at that time music.

Because it was easy at the time. It was very hard to distribute video at that time. And today, we have all these music services, most notably Spotify, which would not have existed if there wasn’t that sort of demand breakthrough that Napster allowed us to see.

ChatGPT, Jerich, is that demand breakthrough right now. Although it’s not an illegal service or a service perceived to be illegal like kind of Napster was at the time. What do you think?

[Jerich Beason] Yeah, I completely agree. As an industry, we’re sometimes slow to adapt. Whether it was BYOD, whether it was cloud, whether it was SaaS. We’ve seen so many trends come and go, which makes us a little bit more resistant to change. But generative AI is different.

It’s more like the iPhone. It’s here, and it’s poised to transform our organizations whether we secure it or not. That train has left the station, but it hasn’t reached full speed yet, so it’s our time to hop on now.

[Geoff Belknap] I think that’s such a great point, because it is also so, so obvious up front that this is going to make big strides and big differences in every business and every organization. It’s just a matter of figuring out how. So, I think to Jerich’s point, we can’t say no.

We have to figure that out now.

[David Spark] Yeah. And I think it’s interesting. It’s the one time everyone is in agreement. Like, “Yeah, this is it.” The question is how, and we don’t know. But I think what scares everybody… Because we’ve just watched it in the past year. I think about the image programs like MidJourney, which I was introduced to just over a year ago.

The speed these things are accelerating is a little scary.

[Geoff Belknap] It is. But this is why I’m in this place. This is why I’m in this space specifically. I’m an engineer at heart. I’m an entrepreneur at heart, and I want to find the intersection of where things like this help a business and where we can protect ourselves and others from being exploited.

I think there’s no better place to be in the service of others than to figure out how do you help companies adapt transformational technology while protecting people that could potentially be harmed by it.

Sponsor – SpyCloud

15:56.600

[David Spark] Before I go on any further, I want to share some really interesting research from our sponsor, SpyCloud, about what we’re missing when it comes to ransomware protection – what predicts the likelihood of an attack? So, the team at SpyCloud has pored over the data from ransomware attacks, and what they found should give you goosebumps.

And get ready for this, listen, nearly a third of ransomware victim companies this year were infected with info stealer malware beforehand. Okay, so you may have heard of some of these info stealers like Raccoon Stealer, Vidar, RedLine. SpyCloud found that these stealers increase the probability of ransomware even more.

So, if they’re in your system, chances are pretty high ransomware is coming next.

So, clearly we all need to pay closer attention to info stealers as an early warning signal for ransomware. SpyCloud specializes in recapturing the data stolen from info stealer infected systems and alerts your team to take action before compromised authentication data can be used by criminals to target your business.

Now, my favorite thing about their solution is that you get data that’s actually actionable and relevant to your business, and it feeds into your existing security tools for fast remediation. It’s pretty crazy what these folks can tell you about your existing info stealer exposures.

They’ve got a free tool you can use to check it out at spycloud.com/ciso. So, just go to spycloud.com but add the /ciso. You can do that. Be sure to go there, grab the new research and check your exposure so you can act on it before the criminals do.

Remember, that’s spycloud.com/ciso.

How do we approach governance?

17:53.964

[David Spark] Ogaga Umukoro of Newtopia Inc. said, “Organizations need an AI policy. There need to be specific guidelines on what not to use generative AI for. These LLMs are basically spitting out the information we feed them. In a matter of time, proprietary information will be fed to the public.

We need to redefine our security goals to include tech tools such as ChatGPT.” Doruk Yalcinsoy of CyberArrow said, “Let’s not forget the importance of user education and awareness. Training employees about AI related threats and safe practices can significantly reduce the risk of human induced vulnerabilities.” And lastly, Eric Silberman of USDA said, “NIST published an AI risk framework with lots of links, pages, guidelines, articles, ideas, and checklists.

It’s a whole program.” So, somebody has already written something up here, so how far can we get down this AI policy? It sounds like it’s going to be pretty much a living document. Yes, Jerich?

[Jerich Beason] Yeah, I’m absolutely a proponent for an AI policy. It should precede the training. I would also suggest potentially having two different policies – one for your public AI consumption and another one for your private developed models. The policy needs to define the approved use cases and the measures needed to leverage AI in a risk mitigated manner, especially how the organization will enforce it.

Eric brought up the NIST AI risk framework, which was established specifically for those building AI. We talk about public AI risk all the time, but the policies don’t talk about how you’re going to protect your models. It’s extremely difficult to check all the boxes to patent your models, so the burden of protecting that IP really falls on cyber.

And you’re not going to have a lot of legal support outside of maybe privacy. So, measures like mitigating the risk of model theft, or model poisoning, or model drift, or data leakage. These are all things that you’re going to have to think about when you’re building your own models, and most cyber professionals have never had to do anything like that.

Not to mention they still have all the same stuff – web application firewalls, monitoring, access control. All the usual things are still on their list when it comes to securing newly built AI models.
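To make one of those terms concrete: model drift just means the model’s outputs (or the data it sees) shifting away from what it was validated against. A minimal monitoring sketch, assuming you log model output scores and keep a baseline sample, might look like the following – the function names, thresholds, and example data here are illustrative only, not from any specific tool mentioned in the episode:

```python
# Minimal, illustrative drift check for a model you host yourself.
# Real deployments would use a monitoring platform, but the idea is the same:
# compare recent output distributions against a trusted baseline and alert.
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Measure how far the recent score distribution has drifted from the baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid log-of-zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Stand-in data: baseline scores from validation, recent scores from production logs.
baseline_scores = np.random.beta(2, 5, size=5000)
recent_scores = np.random.beta(2.6, 4.2, size=1200)

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.2:  # a common rule of thumb: PSI above ~0.2 suggests significant drift
    print(f"ALERT: model drift suspected (PSI={psi:.3f}) - trigger review/retraining")
else:
    print(f"OK: PSI={psi:.3f}")
```

The same pattern extends to input features and to simple canary checks for leakage; the point is that once you build the model yourself, this kind of monitoring lands on the security team’s plate alongside the usual controls.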

[David Spark] Geoff? By the way, you could ask ChatGPT, “Create a good use AI policy for me.”

[Geoff Belknap] [Laughs] I’ll go over to Bing and figure this out afterwards. But it’s such a good thing that Jerich and I’s budgets always go up with every additional threat and that we’re not just figuring out how to manage this along with everything else.

[David Spark] That is great. By the way, is your budget tied to the threat meter? So, just you get a direct correlation between the two of them? Geoff?

[Geoff Belknap] No. No, it’s not.

[David Spark] Oh, so you were being facetious when you said that.

[Geoff Belknap] I don’t know if you’ve heard of these grammatical devices, things called sarcasm or cynicism and irony. No…

[David Spark] Oh, that’s what you were using. Well, because what you described actually sounded wonderful.

[Geoff Belknap] It would be wonderful. And hey, board members…public company board members, if you want to talk about how to balance investments in security based on current threats, come look me up. I’ll do your CISO a solid. No, really I think this is hard, and I think Jerich said it exactly right.

There are really two categories that we, as security leaders, have to think about. There’s our consumption of AI models, which I think right now is our third-party risk approach to using any kind of SaaS service or integrating any kind of third-party library or service into a product we’re already building.

We’re already pretty good at that. It’s the latter that we’re not great at yet in tech, which is building our own models. We’re building our own technology and piping it into these things. The thing that most of us are good at, though, is taking an approach that’s very practical and deliberate, thinking about building something trusted by design.

I know…

[David Spark] That’s a key line right there.

[Geoff Belknap] This is a really important part is you’re going to get it wrong. But the idea is you approach it trying to build the most trustworthy thing you can build, which means both building trust in the sense of you’re not going to abuse anybody’s data, you’re not going to misuse it in a way that you didn’t articulate or give them choice about, and trust in the sense that if it starts to harm people they have a way to flag that, or report it, or manage that and that you’re going to respond to that.

And I think at a base level, that’s what people are doing as they build these things now. That’s what scrupulous people are doing as they build these things now, and that’s the part where we’re going to learn as we go. And I’m going to be frank, plenty of companies are going to screw this up, up front.

I think the NIST framework is a really great way to sort of assess what other people are doing. Asking hard questions about any product that you’re building or using is really important right now. But just in general, I still go back to we’ve built new things before.

This one feels very transformative, but at the end of the day we just have to root ourselves in principles.

[Jerich Beason] Geoff, I love that you used the word trusted. I built an AI security framework for people that aren’t going to have that big NIST document, and it was actually called TRUSTED. The T is for transparent, R is for robust, U is for unbiased, S is for standardized, T is for traceable, E is for ethical, and D is data driven.

And combined, if you take that approach it’s actually a lot easier to structure your approach to mitigating these AI risks.

[Geoff Belknap] A fantastic opportunity to build something safe with something Jerich has helped with.

What’s the next step?

23:31.411

[David Spark] Varun Grover of Veritas Technologies said, “Generative AI can be a double edged sword.” Like a lot of new technology. “But by proactively leaning into security and compliance we can ensure the longevity of this breakthrough technology. Something that I’ve been thinking about is the future of predictive cyber security that can preempt cyber attacks proactively versus being reactive.” Aw, “Minority Report,” Isaac Asimov, same thing.

We’re going to be jailing people before they commit crimes. Is that it? No, probably not. Chad B. of United Patriot Coin said, “Consider an AI tool that monitors our communications in real time, intercepting vishing or social engineering attempts on employees.

It’s akin to the spam and phishing email detectors we use now but applied to live voice conversations.” I think this could be an opportunity. Because if they’re using AI to create this stuff, we can use AI to detect it being that we could create the same thing that they’re trying to get us with.

What do you think, Jerich?

[Jerich Beason] Completely agree with you. We kind of talked about this earlier. We’re in multiple different AI arms races. Nation states want to be the leaders. Organizations want to out digitally transform their competition, and cyber teams are trying to stay ahead of that looming threat.

Today AI is the dumbest it’s ever going to be.

[David Spark] That’s a good line. It’s the dumbest it’s going to be. And then the same thing the next day – the dumbest it’s going to be that day, because it’s only getting smarter. Good point.

[Jerich Beason] It’s growing in intelligence and capability over time. And as it evolves, those use cases are going to change with it. And if the knowledge and the skills of the humans harnessing it don’t, we’re going to be at a disadvantage. So, I really challenge every CISO to leverage every capability at your disposal and get your teams up to speed learning how to extract the value out of this disruptive technology.

Mark my words, November 30th, AI versus AI cyber duels will happen one day, and humans will continue with their one on one battles on the sidelines. But this is the future of cyber war.

[David Spark] You know what? Here’s the interesting thing… And I’ve mentioned this before on one of these shows – I keep reading about how if you put private information in, they’re going to hack and get the information out. And I go, wait, no. If you put private information in, normal users will get it out.

That’s the thing with these generative AI programs is when your data goes in, it comes out just by normal use, and that’s very different than what we’ve dealt with before, Geoff.
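One way to read that in practice: if prompts to a public model are just another data egress channel, they can be screened like one. Here is a minimal, hypothetical sketch – the patterns and the send_to_llm placeholder are illustrative assumptions, not any vendor’s actual API – of blocking obviously sensitive strings before a prompt ever leaves your environment:

```python
# Illustrative pre-prompt screening: block prompts containing obvious secrets
# before they are sent to any external generative AI service. Patterns are
# examples only; real programs pair this with data classification and DLP.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings). The prompt is blocked if any pattern matches."""
    findings = [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]
    return (len(findings) == 0, findings)

def send_to_llm(prompt: str) -> str:
    # Placeholder for whatever external API the organization has approved.
    raise NotImplementedError

user_prompt = "Summarize this config. aws_key=AKIAABCDEFGHIJKLMNOP"
allowed, findings = screen_prompt(user_prompt)
if allowed:
    send_to_llm(user_prompt)
else:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}")
```

A regex filter obviously won’t catch everything that counts as sensitive, but it illustrates the control point: the decision has to happen before the data reaches the model, because once it’s in, ordinary use can bring it back out.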

[Geoff Belknap] Yeah. And there have been some really interesting examples recently where you can trick some of these chatbots to just tell you about bugs, about keys, credentials, all kinds of things. And it’s really… One has to respect the craft involved in crafting prompts to get them to sort of give these things up.

But that’s why this is unusual. Usually if you’re interacting with something, it’s a one way. It’s a post. I’m posting something on my feed, or I’m entering a chat. I’m interacting with a human. But, again, I think to Jerich’s point, the AI is really dumb right now.

AI is going to get better, but so are the bad guys. And so it’s going to be interesting to see this ladder its way up. I do think there’s going to be a lot more uses of AI for good than there are for evil, and my sort of faith in humanity is such that we’re going to find a way to reasonably battle a lot of these emerging threats.

But right now it’s a little scary.

[David Spark] It’s the unknown.

[Geoff Belknap] It’s always the unknown.

[David Spark] I don’t think we’ve had a situation where the looming unknown is as big as this one, and this is in cyber. Yes? What do you think?

[Jerich Beason] There was a period of time when COVID first occurred, and I didn’t know how bad it was going to be when people rushed home. But I knew that that was going to be for a short period of time until we figured it out. I don’t know if there’s a figuring it out with AI.

Because as it gets smarter, we need to hurry up and get smarter with it. And that is a fun challenge but also scary at the same time.

[David Spark] I will say this – that I haven’t gotten as smart as AI has in the past year personally. I have not.

[Geoff Belknap] [Laughs]

Closing

27:54.278

[David Spark] Now, Jerich, by the way, we’ve come to the portion of the show where I ask you and Geoff which quote was your favorite and why. And you may choose one that AI generated if you so choose. Which quote is your favorite?

[Jerich Beason] I want to go back to the quote that was about Copilot and how 46% of our code is being developed from Copilot. Not necessarily my favorite, but I definitely want to call out the fact that that’s a really high number, but let’s assume that’s accurate.

The developer using Copilot should still be held accountable no different than the lead pilot on your plane going from San Francisco to LA. If it has issues, he’s the lead. Developers cannot blame Stack Overflow when it gives them bad advice, so they can’t blame AI Copilot either.

It’s really no different except for GPT is faster but less accurate on a consistent basis.

[David Spark] Yeah, if you’re using a tool, you’ve got to check the validity of whatever the tool is – Copilot, Stack Overflow, whatever the heck. Geoff, what’s your favorite quote and why?

[Geoff Belknap] I’m going to go with Sandesh, who started us all off talking about the first step is to understand how LLMs are already being used. And I’m just going to underscore this, my emphasis, and they definitely are even if the security team doesn’t already know about them and prioritize the risks that are critical for your use cases.

I just go back to basics here. If you’re in an organization, and you’ve said, “Nope, nobody should use AI until we decide what the policy is,” I’ve got news for you. Everybody is already using AI, and they have no idea what the policy is. So, you have to start somewhere simple.

And if you absolutely are working on something critical where you don’t want people using external AI, you’re going to have to do what Jerich talked about earlier and provide them an internal tool to use. Everybody is using it. And if you think they’re not, they are.

Provide people a paved path. Provide them an onramp to do things safely. It’ll go along much better than just trying to say no.

[David Spark] Excellent point. Well, that brings us to the end of the show. I want to thank our sponsor, SpyCloud. Thrilled to have them back onboard. Go check out what they’ve got at spycloud.com/ciso. Get your free tool to check your info stealer exposure, and check out the report that will give you lots of great information about the early warning signs that ransomware is coming.

Jerich, I want to make a little plug for you. You have an awesome LinkedIn Learning class. I’m going to have you explain it all. I have seen a little bit of it. It’s pretty darn spectacular. And Jerich… And I’m going to take credit here for all of your success on podcasts and in LinkedIn Learning.

May I take credit for all of your success?

[Jerich Beason] Anything that’s occurred over the last three years was solely because of David Spark.

[David Spark] Thank you. Oh my God, we have that quote.

[Crosstalk 00:30:44]

[David Spark] We have that quote. We’re playing it over and over again.

[Laughter]

[Geoff Belknap] I didn’t hear him say anything about sharing residuals with you though, so…

[David Spark] Ah! Damn. Jerich, tell us about the LinkedIn Learning class.

[Jerich Beason] Absolutely. So, we’ve talked about it. AI is here. As security professionals, it is our job to figure out how to enable the business to use it securely and mitigate the risks of it. And I created a LinkedIn Learning course that should be available, if not now, then a few weeks from now. It gives you the tools to write your AI policy as well as mitigate both public and private risks, and I tried to make it as accessible as possible, from the beginner in cyber security all the way up to the executive who just wants to better understand it.

It’s made to enable all of you guys to do AI securely.

[David Spark] That’s awesome, and I’m so glad you came on our show to talk about it. That you came to us with a fountain of information on this very topic. So, we greatly, greatly appreciate it. I want to thank everybody else for your contributions, as always, and for listening to Defense in Depth.

[Voiceover] We’ve reached the end of Defense in Depth. Make sure to subscribe so you don’t miss yet another hot topic in cyber security. This show thrives on your contributions. Please write a review, leave a comment on LinkedIn or on our site, cisoseries.com, where you’ll also see plenty of ways to participate including recording a question or a comment for the show.

If you’re interested in sponsoring the podcast, contact David Spark directly at [email protected]. Thank you for listening to Defense in Depth.