Something Stinks In Here. I Think It’s Your Code.

The problem isn’t our users, it’s you and your past due code. Did your code step in something? Maybe it’s tainted or expired. Whatever it is, it smells and you need to clean it up.

Subscribe to CISO Series podcasts - CISO/Security Vendor Relationship Podcast

This episode is hosted by me, David Spark (@dspark), producer of CISO Series and Mike Johnson. Our sponsored guest this week is Brian Fox (@brian_fox), co-founder and CTO, Sonatype.

Got feedback? Join the conversation on LinkedIn.

Thanks to our episode sponsor, Sonatype

With security concerns around software supply chains ushered to center stage in recent months, organizations around the world are turning to Sonatype as trusted advisors. The company’s Nexus platform offers the only full-spectrum control of the cloud-native software development lifecycle including third-party open source code, first-party source code, infrastructure as code, and containerized code.

Full transcript

Voiceover

Ten second security tip. Go!

Male Voiceover

Modern software supply chain attacks are focusing on your developers and your development infrastructure. If you’re only focused on scanning the things you ship, you’ve missed the whole game.

Voiceover

It’s time to begin the CISO Security Vendor Relationship Podcast.

David Spark

Welcome to the CISO Security Vendor Relationship Podcast. My name is David Spark, I’m the Producer of the CISO Series. And joining me as my co-host, again, is Mike Johnson. Mike, that moment we all wait for, the sound of your voice. The first moment we get to hear your voice and it sounds like-

Mike Johnson

It sounds like this, David. I am here and, actually, my cat has joined me for this particular recording as well.

David Spark

Is your cat going to be offering any security advice?

Mike Johnson

She’s asleep, so she’s busy.

David Spark

Probably not. We’re available at CISOseries.com. We’re also available on the sub reddit CISOseries. Our sponsor for today’s episode is Sonatype, and they are also responsible for bringing our guest today. Very excited. So we’re going to hear a lot of very interesting things about mostly cybersecurity hygiene, which, by the way, talking about fundamentals in cybersecurity, is something we hammer a lot on this show. So very much looking forward to this discussion. But first I want to bring up our headline show. Mike, I’m bringing on a new reporter to the headline show.

Mike Johnson

Oh, great.

David Spark

It hasn’t officially been announced yet, even though you should have heard that person by now, but I want you to know the show is doing unbelievably well. We have quadrupled our traffic since launch back in late August.

Mike Johnson

I love the show.

David Spark

No one’s delivering it in the format we’re delivering it, at the frequency we’re delivering it, and I think the reporters are doing an amazing job right now. Let me call out the two reporters. The regular reporters have been going strong since the beginning and that’s Steve Prentice and Rich Stroffolino, who are amazing. And then, if you join us on Thursday evenings at 7 pm Eastern, 4 pm Pacific, we do a week in review show where, you’ve been on this, we have an expert essentially providing context for the stories of the week. It’s kind of fun, isn’t it? You sort of read the news stories, give your opinion?

Mike Johnson

Yes, it’s a lot of fun and it’s also live. There’s really that opportunity for participation from the audience, but I had a lot of fun the one that I was on, because it really is that opportunity to reflect on the week, and what are the stories that happened that week and talk about them a little bit, and be able to dive a little bit deeper into, “Hey, here’s what happened. Here’s what I thought about it.” It’s a great show. I really enjoy, it, I really encourage folks to tune in live and participate.

David Spark

It’s one thing to get the news, it’s another thing to get some context around the news, and we’re trying to do just that with our Thursday show. So you can come and join us live but, if you don’t join us live, not a big deal if you subscribe to the regular podcast feed; that week in review show is part of the podcast feed. It’s just 20 minutes long as compared to our six-minute shows that are daily, so that’s perfect for when you get up in the morning and just want to know what the hell’s going on in the world of cybersecurity. Boom. You’ve got it in six minutes, and I think that’s why it’s been doing so well, because it’s short and easy to consume.

Mike Johnson

I listen to it every day on my morning walks now.

David Spark

I am, by the way, stunned that people still listen to episodes from back in August and September, and I believe that has to do with how well the SEO has done, being that we literally have essentially all the copy for the stories put in there. By the way, I should mention, you could actually just subscribe to the daily newsletter of it, if you want. If you just subscribe to our newsletter on the site, there’s a check box that says, “Oh, I want the daily version too,” which is just the headlines every day. Alright, with that said, let’s bring in our guest. That’s enough of talking about the headline show. This show is called the CISO Security Vendor Relationship Podcast, which is a completely different show.

Mike Johnson

Maybe we should get to this show then?

David Spark

We’re going to stay on this show. I’m very excited to have this person on, because we’re going to talk a lot about development in general, and security around development, on this episode, and this person’s perfect to talk about it. It’s our sponsored guest, Brian Fox, the CTO of Sonatype. Brian, thank you so much for joining us today.

Brian Fox

Yes, thanks for having me.

Maybe you shouldn’t have done that.

00:04:17:07

David Spark

How do you know if your DevSecOps effort is going to fail? On CSO Online, Chris Hughes outlines seven warning signs for just this, and they include failure to establish a learning culture, neglecting cross-functional education, neglecting to communicate business value, being too risk averse and fearing failure, tool sprawl and fragmentation, weak security culture, and thinking you can “buy” DevSecOps. So I know, Mike, that you don’t like the term DevSecOps, but I just used it to simplify an explanation here. We can get into that, or not. But what I am most interested in is I want to play a what’s worse game. Besides hating the term DevSecOps, what’s the worst thing on this list?

Mike Johnson

Well, I mean, can’t we make using the term DevSecOps the eighth sign? If you’re using that, maybe it’s destined to fail. Arguments aside around that term, I do think it’s a good list. This really is a set of warning signs that you should keep an eye out for, and I think, to pick one.

David Spark

Yes, if you had to pick one, like, “Oh my God, please not this?”

Mike Johnson

I really do think the last one about buying DevSecOps. Secure DevOps is a culture. It’s not a team. It’s not a tool. You can’t buy your way into secure DevOps. So that one really is the one that’s the stand out for me for the worst from the list. If I look a little bit further along the list, a lot of these are around communication and culture and warning signs around those.

David Spark

Which is kind of what all of DevOps is, a lot of it, it’s just making sure everyone’s in the flow, right?

Mike Johnson

That’s what you need in order to move at the speed of DevOps. You have to have clear paths of communication. If you don’t have that, if you don’t have the culture that agrees upon here’s how we’re going to operate, it will fail. And, beyond DevOps, I really think the root of a lot of our issues around security is failures of communication and culture as well, so it’s really not unique to DevOps. It’s not unique to security. We’ve got similar challenges there.

David Spark

Alright, I’ll throw this to you, Brian, let’s begin with your least favorite from the list.

Brian Fox

I only have to pick one? I think they’re all pretty important from that perspective. I think one that might be missing, though, is the rest of the organization’s buy-in. You know, it’s not only about DevSecOps. It’s like trying to do Agile or Scrum in a world where the rest of the organization would like waterfall road maps. That’s bound to fail at some level as well. So I think you can extrapolate that out to a DevSecOps process and, if you don’t have that buy-in on how you manage across the board, how you’re going to respond to all the things, and how you communicate about what’s going on and when it’s going on, it’s not going to work. So I agree with Mike. It is mostly about culture and communication. It always is. Everything involving humans always comes back to that.

Someone has a question on the cybersecurity subreddit.

00:07:27:04

David Spark

On the cybersecurity subreddit, a redditor asks, “As an analyst, how do you justify your existence? When things are going well and as safe as one might expect, what metrics do you turn to for monthly reporting to help add value slash justify the work you are doing as an analyst?” So I’ll begin with you, Brian. What’s your advice here?

Brian Fox

Yes, I think there is the start of a good discussion in this thread, and I think you need to be constantly evolving your security practice. I think there was one comment in there that talked about that, that five years from now what you’re doing now is going to be obsolete. And so, to justify it, you need to be showing a continuous improvement. You’re never really going to be able to show all the times you saved things. You don’t count how many times your airbags don’t go off, you still are glad they’re there. But, in a world where everything is continuously moving forward, you need to be talking about how you are improving, and by what metrics, in what areas. I think that is what you need to be focused on.

David Spark

Do you know what metric specifically, besides the improvement metric? Is there something else that an analyst should be going on?

Brian Fox

I think that’s going to be very organization-dependent. It’s going to depend upon where is your assessment showing that you’re the weakest? What can you be doing to improve whatever element that is – whether it’s not password rotations like you guys talked about in some of the other ones – but things like that, whether it’s internal security or software security. What are the things that your audit is showing you’re deficient, and what steps are you going to take to improve it? How do you measure yourself on those steps?

David Spark

Mike, I throw the same question to you. What is your advice for this redditor?

Mike Johnson

First of all, this was a really salty thread. I was really surprised. It was a rather innocent question, but hopefully we can offer some help to this person. I liked what Brian was saying about assessing your weaknesses to help you understand where to drive your improvements. That is a really great place to start, especially when things aren’t on fire. That gives you that opportunity to look around, to survey your environment, to understand, “Hey, these are a couple of areas that we can work on, that we could improve.” That then gives you some obvious measures and metrics. First of all, depending on how rigorous you want to be, you can even have metrics about your assessment practice: we’ve assessed over here and determined what we need to do in this area, but we haven’t assessed over in this place or in this place. You have that go-to of mine, inventory: these are all the areas or exposures or applications in your environment. We have done our assessments of those, we have assessed for our weaknesses, we know we’ve got ten out of a hundred weaknesses over in this place, so we now know what our level of improvement is over there. We’ve got this other thing where we’ve got 99 out of 100 weaknesses. That then says, okay, we now even have our prioritization of what we’ve surveyed, and you can really attach numbers to all of these. That does give you a set of metrics that can show improvement over time. That’s the kind of thing that you really want to be looking at when things are not on fire, and when you’re needing to show improvement over time.

It’s time to play, “What’s Worse?!”

00:11:16:10

David Spark

Alright, Brian, I know you know how to play this because you’ve heard a few episodes of this very show. It is two bad scenarios, and you’re not going to like either one. So here is the scenario. I’ll make Mike answer first. By the way, Brian, just pointing out, I always like it when our guests disagree with Mike, but you are not required to do that. No pressure! Here we go, this comes from Filip Gontko over at Netsuite and he says this, “You have a customer that pays you to secure their application, and there is a potential merger with another application which would bring a lot of money, but that app is totally insecure. What’s worse, go for the merger and become very vulnerable, or do not?”

Mike Johnson

So, trying to decompose this a little bit, you’ve essentially got the old business risk decision of do we want to take on more risk and potentially make more money, or do we want to sit where we are and take on less risk and make less money? I think the reality is this is less going to be a decision for the CISO to make and more for the business to make.

David Spark

But, hold it, don’t you need to contribute to this decision also? By the way, don’t try to get out of answering the What’s Worse question!

Mike Johnson

You know me, David, I always answer them. Really, your role here is to educate the business. At the end of the day, you have to make a recommendation, and that’s where I will pick one of these. You go to the business and say, “Look, here is our analysis. We have looked at this. This is the amount of additional risk that we’ll be taking on in order to make this additional amount of money.” In general, I am always going to lean towards the side of supporting the business and saying, “We’re here to make money, I recognize that. Here are all of the risks, here are the mitigations, here’s what we can do about it, here’s how much those are going to cost, and that is what we should go for.” I really think, of these two, the first one, the play it safe option, feels like the worst scenario to me.

David Spark

Good answer. I like it, and I like how you walked us all through that. Very good job. Brian, same question, for you.

Brian Fox

I feel like Mike stole my answer. My first thought was that it’s not a security question, this is a business question, so, as much as you’d like me to disagree, I think I do agree. In my own experience, the job of the technologist, the security person, the product manager, whatever you are, is to provide the facts to the business as dispassionately as possible so it can make the decision, because, being safe from a security perspective and staying on an old application that might become irrelevant in a year, is that winning? I don’t think that’s winning. Clearly, getting hacked by an insecure application is also not winning. But you need to be able to provide the analysis: how bad is this new application? How much is it going to cost to actually fix it? Can you mitigate it? Can you put compensating controls in place to help quarantine the risk? Those are conversations that the business has to undertake to decide what is the right thing to do. In so many of these things, there is not always an obvious answer.

David Spark

No. Alright, so you’re agreeing and, by the way, I’m going to agree with the two of you as well and your rationale. Excellent job both of you.

Please, Enough. No, More.

00:14:47:06

David Spark

Today’s topic is cybersecurity hygiene, but specifically in the software chain. So, Mike, I’ll start with you. What have you heard enough about with software security hygiene, and what would you like to hear a lot more?

Mike Johnson

I’ve thought about this one and the Zoolander quote of “The software supply chain is so hot right now” just came to mind. This needs a Zoolander meme associated with it. It really does seem everyone is talking about it, but what I’ve heard enough of is just the talk. People are only talking about it as a problem, they’re not talking about solutions. I would really like to hear less of “Hey, everything’s on fire, we don’t have solutions,” and more around solutions. You know? How can engineering teams take advantage of the tooling needed? What is that tooling? What can engineering teams do to help out in this, to make sure that folks understand it’s not just a security problem? That is what I would really like to hear more about, the engineering side of it as well as the security side.

David Spark

More answers. Alright, one person who might be able to give us an answer is our guest here, Brian Fox. Brian, let me ask you a question, let’s start with what have you heard enough about before we get to what you would like to hear a lot more of. What have you heard enough about with regard to cybersecurity in the software supply chain?

Brian Fox

Not enough, honestly. We’ve been talking about this and building tools in this area for over a decade, so I feel like its time has come. First, it was about open source security and the components there. In the early days there was a lot of denial. Security teams would say, “I don’t have to worry about security of open source, I have a security team and a firewall for that. I just have to watch out for the GPL.” It took several Struts vulnerabilities and, ultimately, Equifax before people accepted, “Oh, that’s a thing.” Unfortunately, SolarWinds is what it has taken to get everybody to talk about the rest of the supply chain, that it is a bigger problem. So, I’m not tired of talking about it yet, I talk about it all the time, because I feel like so many people still do not recognize that this is what’s going on. The attacks have been evolving from the early days of just exploiting vulnerabilities, you know, the old fashioned bugs that can be exploited to do bad things, to attacks on the open source developers themselves, trying to steal their credentials, and then the final part, using those credentials to publish malicious components. What is interesting, as I mentioned in my ten second tip, is that the new attacks are focused actually on the developers. In DevSecOps, everybody likes to talk about Deming and how Deming helped the auto industry in Japan rebuild and make better, more efficient and cheaper cars by focusing on the supply chain. That is great, you need to do those things for your software supply chain. But those practices alone do not secure the factory. They’re about making better cars, they don’t stop a suicide bomber from blowing up the factory. That is the equivalent of what’s happening right now in software development: the malicious attacks are not trying to find their way into the software that you distribute to attack your users, they’re trying to exploit your infrastructure, and there are a lot of examples of that happening even just this year: SolarWinds, Codecov, the Verkada camera incident. All of those are focused on development infrastructure to exploit for additional gain.

David Spark

By the way, we’re going to get a lot more into that very specific issue in our next segment, but getting into the hygiene part of it, what are the hygiene elements that you’re most focused on?

Brian Fox

Helping developers make better choices about the components they’re using can help solve a lot of these problems. In the early days, it could boil down to people picking components because their buddy said so, or they saw it on Reddit, without visibility into how often that project has security vulnerabilities, how bad they are, whether the project has good practices, whether it’s one guy and a dog or a team, whether there’s a foundation behind it. Historically, that information has been difficult for developers to get, hence they go to Google and ask a buddy. That is the first part of it. That can also help with some of the later things around typosquatting and the other cases where they’re inadvertently grabbing things that aren’t even real projects. So, if you can’t tell whether a project is secure or not compared to its peers, you’re certainly not as likely to notice that you’ve actually grabbed a counterfeit one. So that is the part of the hygiene we’re really focused on.
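
As a concrete illustration of the project-health signals Brian is describing, here is a minimal sketch (not Sonatype’s tooling) that pulls release cadence and maintainer information from the public PyPI JSON API. The use of the requests library and the example package name are assumptions made purely for the example.

```python
# A minimal sketch (not Sonatype tooling): pull basic project-health signals
# for a package from the public PyPI JSON API -- the kind of information
# developers rarely check before adopting a component.
import datetime
import requests

def package_health(name: str) -> dict:
    """Fetch release cadence and maintainer info from https://pypi.org/pypi/<name>/json."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    resp.raise_for_status()
    data = resp.json()

    # Upload timestamps across all releases give a rough sense of cadence.
    uploads = [
        datetime.datetime.fromisoformat(f["upload_time"])
        for files in data["releases"].values()
        for f in files
    ]
    return {
        "name": data["info"]["name"],
        "latest_version": data["info"]["version"],
        "release_count": len(data["releases"]),
        "last_upload": max(uploads).isoformat() if uploads else "never",
        "maintainer": data["info"].get("maintainer") or data["info"].get("author"),
    }

if __name__ == "__main__":
    # Example: inspect a well-known package before deciding to depend on it.
    for signal, value in package_health("requests").items():
        print(f"{signal:>15}: {value}")
```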

David Spark

As I understand it, Mike, this is where the term guard rails comes in. You create an approved library; before any developer grabs a snippet of code, it goes through security peer review. But, in a case like that, don’t you need your security people to have some level of coding knowledge?

Mike Johnson

First of all, it helps. There are certainly use cases where having developers on the security team ensures that the code being brought into the environment is more secure, because they can provide some assistance. But, at the other end of that, it is not really scalable. You cannot just have a human who goes through, line by line, every piece of code that comes in and, even if you did, there’s nothing to say that they’re going to catch everything. Take OpenSSL, for example. We had Heartbleed years ago, and that was really a wake-up call, but that vulnerability was so difficult to find and it lived forever. How many security teams had looked at that code? It is unlikely that many really did a deep dive. So it’s not really effective to rely solely on development experience on your security team. But it certainly helps. And the way that I think it helps is it gives them experience of the gates that you’re erecting for your developers, the paved path that you’re building for them. It gives you that first-hand experience of how usable or painful that is, so that you can go back and refine it into something that brings additional security to the environment but doesn’t slow down your development experience.

David Spark

Brian, I throw this back to you. Is there anything additional in hygiene that we should be thinking about? Let’s exclude what we’re going to be talking about in the next segment.

Brian Fox

I think the guard rails concept is right, and it is not possible to manually create approved lists or deny lists of components. We’re talking about millions of components in the ecosystem. The average organization is using tens to hundreds of thousands of components that change more than four times a year. Nobody is going to keep a list up to date. And then you’re talking about not only the direct dependencies but their transitive dependencies which, depending on the ecosystem, could explode out ten times, 100 times, 1,000 times. In JavaScript, literally, sometimes you pull in one NPM component and it’ll pull along a thousand more. Who is going to update that spreadsheet? You’re not. This is what we’ve been focused on: trying to create ways to do that at scale using automation. It requires a lot of precise data, data that can be generated in near real time, to help those guard rails be actually enforceable. Then it allows your security team, and by the way your legal team, as a lot of licenses are problems, and your architecture team, to define the conditions for which a component is okay or not okay. That is important too, because components will age like milk, not like wine. They will change. So what was okay yesterday is not okay anymore. If you are maintaining a manual approval list, things will be approved forever and people will keep using Struts 15 years after it’s at a level ten vulnerability. Why? Because it’s on the list. That is how you need to approach it, so it’s all sort of notionally part of the hygiene. It’s about focusing on these things and defining how you want to think about what you allow in your software and what you don’t.
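
To make the “conditions, not static lists” idea concrete, here is a toy sketch in which policy is a set of predicates re-evaluated against current component metadata, so an approval can expire on its own. The Component fields, thresholds, and license set are illustrative assumptions, not a real product schema.

```python
# A toy sketch of "conditions, not static lists": policy is a set of predicates
# re-evaluated against current component metadata, so an approval can expire on
# its own. Every field, threshold and license set here is illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Component:
    name: str
    version: str
    license: str
    worst_cvss: float   # highest known CVSS score for this version (assumed input)
    last_release: date  # when the project last shipped anything (assumed input)

# Security, legal and architecture teams each own some of these predicates.
POLICIES = {
    "no critical vulns":   lambda c: c.worst_cvss < 9.0,
    "approved license":    lambda c: c.license in {"Apache-2.0", "MIT", "BSD-3-Clause"},
    "actively maintained": lambda c: (date.today() - c.last_release).days < 730,
}

def evaluate(component: Component) -> list[str]:
    """Return the names of every policy this component currently violates."""
    return [name for name, check in POLICIES.items() if not check(component)]

if __name__ == "__main__":
    # A component that was fine when first approved can fail policy years later.
    struts = Component("struts2-core", "2.3.5", "Apache-2.0",
                       worst_cvss=10.0, last_release=date(2013, 10, 1))
    violations = evaluate(struts)
    print("blocked:" if violations else "allowed:", violations)
```

Because the checks run against whatever the metadata says today, the same component that passed last year can be blocked tomorrow, which is the “ages like milk” behavior a static approved list cannot capture.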

If you’re not paranoid yet, here’s your chance.

00:23:11:09

David Spark

In an eagerness to get things done, your developers are gobbling up code from code libraries as quickly as they can. Malicious intruders are aware of this behavior and they can take advantage of it by making their own code available, or by altering existing code. Now, consultant Alex Birsan conducted an experiment where he typosquatted popular code packages, creating his own versions and making them public. In his version of the code, there would be a call that would alert him that his code was being used. Essentially, it was sending information out to a location the developer did not intend. Birsan was astonished at how many 1,000-plus-person companies were using his tainted code. So it appears the only solution is to create guardrails, as I said here, and only let code snippets be approved by security teams but, as you said, how feasible is this? Then they need to know about coding, or have developers in the security group, and it can definitely slow down development, depending on how you’re handling it. So even if you do a good job, how do you know if your third parties have done their due diligence on their code as well? I ask you first, Brian.

Brian Fox

You don’t. You touch it, you own it, effectively. If you are pulling it into your application, nobody is going to care that the project you pulled in didn’t do their job and your application is vulnerable. It’s on you. That is what I was getting at in the last section about the transitive dependencies. You touch that one NPM package that pulls a thousand more, guess what you just owned? That is important. What is interesting about what Alex highlighted here is that it actually wasn’t typosquatting per se. That’s been happening for a while, typosquatting being you put a package out there with a confusingly similar name and somebody downloads it and it does bad things. What this one was actually doing is it recognized a flaw in the lack of namespacing in some of the ecosystems; NPM, PyPI, Ruby, I think, were the ones that he focused on. So what he figured out is that, by looking at log files, issues, JIRAs and things like this, he could determine what the internal package names for these companies were. Not the third party ones, but the internal ones that they created, say “my-project,” and he realized that if he published my-project to the public repository with a higher version number than the internal one, the tooling would fetch that one instead. So this was actually side-stepping a lot of the other hygiene things that people were talking about, trying to make sure that you’re picking the right third party. Nobody expected that you were suddenly going to download something from the public that was replacing your internal component, your internal module. That is what he highlighted in his research. We’ve been aware of this challenge in certain ecosystems for a while. We run the Maven Central Repository and our history comes from Maven. Maven has a strong namespace where the group ID is the first part of the coordinate. By convention, following Java, that would be the reverse DNS of your company and, when we allow people to publish to Maven Central, we validate that, so you cannot just show up and pretend to be Tesla unless you somehow can change Tesla’s DNS. If that’s the case, they’ve got bigger problems. We do that validation, but what Alex exploited was a bunch of ecosystems that don’t do that validation. In fact, there is no namespace like the group ID that would tie to a company. It’s literally just my-project. Because of that, and because of the lack of validation in these ecosystems, he was able to just publish anything he wanted. Anybody can do that, whether they’re typosquatting or attacking one of these internal names. So it’s become very difficult for companies to defend against. Fortunately, for our customers, we had tooling to be able to help with that, both in the ability to detect this before it was even given a name, dependency confusion. We were able to pick up on the fact that many of these components were doing things that were suspicious, and we were flagging them and blocking development from using them. Also, after it became known and published, we were able to create specific rules, a capability that says, “Hey, we’re going to look at all of the internal packages that you’re using and build that list automatically for you, and any time we see your development try to fetch one of the same-named things from a public repository, we’re going to block it and tell you about it. It’s almost certainly a problem. Maybe it’s a name conflict, but you still need to know about it.” So it is a very interesting thing. I am glad that he has shone a light on it to help people focus on it and get better.
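
One generic defensive check that follows from Brian’s description (a sketch, not the tooling he mentions) is to ask whether any of your internal package names already exist on the public registry, where a higher-versioned lookalike could win dependency resolution. The internal names below are hypothetical, and the check assumes the public PyPI JSON API and the requests library.

```python
# A generic defensive check (not the tooling Brian describes): flag internal
# package names that also exist on the public PyPI index, where a
# higher-versioned lookalike could win dependency resolution.
import requests

# Hypothetical stand-ins for however you enumerate your private package names.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-client"]

def exists_on_public_pypi(name: str) -> bool:
    """True if the name is already claimed on the public index."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200  # 404 means the name is unclaimed publicly

for pkg in INTERNAL_PACKAGES:
    if exists_on_public_pypi(pkg):
        print(f"WARNING: '{pkg}' exists on public PyPI -- possible dependency confusion")
    else:
        print(f"OK: '{pkg}' is not on public PyPI (consider registering a placeholder)")
```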

David Spark

Amazing point. That was excellent. I’m throwing this to you now, Mike. Now, how much of a light are you shining on this issue within your own organization?

Mike Johnson

We are certainly keeping an eye on our dependencies. It is really a difficult thing to track because there’s always so much development going on. But it comes down to having an idea of what your dependencies are, knowing when they change, knowing when perhaps they are pulling from a public repository. We try very hard not to be pulling from public repositories. We try to import into an internal repository and pull from that one. That is really our way of trying to get our hands around this, by having that internal repository that we pull from. So we are not pulling from the internet.

David Spark

How significantly do you think that’s cut down your issues?

Mike Johnson

Dramatically.

David Spark

That one move specifically?

Mike Johnson

Yes, but, that said, it’s not easy. Maintaining an internal repository is not an easy thing to do but, by having an internal repository, that is what is getting updated and maintained. It takes several steps for this kind of vulnerability to show up in your environment. The typosquatted package doesn’t even exist in our repository because we don’t use it anywhere. The version attack that Brian was talking about, if we don’t have that newer version in our local repository, then we’re not having that particular issue. So that is our way of doing it, having an internal mirror of all the packages we use. And, by the way, some of the particular ecosystems that have these issues more broadly, we don’t use those particular languages. We’re not a big Node.js shop, so NPM isn’t really that big of a deal for us, for instance.
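
A rough sketch of the condition Mike’s setup defends against: a public index advertising a newer version of a name than the curated copy in the internal mirror. The internal index URL is a placeholder, and the sketch assumes the mirror exposes a PyPI-style JSON API, which a given proxy may or may not do; the packaging library is used for version comparison.

```python
# A rough sketch of the condition an internal mirror defends against: a public
# index advertising a newer version of a name than the curated internal copy.
# INTERNAL_INDEX is a placeholder; the sketch assumes the mirror exposes a
# PyPI-style JSON API, which a given proxy may or may not do.
import requests
from packaging.version import Version

INTERNAL_INDEX = "https://repo.internal.example/pypi"  # hypothetical mirror URL
PUBLIC_INDEX = "https://pypi.org/pypi"

def latest_version(index_url: str, name: str) -> Version | None:
    """Return the newest version an index advertises for a package, if any."""
    resp = requests.get(f"{index_url}/{name}/json", timeout=10)
    if resp.status_code != 200:
        return None
    return Version(resp.json()["info"]["version"])

def shadowed(name: str) -> bool:
    """True when the public index offers a newer version than the internal mirror."""
    internal = latest_version(INTERNAL_INDEX, name)
    public = latest_version(PUBLIC_INDEX, name)
    return internal is not None and public is not None and public > internal

if __name__ == "__main__":
    for pkg in ["acme-billing-core", "requests"]:  # example names only
        print(pkg, "-> shadowed by public index" if shadowed(pkg) else "-> ok")
```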

David Spark

Sometimes you can solve your problems by just staying off the code base, if that’s the issue. That was excellent. Brian, what I loved is how you set up pretty much the theme of this entire episode: your software, your supply chain, isn’t necessarily the problem; it’s your developers’ usage, their behavior and how they work that is being attacked, because, if you get at the root of the issue, correct me if I’m wrong here, the damage explodes exponentially, which we’ve seen in the most recent cases.

Brian Fox

Yes, and certainly I’m not here to say that it’s necessarily intentional, or negligence on the part of development. I think it is, at some level, negligence on the part of the organizations for not empowering the developers with the information they need to make those proper choices. You cannot tell developers, “Go faster, do things cheaper and use existing components in open source because we don’t want to pay for them. Go get the free thing.” You cannot do those things without also compensating for the unintended consequences of it. That is the key.

David Spark

One of the things that came up, and I’m writing a much larger article on this topic, is that the level of pressure being put on developers is astounding. The amount they have to learn, the more packages they have to take on, the more languages they have to learn and, “Oh, by the way, make sure this is all secure.” They are getting demands and requirements thrown at them at an alarming speed. It is pretty intense for them to be handling all this, yes?

Brian Fox

Yes, and the biggest challenge that I see is that a lot of the processes that are intended to protect organizations, from architecture, legal, and security, are very legacy, from a world where you weren’t fetching new components all the time, a world that hadn’t had a thousand-x explosion in transitive dependencies. Those processes cannot scale. That is a fundamental part of it, and then the more recent part is when everybody is just focusing on, “Well, we’ll make sure we do the scans before we ship it.” Well, what happened to your developer that downloaded an NPM thing that put a back door in his system last week? You didn’t scan that; that is not showing up in the application. None of Alex Birsan’s stuff showed up in the applications because they weren’t even real modules. His goal wasn’t to try and sneak it into the code, they wouldn’t have passed the unit tests, but some of the copycats we saw following within 72 hours of the research being published were trying to install back doors on the development machines. So the developer might say, “Oh, my build broke. Oh, something’s weird. Let me fix it,” never even giving a thought to the fact that they just got hacked and now somebody’s using their credentials, especially in a world of cloud native development. The development machines might actually have the keys to the production kingdom, right? So this is not just a developer machine in a corner somewhere in an office in 2021.

Close

00:32:22:00

David Spark

Excellent point. Let’s wrap up this show. Thank you so, so much Brian. That was truly excellent. I want to thank your company, Sonatype, for making this episode possible and also being a phenomenal sponsor in general. Sonatype has sponsored many, many programs on CISO Series so thank you very, very much. I’m going to let you make a final plug, and also let us know if you’re hiring. We always ask that. Mike, any final thoughts?

Mike Johnson

Brian, thank you for joining us. I really enjoyed having that conversation around developers and the challenges that developers are facing, and how we as security teams can empower them to meet those expectations that are being placed upon them. I really liked that overall theme of your “developers, developers, developers” joke, but it really is important, so thank you for talking about that. I really want to specifically thank you for shining the light on developers being attacked directly. I think that is one that is still a little bit under the radar, and people are so used to the idea of you’re going to get Windows malware and that’s how people are going to get compromised, but the reality is developers themselves and their environments are being targeted actively, and that is a threat that we need to be paying more attention to. So, specifically, thank you for shining a light on that but, in general, thank you for coming on and talking about the developer experience and how we can help developers act more securely.

David Spark

Thank you. Brian, if you would like to make any closing comments or make an offer to our audience, anything at all, let’s hear it.

Brian Fox

Thanks for having me. Yes, we are hiring. We are hiring product managers, sales people, really across the board. We’ve been growing a lot, even in a Covid world. This is a topic I speak a lot about. I have a lot of presentations and recordings that are out there. If you Google my name and “open source developers, the new front line,” as an example, you will find lots of shocking statistics and more anecdotes to help educate other people in your organizations. This is a very real problem. Not enough people are facing it and it has real consequences. Hospitals are getting hacked, people are dying, cars are crashing. This is not just pretend, this is the real world, and I want everybody to have the message as much as possible. You can find a lot of this on our website, blogs.sonatype.com. Again, thanks for having me.

David Spark

Well, you’re very welcome, and thank you, Brian, and thank you, Michael, and thank you, audience, for tuning in and listening to our show. As always, we appreciate your contributions and we appreciate you listening, too, to the CISO Security Vendor Relationship Podcast.

Voiceover

That wraps up another episode. If you haven’t subscribed to the podcast, please do. If you’re already a subscriber, write a review. This show thrives on your input. Head over to cisoseries.com, and you’ll see plenty of ways to participate, including recording a question or comment for the show. If you’re interested in sponsoring the podcast, contact David Spark directly at david@cisoseries.com. Thank you for listening to the “CISO/Security Vendor Relationship Podcast.” 

David Spark
David Spark is the founder of CISO Series where he produces and co-hosts many of the shows. Spark is a veteran tech journalist having appeared in dozens of media outlets for almost three decades.