Are we moving too fast?

In software, developers often don’t have a choice. Speed is a business imperative for survival and competitiveness.

“Software development is a grinding environment. Forces always seem to be pulling in opposite directions, between management, client, and developer ideologies,” said Todd Cronin, CEO, Ryu Team.

“We have developed a culture of ‘agility’ without always retaining the appropriate balance with quality and security,” said Patrick Benoit (@patrickbenoit), vp, global cyber GRC/BISO, CBRE. “We need to look back to basics again and ensure fundamental steps in development, even if accelerated.”

That pretty much sums up the problem with security failures in software development. There are so many competing forces critical for business success that the need to secure the software development lifecycle simply doesn’t get the attention it requires.

I reached out to dozens of security professionals asking them about common failures in software development and how they would go about fixing them.


Thanks to our episode sponsor, Sonatype

With security concerns around software supply chains ushered to center stage in recent months, organizations around the world are turning to Sonatype as trusted advisors. The company’s Nexus platform offers the only full-spectrum control of the cloud-native software development lifecycle including third-party open source code, first-party source code, infrastructure as code, and containerized code.

Editor’s note: While the article sponsor, Sonatype, and our editors agreed on the topic of “software hygiene for software development,” all production and editorial is fully controlled by CISO Series’ editorial staff.

1: And now, let’s add security

“Teams often fail by trying to tack security on late, which increases scope and pressure on the team while possibly decreasing quality and security. That frustrates everyone,” said Jake Payton (@JakePayton), director of engineering, Blumira. “Just realize you have to plan for things like security, quality, and documentation from the beginning.”

2: It’s old. It’s insecure. Let’s keep patching it.

Unless the manufacturer announces an end to support, most programs don’t come pre-printed with an expiration date. At some point, you have to pull the plug on legacy technology.

“I’ve seen a vp of engineering spend a year or more trying to prove that someone somewhere still might be using TLS 1.0 so they don’t have to worry about enabling TLS 1.2 support, let alone disabling the old protocol,” said Davi Ottenheimer (@daviottenheimer), vp of trust and digital ethics, Inrupt.

Or they support both protocols and now you’ve got dual mode support, which creates even more complications.

It’s simply not worth the headache, said Ottenheimer, who suggests collecting data to see the real usage of old protocols and/or baking end-of-life requirements into the start of any project.

In line with that advice, Anatoly Chikanov, director of information security, Enel X, also recommended “periodically scheduling a security review of the composition of your application and updating it to use the latest components.”

3: Development and production are a little too friendly

“Many SDLC (software development lifecycle) shops aren’t properly isolating. In other words, I can traverse from that dev network into the prod network because it’s a flat network,” said Randall Frietzche (@rfrietzche), CISO, Denver Health.

Isolate the development environments across network and Active Directory. Create access control lists (ACLs) to limit traffic to and from the dev environments, said Frietzche.
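One way to check that those ACLs actually hold is to probe production from a dev host and treat any successful connection as a failure. The sketch below is a minimal, hypothetical example: the addresses and ports are placeholders, not anything from Frietzche or Denver Health.

```python
# Hypothetical isolation check: run from a dev host, it confirms that
# production addresses and ports are unreachable. Targets are placeholders.
import socket

PROD_TARGETS = [
    ("10.50.0.10", 443),   # hypothetical prod web tier
    ("10.50.0.20", 1433),  # hypothetical prod database
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection from this (dev) host succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    leaks = [(h, p) for h, p in PROD_TARGETS if is_reachable(h, p)]
    for host, port in leaks:
        print(f"ISOLATION FAILURE: dev can reach prod {host}:{port}")
    raise SystemExit(1 if leaks else 0)
```

Run on a schedule, a check like this turns “the networks are separate” from an assumption into something you can show an auditor.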

4: We tested it once. Isn’t that enough?

“Often you see organizations say, ‘Before we ship, the application must go through a static scan’ or some other kind of criteria. They’re defending against shipping it to their users, when today’s battlefield is going on inside the development environment,” said Brian Fox (@brian_fox), CTO, Sonatype. “What you should be worried about is someone trying to sabotage the factory itself.”

SAST (static application security testing) or ‘shifting left’ is not enough. You need to scan and test smaller code bases continuously throughout the process to catch issues as soon as they’re introduced.

“Test your latest code base for vulnerabilities as often as you do functionality,” said Ryu Team’s Cronin.

Imagine if every piece of functionality had an automated test to show it works successfully. Even automated browser tests with tools like Selenium Grid or Helium aren’t as common as they should be.
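Security checks can run in the same suite as those functional tests. Here is a minimal sketch, assuming pytest and the requests library; the URL and the header list are illustrative, not something prescribed by the people quoted here.

```python
# Minimal sketch: treat a security check like any other automated test.
# The staging URL and required headers are hypothetical examples.
import requests

APP_URL = "https://staging.example.com"  # hypothetical staging endpoint

REQUIRED_HEADERS = {
    "strict-transport-security",  # forces HTTPS on returning visits
    "x-content-type-options",     # blocks MIME-type sniffing
    "content-security-policy",    # restricts where active content can load from
}

def test_security_headers_present():
    """Runs in the same suite as the functional tests, on every commit."""
    response = requests.get(APP_URL, timeout=10)
    present = {header.lower() for header in response.headers}
    missing = REQUIRED_HEADERS - present
    assert not missing, f"Missing security headers: {sorted(missing)}"
```

Because it lives next to the functional tests, the security check gets run exactly as often as the feature tests do, which is the point Cronin is making.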

“As we move to cloud native technologies and architectures, your microservices have smaller code bases using diverse technology stacks which embrace a larger open source base,” said Steve Giguere (@_SteveGiguere_), cloud native security advocate, Bridgecrew.

“Scanning needs to happen as the code is developed, and then again once it is live/dynamic,” said Denver Health’s Frietzche, who also recommends fixing issues in dev, not production. That does seem obvious, but known mistakes often do make their way to production.

5: Developers just don’t care about security

“They do care about security and many actually find it interesting,” said Bridgecrew’s Giguere. “However, they care more about meeting their feature deadlines and KPIs.”

Security needs to be built into the DevOps success culture so it carries the same weight as feature releases.

“Start by creating out of band security automation checkpoints throughout the software lifecycle (SAST, SCA, runtime anomaly detection) which do not impact software development progress,” said Giguere.

The results of the scans will serve to both educate the developers on security issues and help them actually measure the quality of their code.

6: Coders not trained in security

Should coders be trained in security or should the security staff be trained in coding?

“It’s easier to teach security to coders than it is to teach coding to security folks,” said Johna Till Johnson (@JohnaTillJohnso), CEO, Nemertes Research.

To many, that makes the most sense. But how much pressure are we putting on developers?

“Software developers are already expected to know a lot and the industry trend seems to be expecting them to take over more and more,” said Rick Woodward, cyber security architect with Gibbs & Cox. “Adding more expectations on them to take over security planning and implementation results in a minimum viable product when it comes to security.”

But if you can show the results of insecure coding, motivations can and will change.

“Nothing catches a developer’s attention better than watching their code get hacked, in real time,” said John Overbaugh (@johnoverbaugh), vp, security, CareCentrix. “[Afterwards,] walk them back through their code, fix it, and demonstrate its new resilience.”

“In my dream world, engineers will be forbidden from checking in code until they complete secure code training,” said Debra Farber (@privacyguru), CEO, Principled LLC, who said we can reduce endless bug bounty payouts by spending the money first on training developers on the OWASP Top 10 vulnerabilities.

7: Who are we making shift left?

It seems easy to just make the coders handle the security for their own code. But just announcing “we’re shifting left” doesn’t necessarily mean it’s going to be handled the way you expect.

“Managers must consider the limits of the expertise of personnel assigned to security roles early in the development process,” said Gibbs & Cox’s Woodward.

For example, just handing developers a list of requirements without a deep understanding of the security context is going to be treated like a compliance line item, which will result in a poor level of security for the code, said Woodward.

“Part of the necessity to ‘shift left’ in the SDLC needs to include a ‘shift left’ mentality beyond that of engineers and include contracts, legal, and others within an organization,” said Mathew Biby, CISO, Satcom Direct.

Simply call out actionable and contractual deliverables around architecture and security configurations before and after implementation, said Biby. While this move isn’t going to eliminate issues, look at it as yet another method you can use to get ahead of potential and mounting SDLC risk.

8: Buying technology to make up for poor communication

There is no technology that’s going to solve a broken process. Coders and security professionals simply need to communicate. The backbone of this will require a management framework that will purposefully drive culture change.

“Mindfully build the program such that developers have a positive experience engaging with security,” said Omar Khawaja (@smallersecurity), CISO, Highmark Health.

“Organizational lines need to be blurred and security teams need to be an integrated part of the overall pipeline,” said Mike D. Kail (@mdkail), CTO, WITHIN.

For the security team, integrating into the world of developers and educating them is critical to building a trusted relationship.

“Developers need to be able to trust that the person asking them to do things differently actually knows what they are doing,” said Craig Hurter (@ITDrummer), director security operations, Colorado Governor’s Office of Information Technology.

9: Are these the same security requirements as last week’s?

The configuration put in place today and the one put in place tomorrow may not come from the same person, and the person making them doesn’t necessarily have the right security know-how.

“Don’t expect developers to have your level of security expertise, or assume they will instinctively make the right security choices,” said Brendan O’Connor, CEO, AppOmni. “If there are no upfront requirements, security reviews and assessments can feel awfully arbitrary. It can give the impression that the security team is moving the finish line, which makes people not want to work with them.”

“Configuration drift is not just a part of the standard technical debt annoyance, it is often the source of attacks to both the delivery chain and the application it ships,” said Chris Riley (@HoardingInfo), sr. technology advocate, developer relations, Splunk.

O’Connor advises documenting patching requirements for operating systems, secure coding standards for a programming language, and security configurations for cloud infrastructure and SaaS applications. Once documented, automate the inspection, so you’ll know right away if anything veers out of the norm.
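A minimal sketch of that automated inspection, assuming the documented baseline and the live settings can both be exported as flat key/value JSON (the file names here are hypothetical):

```python
# Minimal drift check: compare live settings against a documented baseline
# and fail loudly on any mismatch. File names are illustrative.
import json

def load(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

def find_drift(baseline: dict, live: dict) -> dict:
    """Return every setting whose live value no longer matches the baseline."""
    return {
        key: {"expected": value, "actual": live.get(key, "<missing>")}
        for key, value in baseline.items()
        if live.get(key) != value
    }

if __name__ == "__main__":
    drift = find_drift(load("security_baseline.json"), load("live_config.json"))
    if drift:
        print(json.dumps(drift, indent=2))
        raise SystemExit(1)  # fail the scheduled job so drift is flagged immediately
```

Scheduled daily or run on every deploy, this turns the documented requirements into something that is checked, not just written down.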

10: I’ll save a lot of time reusing this code

“Over 10 percent of open source components used in applications today are known to be vulnerable,” said Sonatype’s Fox.

Malicious hackers are aware that developers are frequently dipping into these code repositories, so they’re tampering with code packages to get their hooks into various systems at the earliest stages of application development, noted Christopher Sundberg (@sundbug272), product cybersecurity engineer, Woodward.

“There are a lot of vulnerabilities which are inadvertently introduced into production applications,” said Stephen Porter, cloud solution strategist, VMware.

The developers you hired and trust are “using third party library code written by someone you don’t know and have no control over,” said Jatinder Singh (@jatinlibra), director product security and cloud security, Informatica.

“Development tends to look at these libraries through the lens of ‘Does this solve problem X?’” said Mark Nunnikhoven (@marknca), distinguished cloud strategist, Lacework. “Developers need to ask ‘Is this well built? Does this library connect to anything else? How does it handle my data?’”

“We need to do a better job of auditing and reviewing code contributions to see if something subversive was put in place,” added Mitch Parker (@mitchparkerciso), CISO, Indiana University Health.
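One way to put part of that auditing on autopilot is to check every pinned dependency against a public vulnerability database. The sketch below assumes the free OSV.dev query API and an illustrative package list; in practice you would parse your own lock file and wire the check into CI.

```python
# Hedged sketch: ask a public vulnerability database (OSV.dev) whether each
# pinned dependency has published advisories. The package pins are examples.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Return any OSV advisories recorded for this exact package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

if __name__ == "__main__":
    # Illustrative pinned dependencies; in practice, parse your lock file.
    for pkg, ver in [("requests", "2.19.1"), ("urllib3", "1.24.1")]:
        vulns = known_vulns(pkg, ver)
        if vulns:
            ids = ", ".join(v["id"] for v in vulns)
            print(f"{pkg}=={ver}: {ids}")
```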

11: Do we have enough open source software?

While many argued we should be very wary of open source software, Francis Dinha (@FrancisDinha), CEO, OpenVPN, itself an open source solution, says open source can actually be more secure.

Assuming the open source software is actively being reviewed by security experts, said Dinha, issues are isolated and resolved prior to the launch of the production version of the software.

“Software transparency is a powerful way to make sure your product is secure, tested, and of the highest caliber,” said Dinha.

12: We’ve developed a really slow process to identify and secure vulnerabilities

“The entire ‘industry standard’ for common vulnerabilities and exposures (CVE) detection process for software dependencies is broken,” said Adrian Ludwig, CISO, Atlassian, who noted a multi-step process of finding, updating, reporting, scanning, filing, testing, and production that can take weeks to implement.

During this process, a company’s software remains exposed to a known vulnerability. And delays can happen at any of these steps.

To eliminate more than half of these steps, Ludwig recommends auto-updating all dependencies and treating all libraries/dependencies/software packages that are not 100 percent up-to-date as a security issue. Just cut to the chase of what you want, because most of this tiresome work is being driven by the identification, filing, and reporting of each bug.
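A minimal sketch of that stance, assuming dependencies are pinned as name==version and that PyPI’s public JSON API is reachable (the pinned packages below are illustrative, not from Atlassian):

```python
# Sketch of "not up to date == security issue": flag every pinned dependency
# that lags the latest published release. Pins below are examples only.
import json
import urllib.request

def latest_version(package: str) -> str:
    """Fetch the newest published version of a package from PyPI."""
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]

def check_pins(pins: dict) -> list:
    """Return a finding for every pinned dependency that lags the latest release."""
    findings = []
    for package, pinned in pins.items():
        newest = latest_version(package)
        if pinned != newest:
            findings.append(f"{package}: pinned {pinned}, latest {newest}")
    return findings

if __name__ == "__main__":
    # Illustrative pins; in practice, parse requirements.txt or a lock file.
    stale = check_pins({"requests": "2.25.0", "flask": "2.0.1"})
    for finding in stale:
        print("SECURITY ISSUE:", finding)  # staleness itself is treated as a finding
    raise SystemExit(1 if stale else 0)
```

The point is the policy, not the tooling: staleness alone fails the build, so nobody waits for a CVE to be filed, triaged, and reported before updating.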

13: I’ll just store these secrets right here in the source code

“Good hygiene at dev stage means well understood and efficient handling of credentials throughout the SDLC, not just in production,” said Matt Thompson, head of production services and security officer, Bankable.

“We need to get out of the habit of putting private keys and credentials in with code, and store them separately. Source code control systems are not meant to store secrets securely,” said IU Health’s Parker.
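A minimal sketch of what that separation can look like in application code: secrets are injected through the environment at deploy time rather than committed alongside the source. The variable names are illustrative, and a vault or cloud secrets manager follows the same pattern.

```python
# Minimal sketch: read secrets from the environment at runtime so nothing
# sensitive ever lands in source control. Variable names are hypothetical.
import os

def require_secret(name: str) -> str:
    """Fail fast and loudly if a required secret was not injected at deploy time."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Secret {name} is not set in this environment")
    return value

def load_credentials() -> dict:
    """Collect everything sensitive in one place, none of it hardcoded."""
    return {
        "db_password": require_secret("APP_DB_PASSWORD"),
        "api_signing_key": require_secret("APP_API_SIGNING_KEY"),
    }

if __name__ == "__main__":
    creds = load_credentials()
    print("Loaded", len(creds), "secrets from the environment")  # never print values
```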

14: I can publish with no coding necessary

Steve Zalewski, CISO, Levi Strauss, notes a little-understood development cog, the ‘citizen developer,’ who works on the business side yet still has the power to publish.

“They are not IT developers, do not want to be IT developers and do not fall under the jurisdiction of IT,” said Zalewski. “But they upload content in many instances directly to production environments with no SDLC process.”

Security is not addressing the ‘citizen developer,’ and Zalewski says it needs to, because these uploads are causing lots of outages.

“We need a customized SDLC experience for these line-of-business content developers, to be able to provide some level of rigor in change management without the revolt that would ensue if we used traditional SDLC tools,” said Zalewski.

15: You can publish your code just after you do this, this, and this

“Security teams often believe that software teams should just listen to and do what security teams ask them to do,” said Chris Hymes, CISO, Riot Games.

“Security needs to avoid being the team that simply ‘creates work’ for everyone else,” said Joshua Scott (@joshuascott94), CISO, Postman.

“Instead, security teams need to build the tools and processes that make writing secure software easy,” advised Hymes.

“We need to do whatever we can to enable the organization to resolve issues and mitigate risk in as easy a way as possible,” added Scott.

16: We’re all running our businesses on the same broken software

One piece of insecure software can easily be fixed by one organization. But if that same software goes unsecured and is used by multiple organizations, the issue is magnified, becoming the problem of many organizations.

We have to stop the problem at the source so we don’t all suffer the same vulnerabilities or all repeat the same work of securing the same software.

“Large enterprises and federal agencies must wield their leverage by holding 3rd party software accountable for flaws,” suggested Scott Scheferman (@transhackerism), principal strategist, Eclypsium.

“Secure software is just a layer in a secure application,” said Ahsan Mir (@ahsanmir), CEO, Rapticore. “It is a series of connected activities that help create and maintain a secure application. We want a secure application, not just secure software.”

17: These developers have no security guardrails

All of the aforementioned advice really adds up to needing guardrails for software development. But you also need some means, such as a metric, to know if your guardrails are actually working.

Informatica’s Singh suggests measuring the number of vulnerabilities per million lines of code going into production. He’s reduced that metric by one-third with a number of stringent steps, mostly on limiting direct access and implementation of code from third party libraries.
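Here is a small sketch of that metric, vulnerabilities per million lines of code reaching production, tracked release over release. The numbers are purely illustrative; only the roughly one-third reduction echoes Singh’s figure.

```python
# Illustrative calculation of vulnerability density (vulns per million lines of
# code in production). All counts below are made up for demonstration.
def vulns_per_mloc(vulnerability_count: int, lines_of_code: int) -> float:
    """Normalize findings by code volume so releases of different sizes compare fairly."""
    return vulnerability_count / (lines_of_code / 1_000_000)

if __name__ == "__main__":
    before = vulns_per_mloc(vulnerability_count=120, lines_of_code=2_400_000)
    after = vulns_per_mloc(vulnerability_count=80, lines_of_code=2_400_000)
    print(f"Before: {before:.1f} vulns/MLOC, after: {after:.1f} vulns/MLOC")
    print(f"Reduction: {(before - after) / before:.0%}")  # roughly one-third
```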

Brandon Greenwood, CISO, vp, security and IT, Overstock.com, says resolving bugs often just makes for KPI eye candy. To be more effective with everyone’s time, don’t treat all bugs the same.

“Reducing the number of findings resolved may provide value, but engineering resources are finite and organizations should strive to prioritize remediating findings that reduce the most business risk,” said Greenwood.

18: Our audience is demanding the product, not a secure product

“Software is a market for lemons,” said Jeff Williams (@planetlevel), co-founder and CTO, Contrast Security.

Nobel Prize-winning economist George Akerlof said that used car buyers won’t pay full price because the car might be a lemon, thus preventing quality cars from being sold in the used car market. Williams believes the same is true with software.

“Buyers won’t pay more for secure software, because there’s no way to know if it’s full of vulnerabilities. Consequently, the market is flooded with insecure software. The way out is ‘security in sunshine’ or making everything relevant to security observable. ‘Observability’ is about exposing the internal workings of complex systems,” said Williams. “Do it internally at first and then start to expose it publicly.  You’ll establish a great security culture and differentiate yourself in the marketplace.”

19: We’d develop better software if we had more experienced software developers

“Not being able to hire the right team because there aren’t enough experts or they are too expensive is a very convenient excuse for sub-par security,” said Nir Rothenberg, CISO, Rapyd.

That’s not an excuse to pass on to your customers or to other companies you do business with.

“Instead, play the long game,” said Rothenberg. “Hire juniors and train them. Utilize consultants. Application security consultants can not only help you secure your software, but train the beginners.”

20: Let’s pay someone to tell us we have problems we already know we have

One of the reasons applications have such weak security is that they’re often not built to any specific security standard. Sadly, the ‘chosen’ security standard is often a bug bounty. Bug bounties are worth doing, but only after a security standard is in place, so the red teaming exercise doesn’t find the obvious stuff you should have caught in development.

“Utilize the OWASP Application Security Verification Standard (ASVS) to validate that your application adheres to acceptable standards,” said CareCentrix’s Overbaugh. “Don’t just pay a company to poke around looking for that $30,000 bug (the one bug that makes the pen test worthwhile). All you know after that is that your application is weak in a given area. Make sure your application is resilient by proving it, step by step.”

CONCLUSION: It’s really complex, accept it

“I think a failure in secure development is believing that there is any singular solution that can fundamentally deliver secure software. Software is too complex,” said Riot Games’ Hymes.

What I gather from this advice is that more of us need to work together, specifically on reused code. When third parties or code libraries become compromised, the problem for one becomes a problem for many. And when that happens repeatedly, it becomes an untenable problem for everyone.

There are solutions out there, but for them to work for everyone, we all need to be involved, expose software’s underpinnings, and share what we discover so we don’t pass along our problems to others.