The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Daniel Cuthbert, Global Head of Security Research at Banco Santander. Daniel discusses how to use application security testing and testing standards to improve security.
Natalia: What is an application security test and what does it entail?
Daniel: Let’s say I have a traditional legacy banking application. Consumers can sign in using their web browser to gain access to financial details or funds, move funds around, and receive funds. Commonly, when you’re having an application assessment done for that type of application, you’re looking at the authentication and authorization process, how the application architecture operates, how it handles data, and how the user interacts with it. As applications have grown from a single application that interacts with a back-end database to microservices, all the ways that data is moved around and processed, and the processes themselves, become more important.
Generally, an application test makes sure that at no stage can somebody gain unauthorized access to data or somebody else’s money. And we want to make sure that an authorized customer can’t impersonate another customer, gain access to somebody else’s funds, or cause a system in the architecture to do something that the developers or technologists never expected to happen.
Daniel: ASVS stands for Application Security Verification Standard. The idea was to standardize how people conduct and receive application security tests. Before it, there was no methodology. There was a lot of ambiguity in the industry. You’d say, “I need an app test done,” and you’d hope that the company you chose had a methodology in place and the people doing the assessment were capable of following a methodology.
In reality, that wasn’t the case. It differed across various penetration test houses. Those buying consultancy for penetration tests and application tests didn’t have a structured idea of what should be tested or what constituted a secure, robust application. That’s where the ASVS comes in. Now you can say, “I need an application test done. I want a Level 2 assessment of this application.” The person receiving the test knows exactly what they’re getting, and the person doing the test knows exactly what the client is expecting. It gets everybody on the same page, and that’s what we were missing before.
Natalia: How should organizations prioritize and navigate the ASVS levels and controls?
Daniel: When they first look at the ASVS, many people get intimidated and overwhelmed. First, stay calm. The three levels are there as a guideline. Level 1 should be the absolute bare minimum. That’s the entry point if you’re putting an application on the Internet, and we tried to design Level 1 to be capable of being automated. As far as tools to automate Level 1, OWASP Zed Attack Proxy (ZAP) is getting there. In 2021, an application should be at Level 2, especially if we take privacy into consideration. Level 3 is unique. Most people never need Level 3, which was designed for applications that are critical and have a strong need for security: where if it goes down, there’s a loss of human life or massive impact. Level 3 is expensive and time-consuming, but you expect that if it’s, say, a power plant. You don’t want it to be quickly thrown together in a couple of hours.
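To make the "Level 1 can be automated" point concrete, here is a toy sketch (not an official ASVS or ZAP tool) of the kind of check that automation covers well: verifying that an HTTP response carries common security headers. The header names are real standards; the function and the required set are illustrative assumptions.

```python
# Toy example of an automatable Level 1-style check: flag missing
# security headers on an HTTP response. The REQUIRED_HEADERS set is an
# illustrative subset, not a complete ASVS requirement list.

REQUIRED_HEADERS = {
    "Strict-Transport-Security",  # forces HTTPS on future visits
    "X-Content-Type-Options",     # blocks MIME-type sniffing
    "Content-Security-Policy",    # restricts script/style sources
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the required security headers absent from a response."""
    present = {name.title() for name in response_headers}  # normalize case
    return {h for h in REQUIRED_HEADERS if h not in present}

if __name__ == "__main__":
    headers = {"Content-Type": "text/html",
               "X-Content-Type-Options": "nosniff"}
    print(sorted(missing_security_headers(headers)))
    # -> ['Content-Security-Policy', 'Strict-Transport-Security']
```

A real pipeline would run checks like this against every deployed endpoint on every release, rather than waiting for a manual assessment.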
With all the levels, you don’t have to go through every single control; this is where threat modeling comes in. If your application makes use of a back-end database, and you have microservices, you take the portions that you need from Level 2 and build your testing program. Many people think that you have to test every single control, but you don’t. You should customize it as much as you need.
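The "take the portions you need" idea can be sketched as a simple filter: controls carry a level and a topic, and the threat model decides which topics matter for this architecture. The control IDs and topics below are invented for illustration and are not taken from the real ASVS catalogue.

```python
# Hypothetical sketch of tailoring an ASVS-style test plan from a
# threat model. IDs and topics are made up for illustration.

CONTROLS = [
    {"id": "V2.1",  "level": 1, "topic": "authentication"},
    {"id": "V4.2",  "level": 2, "topic": "access-control"},
    {"id": "V8.3",  "level": 2, "topic": "data-protection"},
    {"id": "V13.1", "level": 2, "topic": "api-microservices"},
    {"id": "V14.5", "level": 3, "topic": "build-hardening"},
]

def build_test_plan(target_level, relevant_topics):
    """Keep controls at or below the target level whose topic the
    threat model flagged as relevant to this application."""
    return [c["id"] for c in CONTROLS
            if c["level"] <= target_level and c["topic"] in relevant_topics]

# A microservices app with a back-end database, assessed at Level 2:
plan = build_test_plan(2, {"authentication", "data-protection",
                           "api-microservices"})
print(plan)  # -> ['V2.1', 'V8.3', 'V13.1']
```

The point is that the threat model, not the full catalogue, drives what gets tested; here Level 3 controls and irrelevant topics are dropped automatically.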
Natalia: What’s the right rhythm for conducting application security tests?
Daniel: The way we build applications has changed drastically. Ten years ago, a lot of people were doing the waterfall approach using functional specifications like, “I want to build a widget to sell shoes.” Great. Somebody gives them money and time. Developers go develop, and toward the end, they start going through user acceptance testing (UAT) and get somebody to do a penetration test. Worst mistake ever. In my own experience, we’d go live on Monday, and the penetration test would happen the week before.
What we’ve seen with the adoption of agile is the shifting left of the software development lifecycle (SDLC). We’re starting to see people thinking about security not merely as an add-on at the end but as part of the function. We expect the app to be secure, usable, and robust. We’re adopting security standards. We’re adopting guardrails for our continuous integration and continuous delivery pipeline. That means developers write a function, check the code into Git, or whatever repository, and the code is checked to make sure it’s robust, formatted correctly, and secure. In the industry, we’re moving away from relying on that final application test to constantly looking throughout the entire lifecycle for bugs, misconfigurations, or incorrectly used encryption or encoding.
Natalia: What common mistakes do companies make that impact the results of an application security evaluation?
Daniel: The first one is organizations not embracing the lovely world of threat modeling. A threat model tries to save you time and give you direction. When people bypass the fundamental stage of threat modeling, they’re burning cycles. If you adopt the threat model and say, “This is every single way some bad person is going to break our favorite widget tool,” you can build upon that.
The second mistake is not understanding what all the components do. We no longer build applications that are a single web server, Internet Information Services (IIS) or NGINX, that talks to a database. It’s rare to see that today. Today’s applications are complex. Because multiple teams have responsibility for individual parts of that process, they don’t all work together to understand simple things like the data flow. Where’s the data held? How does this application process that data? Often, everyone assumes the other team is doing it. This is a problem. Either the scrum master or product owner should have full visibility of the application, especially if it’s a large project. But it varies depending on the organization. We’re not in a mature enough stage yet for it to be a defined role.
Also, the gap between security and development is still too wide. Security didn’t make many friends. We were constantly belittling developers. I was part of that, and we were wrong. At the moment, we’re trying to bridge the two teams. We want developers to see that security is trying to help them.
We should be building a way for developers to be as creative and cool as we expect them to be while defining guardrails to stop common mistakes from appearing in the code pipeline. It’s very hard to write secure code, but we can embrace modern continuous integration and continuous delivery (CI/CD). Check your code in; then do a series of tests. Make sure that at that point and at that commit, the code is as robust, secure, and proper as it should be.
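A guardrail of the kind described here can be as simple as a check that runs on each commit and fails the pipeline when it spots an obvious mistake. This is a minimal sketch, assuming a hardcoded-credential check; real pipelines would use dedicated scanners, and the regular expression here is deliberately simplified for illustration.

```python
# Minimal sketch of a commit-time guardrail: flag lines that look like
# hardcoded credentials. A CI job would run this over the changed files
# and fail the build if any findings come back.
import re

SECRET_PATTERN = re.compile(
    r"""(password|api[_-]?key|secret)\s*=\s*["'][^"']+["']""",
    re.IGNORECASE,
)

def find_hardcoded_secrets(source: str) -> list:
    """Return (line number, line text) pairs that look like secrets."""
    return [(n, line) for n, line in enumerate(source.splitlines(), 1)
            if SECRET_PATTERN.search(line)]

if __name__ == "__main__":
    sample = 'db_user = "app"\npassword = "hunter2"\n'
    for n, line in find_hardcoded_secrets(sample):
        print(f"line {n}: {line}")
    # In CI, a non-empty result would fail the build at this commit.
```

Because the check runs at every commit, the mistake never reaches the final application test, which is exactly the shift-left idea described above.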
Daniel: I don’t expect developers to understand all the latest vulnerabilities. That’s the role of the security or security engineering team. As teams mature, the security engineering or security team acts as the go-to bridge; they are aware of the vulnerabilities and how they’re exploited, and they translate that into how people are building code for their organization. They’re also looking at the various tools or processes that could be leveraged to stop those vulnerabilities from becoming an issue.
One of the really cool things that I’m starting to see with GitHub is GitHub insights. Let’s say there’s a large organization that has thousands of repositories. You’d probably see a common pattern of vulnerabilities if you looked at all those repositories. We’re getting to the stage where we’re going to have a “Minority Report” style function for security.
On a monthly basis, I can say, “Show me the teams that are checking in bugs, let’s say deserialization bugs.” I want to understand a problem before it becomes a major one and work with those teams to say, “Of the last 10 commits, 4 of them have been flagged as being vulnerable to deserialization bugs. Let’s sit down and understand how you’re building, what you’re building toward, and what frameworks you’re trying to adopt. Can we make better tools for you to protect against the vulnerability? Do you need to understand the vulnerability itself?” The tools, pipelines, and education are all there. We can start building that bridge.
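The "spot risky patterns across repositories" idea can be sketched in a few lines: walk a file's syntax tree and flag calls that commonly lead to unsafe deserialization. This is a hedged illustration, not how GitHub's own insights work; a real program would run it across every repository and aggregate findings per team.

```python
# Sketch: flag Python calls commonly associated with unsafe
# deserialization (pickle.loads/load, bare yaml.load). Parsing only,
# nothing is executed. The RISKY_CALLS set is an illustrative subset.
import ast

RISKY_CALLS = {("pickle", "loads"), ("pickle", "load"), ("yaml", "load")}

def flag_unsafe_deserialization(source: str) -> list:
    """Return line numbers of calls matching known-risky patterns."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and (node.func.value.id, node.func.attr) in RISKY_CALLS):
            findings.append(node.lineno)
    return sorted(findings)

sample = (
    "import pickle, yaml\n"
    "obj = pickle.loads(blob)\n"
    "cfg = yaml.safe_load(text)\n"   # safe_load is fine, not flagged
    "raw = yaml.load(text)\n"
)
print(flag_unsafe_deserialization(sample))  # -> [2, 4]
```

Aggregating results like these over a month of commits is exactly the kind of early-warning signal described above: you see which teams are repeatedly checking in the same class of bug before it becomes an incident.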
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Likewise, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
The post Practical tips on how to use application security testing and testing standards appeared first on Microsoft Security Blog.