This scenario may be familiar: Your organization has hundreds or even thousands of applications, but few have received an adequate security assessment, despite a mandate to protect the enterprise against application threats.
Many security managers are thrust into the uncomfortable position of dealing with a huge portfolio of potentially insecure applications, limited resources and an overwhelming sense of urgency. Security managers should ensure applications undergo security assessments, as applications have quickly become a favorite vector of malicious attackers seeking to disrupt day-to-day business activities or infiltrate corporate defenses to steal sensitive data.
In this tip, we'll add some clarity to the enterprise application security assessment process by outlining the techniques used to review applications and comparing and contrasting strategic paradigms for application assessments.
Technical application security assessment options
Those who are new to application security may be overwhelmed by the sheer number of assessment options, each with its own aficionados touting the merits of their favored assessment type:
- Runtime vulnerability assessment -- Runtime assessments come in three varieties: automated, manual and combined. Automated assessments are generally faster and broader than manual assessments, but often miss obscure vulnerabilities and cannot discover business logic flaws. Most mature application security shops lean toward a combined approach.
- Source-code review -- Source-code review allows assessors to find various vulnerabilities, but requires deep language and security expertise, and often takes longer than runtime assessments. Like runtime vulnerability assessments, source-code reviews can be automated, manual or combined: all with pros and cons analogous to their runtime assessment counterparts.
- Threat-modeling techniques -- These assess pertinent, theoretical application threats from a design perspective. Often threat modeling precedes source-code review and/or runtime vulnerability assessments.
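The threat-modeling step described above can be sketched as a simple checklist generator: pair each component of an application's design with the six STRIDE categories (a standard threat taxonomy) and have reviewers accept or rule out each pair. The component names below are hypothetical, not from any particular application.

```python
# A minimal threat-modeling sketch: enumerate the standard STRIDE
# threat categories against each component of a hypothetical design.
# Component names are illustrative assumptions.

STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

def enumerate_threats(components):
    """Pair every design component with every STRIDE category,
    producing a checklist for reviewers to accept or rule out."""
    return [(component, threat) for component in components for threat in STRIDE]

checklist = enumerate_threats(["login form", "session store", "admin API"])
print(len(checklist))  # 3 components x 6 categories = 18 checklist items
```

In practice each surviving pair becomes a concrete threat to carry forward into source-code review or runtime testing.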
Selecting the right mix of assessment types can be difficult. Many companies face this problem, and there are a number of approaches to finding the right application assessment process:
- The Big Bang Approach
- Purpose of the application: What is the application used for? How many people use it? A telephone directory application doesn't have the same risk profile as an accounting application.
- Data risk: Are confidentiality or integrity requirements tied to the application? Does the application or its servers need 99.999% availability? Is the application affected by any compliance drivers, such as PCI DSS, HIPAA, etc.?
- Architecture and design: Is the application a Web application, Web service, client/server, mainframe, mid-tier, desktop or something else? Is it Internet or intranet facing? What programming language and framework was it developed in? Does the application use any known high-risk components such as Ajax or PHP? Approximately how large is the application (in lines of source code)?
- Existing security features: What security features are already known to exist in the application? For example, how does the application perform authentication, authorization, input validation, etc.?
With this method, it's important to build guidelines that assign numeric risk values to each of these factors. For example: "Add 25 points for Internet-facing applications," "Subtract 5 points for applications that don't share data or interfaces with any other applications," and so on. The end result should be a number that allows you to rank applications against one another.

Remember that profiling applications is often time-consuming and hard to perfect, so rather than forcing yourself to get all data for all applications, set a limit on how much time to spend gathering information on each app. Your scoring methodology should be tolerant of imperfect information and should still be able to rank applications against each other even if you have a deeper understanding of one versus another. Finally, don't be too rigid about the scoring system: if a security expert sees an application as particularly high risk, but the scoring system does not back up that intuition, side with the expert.
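The guideline-based scoring described above might look like the following sketch. Only the "Add 25 points" and "Subtract 5 points" rules come from the text; the other factor names and point values are illustrative assumptions. Unknown factors simply contribute nothing, which keeps the scoring tolerant of imperfect information.

```python
# A sketch of the numeric risk-scoring guideline described above.
# Most point values and factor names are illustrative assumptions.

def risk_score(profile):
    """Score one application profile (a dict of known facts).
    Missing keys add no points, so partial profiles still score."""
    score = 0
    if profile.get("internet_facing"):
        score += 25                      # "Add 25 points for Internet-facing"
    if profile.get("handles_regulated_data"):
        score += 30                      # assumed value for PCI DSS / HIPAA drivers
    if profile.get("shares_data_with_other_apps") is False:
        score -= 5                       # "Subtract 5 points" example from the text
    score += min(profile.get("users", 0) // 1000, 20)  # cap user-count influence
    return score

apps = {
    "phone_directory": {"internet_facing": False, "users": 500},
    "accounting": {"internet_facing": True, "handles_regulated_data": True, "users": 50},
}
ranked = sorted(apps, key=lambda name: risk_score(apps[name]), reverse=True)
print(ranked)  # the accounting app outranks the phone directory
```

The exact weights matter less than consistency: as long as every application is scored by the same rules, the resulting ordering is meaningful.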
Applications in the high-risk bucket should undergo threat modeling, followed by manual and automated runtime vulnerability testing and source-code review. Moderate-risk applications should be subject to automated runtime vulnerability testing and source-code review with manual verification. Low-risk applications may simply need to undergo runtime vulnerability testing and, time permitting, manual verification. If the results of testing an application from the lower buckets are particularly negative, then the application should undergo more comprehensive testing.
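The bucket-to-assessment mapping above can be expressed as a simple lookup table. The numeric thresholds separating the buckets are assumptions for illustration; the assessment mix per bucket follows the text.

```python
# A sketch mapping risk buckets to the assessment mix described
# above. The score thresholds are illustrative assumptions.

ASSESSMENT_PLAN = {
    "high": ["threat modeling",
             "manual and automated runtime vulnerability testing",
             "source-code review"],
    "moderate": ["automated runtime vulnerability testing",
                 "source-code review with manual verification"],
    "low": ["runtime vulnerability testing",
            "manual verification (time permitting)"],
}

def bucket(score, high=60, moderate=30):
    """Place a risk score into a bucket; thresholds are assumed."""
    if score >= high:
        return "high"
    if score >= moderate:
        return "moderate"
    return "low"

def plan_for(score):
    return ASSESSMENT_PLAN[bucket(score)]

print(plan_for(75))  # the full high-risk assessment mix
```

An application promoted out of a lower bucket by bad test results would simply be re-run through the plan for the higher bucket.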
So what's the best approach? Aligning the assessment with business risk allows for meaningful prioritization of time and money. A hybrid of approaches is ideal: Immediately identify and comprehensively test a small set of the highest risk applications (e.g. your company.com website). In parallel, start the application triaging process to determine what gets tested next. If the resources are available, begin the unauthenticated health check assessments while you're triaging. This process allows you to benefit from the broad analysis of profiling along with the objective results of a quick scan. From there, proceed as in a normal Risk Triaging Approach: start with the highest risk apps and work toward the lowest.
Assessments, of course, are only one part of the entire application security equation. The next important step involves remediation. Luckily, the triaging approach lays the groundwork for prioritizing remediation: Start with the highest risk vulnerabilities in the highest risk apps, and move down from there. A good application security team will also be able to identify the root causes of systemic findings and suggest remediation steps in the software development lifecycle to make its applications more secure from the ground up.
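That remediation ordering (highest risk vulnerabilities in the highest risk apps first) reduces to a two-level sort. The sample findings and the 1-10 severity scale below are invented for illustration.

```python
# A sketch of the remediation ordering described above: sort by
# application risk first, then vulnerability severity within an app.
# Sample findings and the severity scale are illustrative assumptions.

findings = [
    {"app": "intranet wiki", "app_risk": 20, "vuln": "verbose errors",  "severity": 3},
    {"app": "company.com",   "app_risk": 90, "vuln": "SQL injection",   "severity": 10},
    {"app": "company.com",   "app_risk": 90, "vuln": "missing headers", "severity": 2},
    {"app": "accounting",    "app_risk": 70, "vuln": "weak auth",       "severity": 8},
]

# Descending sort on the (app_risk, severity) tuple: the riskiest
# app's worst vulnerabilities surface first in the queue.
remediation_queue = sorted(
    findings,
    key=lambda f: (f["app_risk"], f["severity"]),
    reverse=True,
)
print([f["vuln"] for f in remediation_queue])
# -> ['SQL injection', 'missing headers', 'weak auth', 'verbose errors']
```

Note that even a low-severity finding in a high-risk app sorts ahead of a moderate finding in a low-risk app; teams that prefer otherwise could weight severity more heavily in the sort key.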
Regardless of the application security assessment approach you choose, remember that any of these approaches is better than turning a blind eye to the many risks posed by insecure enterprise applications.
About the authors:
Security Compass is an information security consulting and training company that specializes in secure software development. With its in-depth knowledge of information security and software engineering along with unmatched commitment to professionalism and training quality, Security Compass is repeatedly engaged by several of the world's most security conscious organizations to help build software trust from the ground up.
Nish Bhalla, the Founder of Security Compass, is a specialist in Application and Network Assessments.
Mr. Bhalla has coauthored and contributed to many books, including "Buffer Overflow Attacks: Detect, Exploit & Prevent", "HackNotes: Network Security", "Writing Security Tools and Exploits" and "Hacking Exposed: Web Applications, 2nd Edition".
Nish is a frequent speaker on emerging security issues and has spoken at many reputable security conferences, including RSA, Blackhat, RECon, HackInTheBox, ShmooCon and many others.
Rohit Sethi, Director of Professional Services, Security Compass, is a specialist in threat modeling, application security reviews, and building security controls into the software development life cycle (SDLC). Mr. Sethi is a frequent guest speaker and instructor at several national conferences. He has written articles for Security Focus and the Web Application Security Consortium (WASC), and has been quoted as an expert in application security for ITWorldCanada and Computer World.
At Security Compass, Rohit teaches hundreds of students various topics on Web application security in cities across North America. He has also managed and performed extensive threat analysis, source code reviews, and penetration testing for clients in financial services, utilities, telecommunications and healthcare. He is often consulted for his dual expertise in information security and software engineering.
This was first published in March 2009