I've read that Web application penetration tests are becoming less important. Is that because there is more of an emphasis on the secure software development lifecycle process? Do you agree, and are there, in fact, scenarios in which a pen test would have been appropriate a few years ago, but no longer is?


I certainly agree that there is, thankfully, more emphasis being placed on the secure software development lifecycle process for building Web applications, but in my mind, penetration testing remains an important part of that process.

The Open Web Application Security Project (OWASP) is an open community focused on improving the security of application software. It, too, is a strong advocate of Web application penetration tests as part of a secure software development lifecycle. The OWASP testing guide includes a "best practice" penetration testing framework, which security professionals can implement in their own organizations, and a "low-level" penetration testing guide that describes techniques for testing the most common Web application and Web service security issues.

It's not just software security experts who see Web application penetration tests as critical. Compliance with requirement 11.3 of the Payment Card Industry Data Security Standard (PCI DSS), for example, mandates that you perform external and internal penetration testing, including network- and application-layer penetration tests, at least once a year as well as after any significant infrastructure or application upgrade or modification. Similarly, independent penetration testing of government systems in the U.K. is now a core requirement when testing protections against external attack, thanks to last year's Data Handling Review, which investigated U.K. Departmental security practices.

So why are Web application penetration tests still so important? Well, the aim of the secure software development lifecycle is to reduce the number of security-related design and coding defects, and to reduce the severity of any defects that remain undetected. But defects may well be present even in the most scrutinized of applications. Web 2.0 applications, for example, are becoming so complex, with ever more permutations of user and service interaction, that even a combination of manual and automated scans and assessments may fail to uncover an exploitable flaw.

Until scanners can harness true artificial intelligence and put the anomalies into context or make normative judgments about them, they will struggle to find certain types of vulnerabilities. A vulnerability assessment simply identifies and reports vulnerabilities, whereas a penetration test attempts to exploit vulnerabilities to determine whether unauthorized access or other malicious activity is possible.
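The distinction between the two can be sketched in code. The example below is a hypothetical, self-contained illustration (the vulnerable lookup, the scanner check and the exploit payload are all invented for the sketch, not taken from any real tool): a vulnerability assessment merely reports suspicious behavior, while a penetration test attempts actual exploitation to prove unauthorized access is possible.

```python
import sqlite3

def run_query(user_input):
    """Deliberately vulnerable lookup: input is concatenated into SQL."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    sql = "SELECT name FROM users WHERE name = '%s'" % user_input
    return [row[0] for row in db.execute(sql)]

def vulnerability_assessment():
    """Scanner-style check: report a *potential* flaw, don't exploit it."""
    try:
        run_query("'")  # a lone quote breaks the SQL syntax
    except sqlite3.OperationalError:
        return "POTENTIAL SQL injection (quote caused a syntax error)"
    return "no finding"

def penetration_test():
    """Pen-test-style check: attempt actual exploitation."""
    # Classic tautology payload returns rows the attacker shouldn't see.
    rows = run_query("nobody' OR '1'='1")
    return "CONFIRMED: exploit returned %d row(s)" % len(rows)
```

The scanner stops at "this looks injectable"; the penetration test goes one step further and demonstrates that data can actually be extracted, which is exactly the judgment call automated tools struggle to make.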

Security holes can also be introduced when an application is deployed and interacts with other processes and the operating system itself. The interaction of multiple functions can generate unanticipated errors, which only become apparent during component-level integration, system integration or deployment. By performing a penetration test to simulate an attack, it's possible to evaluate whether an application has any potential vulnerabilities resulting from poor or improper system configuration, hardware or software flaws, or weaknesses in the perimeter defenses protecting the application.

Unfortunately, an "all clear" result from a penetration test doesn't mean that an application has no problems. Penetration tests can miss weaknesses such as session forging and inadequate brute-force detection. This is why security throughout an application's lifecycle is so important. Vulnerabilities are discovered continually by malicious individuals and researchers, and new ones are introduced with new software. Scheduled penetration testing helps ensure security is maintained over time, particularly as new software or changes to system configurations alter the environment in which an application is running.
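To see why a weakness like session forging can slip past a test, consider this hedged sketch (the sequential session-ID scheme and the attacker logic are both hypothetical, invented purely to illustrate the point): session IDs that look opaque to a scanner may still follow a pattern an attacker can infer from a handful of observed values.

```python
import secrets

class WeakSessionIssuer:
    """Hypothetical server that issues sequential hex session IDs."""
    def __init__(self, start=0x1000):
        self._next = start

    def issue(self):
        self._next += 1
        return "%08x" % self._next

def predict_next(observed):
    """Attacker's view: infer the step from two observed IDs and forge the next."""
    a, b = (int(s, 16) for s in observed[-2:])
    return "%08x" % (b + (b - a))

issuer = WeakSessionIssuer()
seen = [issuer.issue(), issuer.issue()]
forged = predict_next(seen)          # attacker guesses the next session ID
assert forged == issuer.issue()      # ...and the guess matches the real one

# By contrast, a cryptographically random token resists this prediction:
strong_id = secrets.token_hex(16)
```

A single pass of testing would likely accept each ID as valid-looking; only analysis of the IDs as a sequence reveals the forgery risk, which is one reason ongoing review matters more than any one test.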

This was first published in June 2009