You've done it; your first enterprise network penetration test is now complete. The only problem is that you have what seems like a mountain of vulnerability information, and you don't know how to parse it to identify the truly relevant weaknesses you've uncovered, let alone use that information to strengthen network defenses.
While the thought of starting a network penetration test analysis may make your head spin, in this tip we'll detail a step-by-step process for analyzing and acting on penetration test results data.
During the penetration test design process, boundaries need to be laid out to prevent scope creep: namely, which devices, services and networks will be tested and which won't. This scope will depend on the objectives of the test. Be sure to document and save this plan, as after the test it will serve as your framework for what the results should (and shouldn't) encompass.
For instance, say your scope included all routers and switches within the organization, and the task was to check for any vulnerability that could be associated with those devices. In the process of your scan, the data shows a Windows box that has a vulnerable FTP server on it. While the vulnerable service poses a risk, spending time evaluating a device that is out of scope can impact your penetration test analysis timeline. Such a discovery may be indicative of a minor privileged access management issue or a more serious breach, but either way it falls outside the agreed-upon scope.
If time and requirements allow for it, the data can be used in an addendum to your report, presenting a high-level overview of out-of-scope vulnerabilities discovered that need further review. Either way, never destroy results that could be pertinent but are not directly part of the deliverable until given the go-ahead.
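The scope check described above can be automated. The following is a minimal sketch in Python: the subnets, hosts and the `in_scope` helper are illustrative assumptions, not part of any particular engagement.

```python
import ipaddress

# Hypothetical in-scope subnets, as agreed upon before the test began.
IN_SCOPE = [ipaddress.ip_network(n) for n in ("10.1.0.0/16", "192.168.10.0/24")]

def in_scope(host: str) -> bool:
    """Return True if the host falls inside an agreed-upon in-scope subnet."""
    addr = ipaddress.ip_address(host)
    return any(addr in net for net in IN_SCOPE)

# The vulnerable FTP box from the example would be set aside, not chased:
print(in_scope("10.1.5.20"))   # in scope, analyze now
print(in_scope("172.16.0.9"))  # out of scope, park it in the addendum
```

Running every discovered host through a filter like this early on keeps out-of-scope findings from silently consuming analysis time.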
Once you've identified the penetration test vulnerability results that are in scope, verify their validity using multiple resources. This is an important step because no single tool will always give accurate information relative to the scope of your test. These resources should generally include network discovery results from a tool such as Nmap; that data can then be correlated with results from a vulnerability assessment tool like Nessus.
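One simple way to correlate the two sources is to treat a scanner finding as confirmed only when network discovery agrees the service is actually listening. A rough sketch, with made-up hosts, ports and field names standing in for real Nmap and Nessus output:

```python
# (host, port) pairs where Nmap confirmed an open service (illustrative data).
nmap_open = {("10.1.5.20", 21), ("10.1.5.20", 22), ("10.1.5.30", 80)}

# Findings from a vulnerability scanner; field names are placeholders.
scanner_findings = [
    {"host": "10.1.5.20", "port": 21, "title": "FTP cleartext login"},
    {"host": "10.1.5.40", "port": 23, "title": "Telnet service"},  # no Nmap match
]

confirmed = [f for f in scanner_findings if (f["host"], f["port"]) in nmap_open]
unverified = [f for f in scanner_findings if (f["host"], f["port"]) not in nmap_open]

print([f["title"] for f in confirmed])    # cross-validated, report with confidence
print([f["title"] for f in unverified])   # re-test manually before reporting
```

Anything in the unverified bucket isn't necessarily a false positive, but it warrants a manual look before it goes into the deliverable.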
During the initial penetration test results analysis, the key is to weed out vulnerabilities that may be tied to particular services but are irrelevant based on platform. A good example is an SMB exploit for Windows reported against what is actually a Linux box running Samba, a suite that enables several non-Windows platforms to interact with Windows systems using TCP/IP. Although newer scanners keep getting better at platform identification, networks can often obfuscate actual platform results because of dedicated security devices such as firewalls, IPSes and load balancers.
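This platform sanity check can also be expressed as a filter. The sketch below assumes you have OS fingerprint results per host and an advisory platform per finding; both field names are hypothetical, not from any specific scanner:

```python
# OS fingerprint results per host (illustrative).
host_platforms = {"10.1.5.50": "Linux"}

# Findings with the platform the advisory actually targets (illustrative).
findings = [
    {"host": "10.1.5.50", "advisory_platform": "Windows",
     "title": "SMB remote code execution"},
    {"host": "10.1.5.50", "advisory_platform": "Linux",
     "title": "Samba symlink traversal"},
]

# Keep only findings whose advisory platform matches the fingerprinted OS.
relevant = [f for f in findings
            if f["advisory_platform"] == host_platforms.get(f["host"])]

print([f["title"] for f in relevant])  # the Windows SMB exploit drops out
```

Given how often fingerprinting is obscured by intermediate devices, treat a mismatch as a reason to investigate rather than to discard automatically.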
From here, compare the penetration test results data to any and all network documentation. Today more penetration tests are being run in-house as white-box tests to validate that network design reflects operational implementation, which lends much insight to the correlation of data. From this perspective, a white-box test can generally be more accurate, as an actual vulnerability can be correlated against a network architecture diagram, IP subnet allocations, service lists and other documented aspects of the network. If you're running an internal test and have access to this information, using it in this fashion will help you flush out false positives that would otherwise burn research time better spent on critical vulnerabilities.
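As a sketch of that white-box cross-check, the snippet below flags any finding whose host or service doesn't appear in the documented IP allocations and service list. The subnets and service entries are placeholder assumptions:

```python
import ipaddress

# Documented network allocation and service list (illustrative placeholders).
documented_subnets = [ipaddress.ip_network("10.1.0.0/16")]
documented_services = {("10.1.5.20", 22), ("10.1.5.30", 80)}

# (host, port) pairs pulled from the in-scope findings.
findings = [("10.1.5.20", 22), ("10.9.9.9", 3389)]

suspect = []
for host, port in findings:
    addr = ipaddress.ip_address(host)
    documented = (any(addr in net for net in documented_subnets)
                  and (host, port) in documented_services)
    if not documented:
        # Either a false positive, stale documentation, or a rogue service.
        suspect.append((host, port))

print(suspect)
```

A hit here doesn't automatically mean a false positive; it may equally indicate stale documentation or an undocumented system, both of which are findings in their own right.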
Once you feel the results have been whittled down to a manageable number, it is time to apply security metrics to appropriately rank the risks you've identified. These can be general, but should take into consideration the nuances of your environment. There are a number of in-depth, free and easy-to-implement metrics frameworks out there. One is the Open Source Security Testing Methodology Manual (OSSTMM), and another is the Open Software Assurance Maturity Model (OpenSAMM). While the OSSTMM is geared directly at security testing, both have good examples and activities for building a metrics system for your organization.
Some of your penetration test tools will provide relative risk ratings with limited scope: that of the actual vulnerability, with no consideration of any other controls in place. This is both good and bad. It's good because it can provide a quick overview of high-risk, remotely exploitable vulnerabilities that may allow an attacker to completely take over an endpoint device. It's bad because the vulnerability identified may be on a box tucked so deeply into a dark part of your network, protected by many other controls, that other, more exposed findings should actually be more concerning.
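To make the ranking reflect the environment rather than just the raw tool rating, one approach is to weight each base score by factors your own metrics define, such as asset criticality and network exposure. The weighting formula and all values below are illustrative placeholders, not numbers from OSSTMM, OpenSAMM or any scanner:

```python
def risk_score(base: float, criticality: float, exposure: float) -> float:
    """Environment-weighted score (sketch).

    base: raw scanner severity, 0-10.
    criticality, exposure: 0.0-1.0 weights from your own metrics; 1.0 means
    a critical, fully exposed asset, leaving the base score undiscounted.
    """
    return round(base * (0.5 + 0.5 * criticality) * (0.5 + 0.5 * exposure), 1)

# (name, base severity, asset criticality, exposure) -- illustrative findings.
findings = [
    ("FTP cleartext login", 7.5, 0.9, 0.8),   # core segment, widely reachable
    ("Telnet on lab switch", 9.0, 0.2, 0.1),  # isolated lab, many controls
]

ranked = sorted(((risk_score(b, c, e), name) for name, b, c, e in findings),
                reverse=True)
for score, name in ranked:
    print(score, name)
```

Note how the "lower-severity" FTP issue outranks the tightly tucked-away Telnet service once environmental weights are applied, which is exactly the nuance out-of-the-box ratings miss.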
Ultimately, the motivations behind the test itself will drive what the final report looks like. As an example, the report might describe the penetration testing objective and scope, the penetration testing results in ranked order, conclusions with remediation recommendations, and an optional appendix for further description or out-of-scope findings. Remember that realistic threat rankings come from the metrics you define; out-of-the-box high, medium and low rankings generally do not reflect the actual risk accurately.
Penetration testing remains an important method for discovering network security weaknesses. It's a lot of time and effort, and it doesn't make sense to go through the process without a strategy for making use of the results. By verifying the scope of the test, validating the results, applying metrics to classify their severity, and reporting the findings in a clear, concise way, you can provide a valuable service that truly reflects the current network security risk posture of your organization.
About the author:
David Meier is a security consultant specializing in network architecture and current (and realistic) threats. He has designed and implemented solutions for the Air Force and Fortune 100 companies. David is also a contributor at security research and analysis firm Securosis.
This was first published in February 2010