The second way to test for buffer overflows is to look at compiled code. For a long time, many security professionals believed that the only way to detect vulnerabilities was to review a program's source code. This was (and is) just not true. Numerous tools exist that look for vulnerabilities in a program's compiled, low-level assembly code. Other tools utilize fuzzing techniques, which test software by feeding it massive amounts of malformed or random data and watching for errors or crashes.
To review compiled code for suspicious patterns in the assembly, try tools like msfpescan and msfelfscan from the Metasploit project.
For fuzzing, look at tools like SPIKE, BreakingPoint Systems' testing tools and Mu Dynamics' analyzer products. SPIKE is free, while BreakingPoint's and Mu's tools are commercial. If application security checking is an extensive part of your job, I would strongly suggest considering a commercial product.
Many of the available fuzzing products take known-good input and mangle it in an attempt to break the application. To do this manually, find the inputs to an application and try to submit as many characters as possible. If the application crashes, review the application and system logs with the developers to determine what went wrong. Obviously, it's wise to make sure you have permission before attempting this.
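The mangling approach described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (the function name and padding sizes are my own choices, not from any particular fuzzing tool): it takes a known-good input string and injects a large run of random characters at a random position, producing an oversized payload you could feed to an input field you are authorized to test.

```python
import random
import string


def mangle(good_input: str, min_extra: int = 100, max_extra: int = 10_000) -> str:
    """Mangle a known-good input by injecting a large run of random
    printable characters at a random position, to probe for missing
    length and character checks. (Hypothetical helper, not a real tool.)"""
    filler_len = random.randint(min_extra, max_extra)
    filler = "".join(random.choices(string.printable, k=filler_len))
    # Pick a random insertion point inside (or at either end of) the input.
    pos = random.randrange(len(good_input) + 1)
    return good_input[:pos] + filler + good_input[pos:]


if __name__ == "__main__":
    sample = "state=TX"
    fuzzed = mangle(sample)
    # The fuzzed payload is always strictly longer than the original.
    print(len(fuzzed) > len(sample))
```

In practice you would loop over this, submit each payload to the target input, and watch the application and system logs for crashes or error conditions, as described above.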
Ultimately, the issue is much wider than simply checking for buffer overflows. Buffer overflows stem from developers failing to validate the inputs to their applications. For every data type in an application, there should be a limit on the number and type of characters it can accept. Identify any inputs that accept data that should not be allowed. For example, a field asking for "State" does not need to allow *, $, @, or ^. Regardless of whether a buffer overflow exists, such a field should be fixed to allow only the standard A-Z and a-z character set.
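The whitelist approach for the "State" field example above can be sketched as follows. This is a minimal illustration (the pattern and length limit are assumptions, not a definitive policy): it accepts only letters and spaces up to a fixed length, rejecting inputs containing characters like *, $, @, or ^ as well as oversized submissions.

```python
import re

# Whitelist for a "State" field: letters and spaces only, bounded length.
# The 30-character cap is an assumed limit; a two-letter-code field could
# be tightened further to ^[A-Za-z]{2}$.
STATE_PATTERN = re.compile(r"^[A-Za-z ]{1,30}$")


def is_valid_state(value: str) -> bool:
    """Return True only if the value matches the whitelist exactly;
    anything else (special characters, empty, oversized) is rejected."""
    return STATE_PATTERN.fullmatch(value) is not None


print(is_valid_state("Texas"))      # True
print(is_valid_state("T$X"))        # False: contains a disallowed character
print(is_valid_state("A" * 1000))   # False: exceeds the length limit
```

Whitelisting (defining what is allowed) is generally safer than blacklisting (enumerating what is forbidden), because any character you forget to blacklist slips through by default.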
This was first published in October 2008