
Web application security guidelines for developers

Because Web applications are often developed by large teams of developers with varying skill levels, a software company’s development lifecycle plan should include training. Even seasoned professionals tend to lack schooling in writing secure code, so this should be a focus area. A shorter-term measure, though, is to ensure each piece of work submitted by a developer is reviewed against a basic list of Web application security guidelines. This helps prevent basic coding flaws and oversights from slipping into the final software release, where they would require expensive rewrites later on.

All developers should be required to add a minimum level of in-code comments using an agreed-upon comment style, along with more verbose supporting documentation (Wikipedia has a comprehensive list of comment styles). Although time spent on commenting and documenting code slows down the initial stages of development, it is recouped several times over as the project progresses and code is reviewed and changed.
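As an illustration only, one agreed style a C#/ASP.NET team might adopt is XML documentation comments on public members plus inline comments that explain intent; the class and the business rule below are invented for the example:

public class DeliveryCalculator
{
    /// <summary>Calculates the delivery charge for an order.</summary>
    /// <param name="weightKg">Total weight of the order in kilograms.</param>
    /// <returns>The delivery charge.</returns>
    public decimal CalculateCharge(decimal weightKg)
    {
        // Flat rate up to 2 kg, then a per-kilogram surcharge (illustrative rule only).
        const decimal flatRate = 3.50m;
        const decimal perKg = 1.25m;
        return weightKg <= 2m ? flatRate : flatRate + (weightKg - 2m) * perKg;
    }
}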

Probably the most important check developers need to complete is that all data has been validated for type, length, format and range boundary. Validation should work on the principle of accepting only what is explicitly allowed and discarding all other input. So, if a function is expecting a date, it should first check that the data entered is indeed a valid date before using it. Likewise, if a postcode should be 6, 7 or 8 characters long, this length must be checked, and an email address should be checked to verify it matches a valid email format.
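The sketch below illustrates this whitelist approach in C#; the date format, the 6-to-8-character postcode rule and the simple email pattern are examples rather than definitive rules:

using System;
using System.Globalization;
using System.Text.RegularExpressions;

public static class InputValidator
{
    // Accept only dates supplied in the single agreed format; everything else is rejected.
    public static bool IsValidDate(string input, out DateTime date)
    {
        return DateTime.TryParseExact(input, "yyyy-MM-dd", CultureInfo.InvariantCulture,
            DateTimeStyles.None, out date);
    }

    // A postcode, including its space, is expected to be 6 to 8 characters.
    public static bool IsValidPostcodeLength(string input)
    {
        return input != null && input.Length >= 6 && input.Length <= 8;
    }

    // Basic format check; a production application may use a stricter pattern.
    public static bool IsValidEmail(string input)
    {
        return input != null && Regex.IsMatch(input, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
    }
}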

Boundary and range checks are probably the ones most commonly omitted. The range for a person’s age in years runs from 1 to generally no more than 110, but what happens if someone enters zero or 2 million? Many developers assume the valid range for the price of a product is a positive currency value, but what if the application needs to support a promotion, such as “buy one get one free”? Some applications calculate the total value by adding a third item with a negative value. Code should never make assumptions about what is normal.
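A minimal sketch of explicit range checks; the 1-to-110 age range and the price ceiling are illustrative business rules, not recommendations:

public static class RangeChecks
{
    // Reject anything outside the explicitly agreed range, including zero and absurdly large values.
    public static bool IsValidAgeInYears(int years)
    {
        return years >= 1 && years <= 110;
    }

    // Do not silently assume prices are always positive; state the rule and enforce it.
    public static bool IsValidUnitPrice(decimal price)
    {
        return price >= 0m && price <= 100000m;
    }
}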

Certainly all free-form input, such as comments, must be sanitized by encoding it to ensure special characters such as < and > can’t be misused by hackers trying to inject malicious code. Developers should never rely on client-side validation -- JavaScript code that runs in the user’s browser -- as this is easy for a malicious user to circumvent. It’s a good idea to run tests with any client-side checks turned off to ensure the server-side validation is effective.
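A sketch of what the server-side check might look like in an ASP.NET MVC action; the controller name, the 1,000-character limit and the persistence call are invented for illustration, and any client-side script would repeat these rules purely for usability:

using System.Web.Mvc;

public class CommentsController : Controller
{
    [HttpPost]
    public ActionResult Add(string comment)
    {
        // Re-validate on the server even though client-side script performs the same checks,
        // because the client-side checks can simply be switched off or bypassed.
        if (string.IsNullOrWhiteSpace(comment) || comment.Length > 1000)
        {
            ModelState.AddModelError("comment", "Comment is missing or too long.");
            return View();
        }

        // SaveComment(comment);  // hypothetical persistence call; encode the value on output
        return RedirectToAction("Index");
    }
}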

It is important that functions receiving data passed by other functions don’t assume the data has already been validated, as the previous function may have validated it against a different set of requirements or rules. A good example is a telephone number. A function that retrieves and displays a user’s telephone number from a database may well accept + and () symbols, but if it then passes the data to a function that actually dials the number, those characters could cause the call to fail if they are not removed before the number is processed.
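A sketch of a dialling helper that normalises its own input instead of trusting the display function upstream; mapping “+” to the 00 international prefix is an assumption made for the example:

using System.Text.RegularExpressions;

public static class Telephony
{
    public static string NormaliseForDialling(string displayNumber)
    {
        // The display format may legitimately contain +, (, ), spaces and dashes;
        // the dialler wants digits only, so strip everything else before processing.
        string withPrefix = displayNumber.Replace("+", "00");
        return Regex.Replace(withPrefix, "[^0-9]", "");
    }
}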

URLs used to pass data to an application are a favourite attack vector, so they need to be extensively tested. A good test is to add an extremely large amount of data to the URL query string to make sure the validation checks constrain inputs before processing them. Functions intended to receive data via a POST action should also be sent data via GET to make sure it is rejected. POST action handlers should also use a ValidateAntiForgeryToken-type attribute to prevent cross-site request forgery (XSRF) attacks. (ValidateAntiForgeryToken is an attribute within the ASP.NET Framework that verifies a POST request carries the anti-forgery token issued with the application’s own form, rather than coming from a forged request.)
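In ASP.NET MVC this looks roughly like the following; the controller and action names are illustrative, and the matching form would emit the token with @Html.AntiForgeryToken():

using System.Web.Mvc;

public class AccountController : Controller
{
    [HttpPost]                    // requests sent via GET are rejected
    [ValidateAntiForgeryToken]    // the request must carry the token issued with the form
    public ActionResult UpdateEmail(string newEmail)
    {
        // ... validate newEmail and update the account ...
        return RedirectToAction("Index");
    }
}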

Next, try to access restricted pages as a user without the correct permissions to make sure permissions are explicitly checked and enforced by the application. Pages handling sensitive data should only be available via HTTPS, and sensitive data should never be passed in query strings or stored in cookies.
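A sketch of making those checks explicit in ASP.NET MVC; the role name and controller are invented for the example:

using System.Web.Mvc;

[Authorize(Roles = "Administrators")]  // unauthenticated or unauthorised users are refused
[RequireHttps]                         // these pages are only served over HTTPS
public class AdminController : Controller
{
    public ActionResult Dashboard()
    {
        return View();
    }
}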

If the application accepts uploaded files, the function handling them needs to be tested by uploading files of an incorrect size (both empty and very large), corrupted files, incorrect formats, and executables with a renamed extension. The code should handle all these cases gracefully without hanging or crashing the system. Also check how the application handles data such as erroneous product IDs or commands a user shouldn’t have access to.
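A sketch of the kind of checks an upload handler should survive, again in ASP.NET MVC; the 2 MB limit and the extension whitelist are illustrative:

using System;
using System.IO;
using System.Web;
using System.Web.Mvc;

public class UploadController : Controller
{
    private static readonly string[] AllowedExtensions = { ".jpg", ".png", ".pdf" };
    private const int MaxBytes = 2 * 1024 * 1024;  // 2 MB

    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        if (file == null || file.ContentLength == 0 || file.ContentLength > MaxBytes)
            return new HttpStatusCodeResult(400, "File missing or wrong size.");

        string ext = Path.GetExtension(file.FileName) ?? string.Empty;
        if (Array.IndexOf(AllowedExtensions, ext.ToLowerInvariant()) < 0)
            return new HttpStatusCodeResult(400, "File type not allowed.");

        // The extension alone can be spoofed; inspecting the file content as well
        // helps catch renamed executables and corrupted files.
        return RedirectToAction("Index");
    }
}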

Developers need to code so that any errors and exceptions are logged, but error messages returned to the client must not leak information about the application or system, as this may help an attacker fingerprint the application. Certain security events should trigger a notification to the application administrator.
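A sketch of the pattern: full details go to the server-side log, while the client gets only a generic message; the logging call is a stand-in for whatever logging framework is in use, and the data-access call is hypothetical:

using System;
using System.Web.Mvc;

public class OrdersController : Controller
{
    public ActionResult Details(int id)
    {
        try
        {
            // var order = LoadOrder(id);  // hypothetical data access
            return View();
        }
        catch (Exception ex)
        {
            // Stack traces, connection strings and query details stay in the server log only.
            System.Diagnostics.Trace.TraceError("Order lookup failed: {0}", ex);
            // The client learns nothing about the database, framework or file paths.
            return new HttpStatusCodeResult(500, "An error occurred. Please try again later.");
        }
    }
}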

All these checks focus on data the application receives, but developers also need to ensure the application correctly encodes the data their code outputs for display. Output that includes user input, even if it is coming from a trusted database, should be encoded with HtmlEncode- and UrlEncode-type methods in order to combat cross-site scripting attacks.
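For example, with the System.Web helpers (the comment value and helper names are illustrative):

using System.Web;

public static class OutputEncoding
{
    public static string ToHtml(string storedComment)
    {
        // "<script>alert(1)</script>" is rendered as text on the page, not executed.
        return HttpUtility.HtmlEncode(storedComment);
    }

    public static string ToQueryStringValue(string value)
    {
        return HttpUtility.UrlEncode(value);
    }
}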

Complex applications should be tested using a vulnerability scanner and other tools. For example, to see how the application handles a denial-of-service attack, use Microsoft’s free TinyGet utility. This is a command-line HTTP client that provides detailed output and response validation. As it supports multiple threads and looping, you can use it to stress test and troubleshoot HTTP client-to-server communication.
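As a rough stand-in for what such a tool does -- not TinyGet itself -- the following C# sketch hammers a single URL from several threads; the URL and the thread and loop counts are made up, and it should only ever be pointed at a system you are authorised to test:

using System;
using System.Net;
using System.Threading;

public static class SimpleLoadTest
{
    public static void Main()
    {
        const string url = "https://test-server.example/checkout";  // hypothetical test target
        var threads = new Thread[10];
        for (int t = 0; t < threads.Length; t++)
        {
            threads[t] = new Thread(() =>
            {
                using (var client = new WebClient())
                {
                    // Each thread issues 100 sequential GET requests and reports failures.
                    for (int i = 0; i < 100; i++)
                    {
                        try { client.DownloadString(url); }
                        catch (WebException ex) { Console.WriteLine(ex.Status); }
                    }
                }
            });
            threads[t].Start();
        }
        foreach (var thread in threads) thread.Join();
    }
}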

However, by putting the onus on developers to produce secure code the first time around, you will greatly reduce the number of flaws that these tools uncover and that require fixing as the application nears its release date.

About the author:
Michael Cobb, CISSP-ISSAP, CLAS is a renowned security author with more than 15 years of experience in the IT industry. He is the founder and managing director of Cobweb Applications, a consultancy that provides data security services delivering ISO 27001 solutions. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications. Cobb serves as SearchSecurity.com’s contributing expert for application and platform security topics, and has been a featured guest instructor for several of SearchSecurity.com’s Security School lessons.

This was first published in November 2011
