One of the most overlooked secure programming principles, "allocation of resources without limits or throttling," made the list. This is not terribly surprising: developers often get so absorbed in getting an application up and running that they don't bother checking how it behaves under heavy usage.
In this tip, we'll explore why improper and uncontrolled resource allocation management is a security risk and how to prevent it from affecting your organisation.
What is security resource allocation management?
So what does allocating resources without limits or throttling entail? It occurs when part of an IT infrastructure (such as a server or an application) doesn't control the amount of resources it consumes. Resources can be memory, CPU cycles, socket connections, or any element of a physical or logical network infrastructure. Consumption of these resources often goes unchecked because of poor design, planning, implementation or IT management.
It can be difficult to understand this lack of control in IT systems, as we are quick to prevent it from occurring in everyday life. For example, you'd never let a single guest consume the entire buffet you'd prepared for a party of 40 people, so why allow an FTP server used by your research department of 20 people to accept unlimited connections? Sometimes, though, we can be caught unawares and not realise it is happening until it is too late, which is why resource allocation awareness is important.
When the use of resources in an IT system isn't controlled, some form of denial-of-service (DoS) can occur. For example, when there are no storage limits set for network users, a single user could unwittingly fill the entire file server and prevent other users from saving or archiving their work. What's more, malicious users could exploit such a vulnerability to mount a crippling attack against an entire system.
Preventing resource allocation problems
So what is the best way to limit or throttle resource allocation to reduce the likelihood of successful DoS attacks?
The easiest form of resource allocation management involves configuring the way servers handle requests and connections using the resource-limiting settings provided by the operating system or service.
Most Web server software allows for limiting the number of possible simultaneous client connections, which not only protects the server against overloading, but also conserves memory and bandwidth for other services, such as email and FTP servers running on the same machine.
If the number of connections reaches the defined maximum, all subsequent connection attempts result in an error and the connection is closed. That prevents the system from becoming unstable, but it can mean that genuine users are prevented from accessing the Web server. Therefore, ensure your system is resourced to handle an acceptable maximum and that alerts are in place for when limits are reached. Be advised, however, that error messages returned to users should not provide any system information that would be useful to an attacker.
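To illustrate the accept-or-refuse behaviour described above, here is a minimal Python sketch of a connection cap. The `ConnectionLimiter` class and its limit of two connections are hypothetical, standing in for the equivalent setting in your server software:

```python
import threading

class ConnectionLimiter:
    """Caps the number of simultaneous connections a service will accept."""

    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.active = 0
        self.lock = threading.Lock()

    def try_acquire(self):
        # Returns True if a new connection may proceed, or False if the
        # server is at its configured limit and must refuse it.
        with self.lock:
            if self.active >= self.max_connections:
                return False
            self.active += 1
            return True

    def release(self):
        # Called when a connection closes, freeing a slot.
        with self.lock:
            self.active = max(0, self.active - 1)

limiter = ConnectionLimiter(max_connections=2)
print(limiter.try_acquire())  # True  -> first client accepted
print(limiter.try_acquire())  # True  -> second client accepted
print(limiter.try_acquire())  # False -> third client refused at the limit
limiter.release()             # a client disconnects
print(limiter.try_acquire())  # True  -> freed slot accepts a new client
```

In a real server the refusal path would return an error to the client; as noted above, that error should reveal nothing useful to an attacker.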
Before defining your limits, obtain a baseline of how your server performs under "normal" usage. For those using Microsoft Internet Information Services (IIS), System Monitor can log the Current Connections, Maximum Connections and Total Connection Attempts counters on the Web service and FTP service objects. Apache administrators can use the Apache Module mod_status to collect their server's stats. Then make incremental changes to settings, such as connection limits and timeouts, and see how they impact performance, keeping in mind that more aggressive limits can increase protection against malicious attacks.
Under normal usage, a server should use no more than 50% of its total available bandwidth so the remaining bandwidth can be used during peak periods. Other settings that Web administrators should consider include:
- Response timeout, which prevents malicious or malfunctioning clients from consuming resources by holding a connection open with minimal data.
- Connection timeout, which helps reduce the amount of memory resources that are consumed by idle connections.
These types of resource controls provide a way of limiting the amount of resources that are accessible to anonymous, potentially malicious users.
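The effect of these timeout settings can be sketched in Python using standard socket timeouts. The `read_request` function and its limits are illustrative assumptions, not any particular server's implementation:

```python
import socket

def read_request(conn, idle_timeout=5.0, max_bytes=8192):
    """Read a client request, dropping idle or slow connections."""
    # An idle or trickling client raises socket.timeout instead of
    # holding memory and a connection slot open indefinitely.
    conn.settimeout(idle_timeout)
    chunks = []
    received = 0
    try:
        while received < max_bytes:  # cap memory consumed per request
            data = conn.recv(1024)
            if not data:             # client finished sending
                break
            chunks.append(data)
            received += len(data)
    except socket.timeout:
        return None                  # drop the connection, free resources
    return b"".join(chunks)

# Demo with an in-process socket pair standing in for a real client.
server_side, client_side = socket.socketpair()
client_side.sendall(b"GET /index.html")
client_side.close()                  # client disconnects cleanly
print(read_request(server_side))     # b'GET /index.html'
server_side.close()
```

Both limits serve the same goal: no single connection, however slow or greedy, can consume more than a bounded share of the server's resources.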
Mitigating attacks caused by improper resource allocation management
In addition to proper resource allocation planning, it's important to be able to recognise when a malicious resource consumption event may be taking place. An attacker will normally use a script that makes repeated requests in order to exhaust a target's resources and cause a denial-of-service. Web applications must be able to recognise when they are under this type of attack and deny the script further access, typically by using longer time delays. One way to detect and prevent a DoS attack is by using velocity checks, which track the rate of requests received from a single IP address. This type of control would have helped prevent the scam that took place in late 2007, when a hacker managed to open 11,385 bogus accounts at Schwab.com from the same five IP addresses all with the same username.
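A velocity check of the kind described above can be sketched as a sliding-window counter per source IP. The class name, limits, and addresses below are hypothetical:

```python
import time
from collections import defaultdict, deque

class VelocityCheck:
    """Tracks request rate per source IP; denies addresses over the limit."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        stamps = self.history[ip]
        # Discard requests that have fallen out of the sliding window.
        while stamps and now - stamps[0] > self.window:
            stamps.popleft()
        if len(stamps) >= self.max_requests:
            return False  # over the limit: deny and flag this address
        stamps.append(now)
        return True

checker = VelocityCheck(max_requests=3, window_seconds=60)
print([checker.allow("203.0.113.9", now=t) for t in (0, 1, 2, 3)])
# [True, True, True, False] -> fourth request inside the window is denied
print(checker.allow("203.0.113.9", now=120))  # True: the window has slid past
```

Denied addresses can then be subjected to the escalating time delays mentioned above, which make scripted attacks progressively more expensive.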
Another form of resource allocation attack doesn't depend as much on the volume of requests made but on manipulating the target's functionality in order to deplete resources such as memory. Mitigating this type of resource attack requires a review of application code to locate failures to release system resources such as files, sockets, processes and memory, and to catch cases where too much of a resource is requested at once, as can occur with memory allocation.
While automated dynamic analysis techniques and fuzzing are typically geared toward finding these sorts of coding errors, they can also be used to stress-test an application by generating a large number of requests within a short time frame, similar to an automated attack. If this results in the software crashing, then a failure to limit resource allocation may be the cause.
Code reviews should also ensure that user-generated inputs that affect the amount of memory required fall within a defined limit, and that any failure in resource allocation automatically places the system into a safe posture. So, for example, requests should be throttled when a threshold limit is hit, and code that handles resource allocation should always use structured exception handling, which enables an application to retain control even when events that would normally terminate execution occur.
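The two practices just described, bounding user-supplied sizes and guaranteeing resource release on failure, can be sketched together in Python. The function name and the size ceiling are illustrative assumptions:

```python
import io
import os
import tempfile

MAX_UPLOAD_BYTES = 5 * 1024 * 1024  # hypothetical per-request ceiling

def save_upload(path, stream, declared_size):
    """Write a client upload to disk within a defined size limit."""
    # Reject requests whose size falls outside the permitted range
    # before any memory or disk space is committed to them.
    if not 0 < declared_size <= MAX_UPLOAD_BYTES:
        raise ValueError("upload size outside permitted range")
    fh = open(path, "wb")
    try:
        remaining = declared_size
        while remaining > 0:
            chunk = stream.read(min(64 * 1024, remaining))
            if not chunk:
                break
            fh.write(chunk)
            remaining -= len(chunk)
    finally:
        fh.close()  # the file handle is released even if the write fails

# Demo: a small upload within the limit succeeds...
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
save_upload(tmp.name, io.BytesIO(b"report data"), declared_size=11)
with open(tmp.name, "rb") as fh:
    print(fh.read())  # b'report data'
os.unlink(tmp.name)

# ...while an oversized request is refused before resources are allocated.
try:
    save_upload(tmp.name, io.BytesIO(b"x"), declared_size=MAX_UPLOAD_BYTES + 1)
except ValueError:
    print("oversized request rejected")
```

The `try`/`finally` structure is the structured exception handling referred to above: whatever goes wrong mid-write, the resource is returned to the system.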
Security rules, such as a Web forum only allowing registered and authenticated users to post X number of messages per day, need to be enforced and any resource-intensive functions need to be able to recognise and block malicious behaviour, such as attacks by automated scripts. Access to such functionality should sit behind a strong authentication and access control model, which will reduce the likelihood of automated attacks succeeding and help to identify where the attack is coming from.
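A per-user daily quota of the kind a Web forum might enforce can be sketched as follows. The class, the limit, and the usernames are hypothetical; in practice the counts would live in persistent storage keyed to authenticated accounts:

```python
import datetime
from collections import defaultdict

class PostQuota:
    """Enforces a per-user, per-day posting limit for authenticated users."""

    def __init__(self, limit):
        self.limit = limit
        self.counts = defaultdict(int)  # (username, date) -> posts that day

    def may_post(self, username, today=None):
        today = today or datetime.date.today()
        key = (username, today)
        if self.counts[key] >= self.limit:
            return False  # quota exhausted: block further posts today
        self.counts[key] += 1
        return True

quota = PostQuota(limit=2)
day = datetime.date(2010, 4, 1)
print(quota.may_post("alice", day))  # True
print(quota.may_post("alice", day))  # True
print(quota.may_post("alice", day))  # False -> daily limit reached
```

Because the quota is tied to an authenticated identity rather than an IP address, an automated script cannot evade it simply by rotating addresses, and any account that repeatedly hits the limit is easy to identify.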
Although there's no simple way of preventing DoS and other attacks against key enterprise resources, ensuring that application code and system configuration handle resource allocation appropriately, and that all systems are adequately resourced, means any attacker will need far greater resources at his disposal to mount a successful attack.
About the author:
Michael Cobb, CISSP-ISSAP is the founder and managing director of Cobweb Applications Ltd., a consultancy that offers IT training and support in data security and analysis. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications.
This was first published in April 2010