Our client is a public services infrastructure company located in the Middle East. To safeguard their critical online assets, including websites and a data centre, the organization invested in cloud-based, continuous mitigation services offered and managed by their Internet service provider (ISP).
Previously, the company had encountered DNS-based attacks that they were unable to mitigate successfully. Following their standard operational protocol, they aimed to assess the effectiveness of both their mitigation service and the on-premise web application firewall within a 120-minute maintenance window.
The evaluation involved testing their defence against various Layer 3, Layer 7, and DNS-based attack types, while confirming their ability to maintain operations during real attacks.
In collaboration with the client, Babble engineers proposed a series of six tests, including application, volumetric and DNS based attacks designed to stress different aspects of their mitigation service, DNS servers, web servers and firewall.
A two-hour test window was allocated, delivering 104 minutes of attack traffic against three different targets. To simulate real-world traffic accurately, four botnets, comprising 120 bots across 21 global locations, were used.
Notably, the ISP's market-leading mitigation service was not informed about the DDoS test beforehand.
The sequence began with a mild HTTP Slow Post attack. The client's server did not appear significantly affected, but the SIEM was overwhelmed with event logs and at risk of becoming unstable.
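A Slow Post attack trickles a request body in at a crawl to tie up server connections. As a purely illustrative sketch (not the client's or the WAF's actual logic, and with hypothetical thresholds), such connections can be flagged by comparing bytes received against time elapsed:

```python
def looks_like_slow_post(bytes_received, elapsed_seconds,
                         min_rate_bps=100, min_elapsed=30):
    """Heuristic: flag a POST that has been open for a while yet is
    receiving its body far below a normal client's rate.
    min_rate_bps and min_elapsed are hypothetical tuning values."""
    if elapsed_seconds < min_elapsed:
        return False  # too early to judge a legitimate slow client
    return bytes_received / elapsed_seconds < min_rate_bps

print(looks_like_slow_post(2048, 5))   # False: connection too young to judge
print(looks_like_slow_post(1200, 60))  # True: 20 B/s sustained for a minute
```

In practice, flagged connections would be closed or rate-limited at the proxy tier rather than logged individually, which also avoids the log-volume problem the SIEM ran into here.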
Subsequently, testing transitioned to a DNS Request Flood, which caused the DNS server to become unresponsive. The mitigator identified the attack but dismissed it as a false positive, and other hosts in the same subnet were impacted.
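A flood of this kind is typically spotted by tracking per-source query rates over a sliding window. The sketch below illustrates the idea only; the window length and threshold are hypothetical values, not those of the ISP's mitigator:

```python
from collections import defaultdict, deque

class DnsFloodDetector:
    """Flags source IPs whose DNS query rate exceeds a threshold
    within a sliding time window (hypothetical tuning values)."""

    def __init__(self, window_seconds=10, threshold=200):
        self.window = window_seconds
        self.threshold = threshold
        self.queries = defaultdict(deque)  # source IP -> query timestamps

    def observe(self, src_ip, timestamp):
        """Record one query; return True if src_ip now looks like a flooder."""
        q = self.queries[src_ip]
        q.append(timestamp)
        # Drop timestamps that have fallen out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.threshold

detector = DnsFloodDetector()
# Simulate 250 queries from one source arriving within a single second.
alerts = [detector.observe("203.0.113.9", 0.004 * i) for i in range(250)]
print(any(alerts))  # True: the source exceeds 200 queries in the window
```

Keying the state on source IP is also what limits "bystander fallout": a correctly scoped response throttles the offending sources rather than the whole subnet.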
The third test featured a UDP Flood against a dedicated website, which became slow and eventually unavailable. No mitigation alerts were sent to the customer, and the ISP failed to detect the attack.
Tests four and five were intentionally conducted at the same time, creating a blended attack targeting multiple layers of mitigation.
In Test 4, a Dynamic HTTP Flood sent requests for random URLs to the primary website and went undetected by both the ISP and the WAF. We expected either to flag the increased connection rate or the flood of returned 404s.
Lastly, a Tsunami SYN Flood completed the test sequence. In response, the client blocked all traffic originating outside the country. Although the attack traffic was blocked, the site became unreachable from abroad, so to the outside world the denial of service had effectively succeeded.
The tests highlighted real concerns for our client, chief among them their Internet service provider's failure to recognise or report the attacks. In addition, the mitigation service failed to stop several of the attacks and, in some instances, caused “bystander fallout” on unintended targets within the network.
Each member of the live test team was granted access to the dashboard displaying attack traffic, enabling them to document discrepancies as they occurred in real time.
Despite our client’s investment in advanced telemetry techniques, we proposed additional enhancements in traffic log analysis and parsing. We also recommended implementing real-time alerting to promptly notify the relevant internal staff in case of emergency incidents.
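The shape of the real-time alerting we recommended can be sketched minimally, assuming a generic notify callback rather than any specific product; the rate threshold and evaluation interval are hypothetical:

```python
import time

def make_rate_alerter(notify, threshold_per_sec=1000, interval=5.0,
                      clock=time.monotonic):
    """Return a function that counts events and calls notify() when the
    average event rate over an interval exceeds a threshold.
    threshold_per_sec and interval are hypothetical tuning values;
    clock is injectable so the behaviour can be tested deterministically."""
    state = {"count": 0, "start": clock()}

    def record(n=1):
        state["count"] += n
        elapsed = clock() - state["start"]
        if elapsed >= interval:
            rate = state["count"] / elapsed
            if rate > threshold_per_sec:
                notify(f"event rate {rate:.0f}/s exceeds {threshold_per_sec}/s")
            state["count"] = 0
            state["start"] = clock()

    return record

# Demo with a fake clock so the example is deterministic.
alerts = []
fake_time = [0.0]
record = make_rate_alerter(alerts.append, threshold_per_sec=100, interval=1.0,
                           clock=lambda: fake_time[0])
for _ in range(500):  # 500 events in under a second: well over 100/s
    record()
fake_time[0] = 1.0
record()  # crossing the interval triggers evaluation and one alert
print(alerts)
```

In production, the notify callback would page the relevant internal staff (email, SMS, or chat webhook) rather than append to a list.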
Lastly, it was advised that our client initiate communication with their ISP to gain insights into the reasons behind the failure of their protection services.
Babble suggested conducting a second round of tests to validate the recommended changes once implemented. Embracing security best practice, even when compliance does not mandate regular DDoS testing, can serve as a strong signal to auditors, investors, and customers of a company’s unwavering commitment to service availability.