Malicious attackers are already using fuzzing. We should too.
The history of Secure Sockets Layer (SSL) and its successor, Transport Layer Security (TLS), has been marked by significant cryptographic breaks and implementation flaws, with exploits wreaking havoc on enterprises and the public. TLS v1.3 is a major leap forward for web-based encrypted communication, offering improved security and performance compared to its predecessors. TLS v1.3 strengthens cryptographic robustness by removing obsolete and insecure features from TLS v1.2; removing these features simplifies the protocol and reduces exposure to the underlying vulnerabilities. Alongside these and a number of other security improvements, TLS v1.3 brings performance gains over TLS v1.2, including an optimized initial handshake and an enhancement of TLS v1.2 session resumption to support Zero Round Trip Time Resumption (0-RTT).
Encrypting traffic is an essential part of services offered across verticals including the Internet of Things (IoT), financial services, and government. Gartner predicts that by 2019 more than 80% of enterprises’ web traffic will be encrypted(1). As a reference point, the Google Transparency Report indicates that over 80% of traffic to Google’s servers is now encrypted. Furthermore, encrypted traffic to Google servers increased by over 30% in three years (Jan 1, 2014 through Dec 31, 2016)(2).
In order to make these emerging encryption deployments effective at improving privacy and security, it is imperative that organizations proactively investigate the viability and robustness of encrypted transport-layer technologies, as well as of solutions built on such services. Vulnerabilities in implementations of transport-layer protocols have been exploited in malicious attacks in recent years. For example, ‘Heartbleed’, one of the most significant security bugs of all time, was a flaw in the OpenSSL library’s implementation of the TLS Heartbeat extension. Given this landscape that lies ahead for TLS v1.3 deployments, what actions may be taken proactively to improve the robustness of implementations and harden the deployed services?
One of the more promising approaches to uncovering bugs missed in manual code audits is fuzz testing. Fuzz testing, or fuzzing, delivers invalid, unexpected, or random data to the inputs of a computer program, operating system, or hardware system while monitoring for crashes, failed assertions, or other unexpected behavior. Broadly speaking, there are two fuzzing strategies:
- Mutation based
- Smart/Generation based
Mutation-based fuzzing, sometimes referred to as dumb fuzzing (since it lacks understanding of the format, structure, model, or protocol), modifies/mutates existing data inputs to generate test cases. The modifications can be made in any number of ways, including flipping bits, changing field lengths, or similar schemes.
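As a minimal sketch of the mutation-based approach, the following Python snippet takes a seed input and flips random bits in it to derive new test cases. The seed bytes and the number of flips are arbitrary choices for illustration; a real fuzzer would feed each mutated case to the target and monitor for crashes.

```python
import random

def mutate(data: bytes, num_flips: int = 8) -> bytes:
    """Mutation-based ('dumb') fuzzing: flip random bits in a seed input,
    with no knowledge of the underlying format or protocol."""
    buf = bytearray(data)
    for _ in range(num_flips):
        pos = random.randrange(len(buf))   # pick a random byte position
        bit = 1 << random.randrange(8)     # pick a random bit in that byte
        buf[pos] ^= bit                    # flip it
    return bytes(buf)

# Start from a (hypothetical) valid seed input and derive fuzzed variants.
seed = b"\x16\x03\x01\x00\x05hello"
test_cases = [mutate(seed) for _ in range(5)]
```

Because the mutator never parses the input, it is trivial to apply to any target, but most mutated cases are rejected early by the target's parser, which limits how deep into the code they reach.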
Smart/Generation-based fuzzing creates test cases by modeling the target protocol, file format, and so on. This type of fuzzing takes the input format, structure, model, or protocol and generates new test inputs from scratch. Since the template for the protocol is given, the fuzzer can dynamically parse and generate fuzz data.
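To contrast with the mutation sketch above, here is a hedged illustration of generation-based fuzzing: test cases are built from scratch using a toy model of a TLS record header (content type, legacy version, length, payload). The field values and probabilities below are illustrative assumptions, not a faithful TLS implementation; the point is that the fuzzer knows the structure and can target individual fields, e.g. deliberately mismatching the length field to probe a parser's bounds checks.

```python
import random
import struct

# Toy model of a TLS record header, chosen for illustration:
# mostly-valid content types and legacy version bytes.
CONTENT_TYPES = [20, 21, 22, 23]     # change_cipher_spec, alert, handshake, application_data
VERSIONS = [(3, 1), (3, 3), (3, 4)]  # legacy version bytes for TLS 1.0 / 1.2 / 1.3

def generate_record() -> bytes:
    """Generation-based fuzzing: build each test case from scratch using
    knowledge of the record structure, fuzzing individual fields."""
    # Mostly valid content types, occasionally a random invalid one.
    ctype = random.choice(CONTENT_TYPES + [random.randrange(256)])
    major, minor = random.choice(VERSIONS)
    payload = bytes(random.randrange(256) for _ in range(random.randrange(32)))
    # Occasionally lie about the length field to stress bounds checks.
    length = len(payload) if random.random() < 0.8 else random.randrange(2 ** 16)
    return struct.pack("!BBBH", ctype, major, minor, length) + payload

cases = [generate_record() for _ in range(10)]
```

Because each generated case is structurally plausible, it tends to pass initial parsing and exercise deeper protocol logic, at the cost of having to encode the protocol model up front.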