A Messy State of the Union: Taming the Composite State Machines of TLS
This paper is about the state machines of TLS implementations such as OpenSSL and NSS. When parts of TLS are formally proven secure, it is typically only specific cipher suites in isolation; the security proofs don't consider the big picture, such as the non-trivial state machine that drives TLS's handshake and subsequent data transfer.
The authors began by reconstructing the TLS state machine, including its valid state transitions. That by itself is a non-trivial task, given the complexity of the standard, which is spread across several RFCs. Having established the valid state transitions, the authors then devised a set of invalid transitions that a correct implementation must reject by aborting the connection. Finally, they implemented a small tool, FlexTLS, that assists in testing a TLS implementation by feeding it these invalid state transitions.
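The approach can be sketched as a transition-table check: enumerate which messages are acceptable in each state, then verify that a message trace never deviates from the table. The states, message names, and helper below are my own simplified illustration of the idea, not FlexTLS's actual model or API:

```python
# Minimal sketch of a TLS client's view of the handshake state machine.
# States and messages are simplified; real TLS has many more cases.

# Which incoming messages are acceptable in each state.
VALID = {
    "START":         {"ServerHello"},
    "NEGOTIATED":    {"Certificate"},
    "RECEIVED_CERT": {"ServerKeyExchange", "ServerHelloDone"},
    "RECEIVED_SKE":  {"ServerHelloDone"},
    "WAIT_FINISHED": {"Finished"},
}

# Where each (state, message) pair leads.
NEXT = {
    ("START", "ServerHello"):               "NEGOTIATED",
    ("NEGOTIATED", "Certificate"):          "RECEIVED_CERT",
    ("RECEIVED_CERT", "ServerKeyExchange"): "RECEIVED_SKE",
    ("RECEIVED_CERT", "ServerHelloDone"):   "WAIT_FINISHED",
    ("RECEIVED_SKE", "ServerHelloDone"):    "WAIT_FINISHED",
    ("WAIT_FINISHED", "Finished"):          "DONE",
}

def run_trace(trace):
    """Return True iff the message trace follows only valid transitions."""
    state = "START"
    for msg in trace:
        if msg not in VALID.get(state, set()):
            return False  # deviant trace: the connection must be aborted
        state = NEXT[(state, msg)]
    return state == "DONE"

# A well-formed handshake trace is accepted...
assert run_trace(["ServerHello", "Certificate", "ServerKeyExchange",
                  "ServerHelloDone", "Finished"])
# ...while a trace that skips straight to Finished is rejected.
assert not run_trace(["ServerHello", "Finished"])
```

A deviant-trace tester then only needs to generate traces that this table rejects and check that the implementation under test rejects them too.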
The authors tested several popular TLS implementations, all of which wrongly accepted messages that caused invalid state transitions. Interestingly, many of these invalid transitions had security implications, allowing client impersonation, server impersonation, and tampering with the negotiated cipher suite. Finally, the authors examined OpenSSL's code and discussed how these vulnerabilities came about.
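A recurring bug pattern behind such flaws is dispatching on message type alone, so each handler runs whenever its message arrives, regardless of whether the protocol state expects it. The sketch below is my own hypothetical illustration of that pattern, not actual OpenSSL code:

```python
# Hypothetical illustration (NOT actual OpenSSL code) of why
# per-message dispatch without a state check is dangerous: a peer
# can skip mandatory messages, e.g. finish a handshake with no key.

class Conn:
    def __init__(self):
        self.server_key = None
        self.handshake_complete = False

def handle_buggy(conn, msg_type, payload=None):
    # Each message is processed whenever it shows up, in any order.
    if msg_type == "ServerKeyExchange":
        conn.server_key = payload
    elif msg_type == "Finished":
        conn.handshake_complete = True  # no check that a key was ever set

def handle_checked(conn, msg_type, payload=None):
    # Safer pattern: validate the message against the protocol state
    # before acting on it.
    if msg_type == "Finished" and conn.server_key is None:
        raise ValueError("Finished before key exchange: abort")
    handle_buggy(conn, msg_type, payload)

# The buggy handler silently completes a handshake with no server key:
c = Conn()
handle_buggy(c, "Finished")
assert c.handshake_complete and c.server_key is None

# The checked handler aborts the connection instead:
c2 = Conn()
try:
    handle_checked(c2, "Finished")
    aborted = False
except ValueError:
    aborted = True
assert aborted
```

Skipping a key-exchange message while still reaching the "handshake complete" state is exactly the shape of several of the attacks the paper demonstrates.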
This was an exceptionally interesting read. The paper is a prime example of how excessive complexity weakens security. It also makes me wonder whether, in 2015, cryptographic agility in security protocols is still appropriate. When SSL/TLS was designed, we didn't know how to build secure primitives; MD5, RC4, and DES have all been broken since. Modern primitives such as AES and SHA-2, while not completely immune to attacks, have shown remarkable resistance to cryptanalysis. Given the quality of today's primitives, we might get away with hard-wiring a strong set of them into a security protocol.
The paper also makes me wonder why we still write complex network protocol parsers by hand. Adam Langley has shared some thoughts on that. I wish there were more practical parser generators for network protocols.
I was also wondering whether the code cleanup in recent OpenSSL forks (e.g., LibreSSL and BoringSSL) prevents any of the presented vulnerabilities.