25C3: More light shed on "denial of service" vulnerabilities in TCP
Reports of TCP (Transmission Control Protocol) vulnerabilities to DoS attacks have been circulating since the autumn, though these have mostly been based on speculation. At the 25th Chaos Communication Congress (25C3) in Berlin, Fabian Yamaguchi of Recurity Labs presented credible, tested attack scenarios against TCP, the fundamental internet protocol, along with initial tips for dealing with them. He said these bugs, which should not be ignored, mostly reside in implementations of TCP, but are also facilitated by the protocol's fundamental design.
The internet pioneers Robert Kahn and Vint Cerf ended up developing TCP on behalf of the US Defense Advanced Research Projects Agency (DARPA) in the 1970s, and it was first standardised in 1981. Yamaguchi said availability was a key feature. With TCP, the intelligence of applications is transferred to the network's end nodes, in accordance with the end-to-end paradigm. The protocol as a whole is based on decentralised structures. But the inventors of TCP failed to anticipate the possibility that teenagers, with access to the network from their bedrooms, might spend their time thinking up DoS attacks. Although there are functions in the protocol for identifying data sources and correctly stringing packets together, they are not security functions.
It has long been known that TCP is vulnerable to "reset" attacks, a vulnerability demonstrated by Paul Watson in 2004. To interrupt a transfer with such an attack, only the packets' transmission sequence number needs to be known. Sequence numbers are easier to guess than anticipated because of performance optimisations such as variable window sizes. The experts agreed that adding randomness by choosing a random source port for every connection is the right countermeasure; if this reminds you of the recent DNS problems, you are on the right track.
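The source-port countermeasure can be made visible in a short sketch. This is illustrative only: modern operating systems already randomise ephemeral ports on outgoing connections, and the function name and retry count below are this sketch's own choices, not anything presented in the talk.

```python
import random
import socket

def connect_with_random_source_port(host, port):
    """Open a TCP connection from a randomly chosen ephemeral source port.

    Illustrative sketch: kernels already randomise ephemeral ports, but
    binding explicitly makes the countermeasure visible. An attacker who
    wants to forge a reset must now guess the source port as well as the
    sequence number.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    for _ in range(10):  # retry in case the random port is already taken
        try:
            s.bind(("", random.randint(49152, 65535)))
            break
        except OSError:
            continue
    s.connect((host, port))
    return s
```

The 49152–65535 range is the IANA ephemeral-port range; widening the space an off-path attacker must search is exactly the fix that was applied to DNS resolvers.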
However, Yamaguchi said, that still does not make TCP a secure protocol. In principle, it is desirable for the protocol to allow as many simultaneous connections as possible, so developers retrospectively added a backlog function that was not in the original specification. This kicks in when too many connections threaten to overload the memory of the affected server, but this artificial limit can easily be overwhelmed. The traditional way of doing so is a SYN flood, which deliberately holds connections half-open and thereby consumes the available resources, though there are now well-known remedies for this kind of DoS attack.
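The backlog mentioned above is the queue bound a server passes to `listen()`. A minimal sketch, with the port choice and backlog value as assumptions of this example rather than anything from the talk:

```python
import socket

# The accept queue of a listening socket is bounded by listen()'s backlog
# argument. A SYN flood tries to exhaust the queue of half-open
# connections; kernel-side remedies such as SYN cookies
# (net.ipv4.tcp_syncookies on Linux) avoid keeping per-connection state
# until the three-way handshake actually completes.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(128)             # backlog: pending, not-yet-accepted connections
```

Raising the backlog only moves the artificial limit; the SYN-cookie approach sidesteps it by not storing state for half-open connections at all.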
A similar approach, connection flooding, is harder to cope with. Here connections are requested at a rate faster than the server can process them. The responsibility for countermeasures lies not with web administrators but with the developers of TCP-based services, who should ensure that an application will not accept, say, 5,000 connections at the same time. If they don't, this opens the door to attacks that exploit the quite long internal system timeouts during connection teardown (the states FIN_WAIT1, FIN_WAIT2, LAST_ACK, etc.), which can add up to something like ten minutes. With very short-lived connections, attackers can therefore tie up a lot of precious system memory. Implementation errors can make such flooding attacks even easier.
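An application-level cap of the kind described can be sketched with a counting semaphore around `accept()`. The cap value, timeout, and function names below are illustrative assumptions, not a recipe from the talk:

```python
import socket
import threading

MAX_CONNECTIONS = 100  # application-level cap (illustrative value)
_slots = threading.BoundedSemaphore(MAX_CONNECTIONS)

def handle(conn):
    """Serve one client (here: echo), always releasing the slot."""
    try:
        conn.settimeout(30)  # don't let idle peers pin resources for minutes
        while True:
            data = conn.recv(4096)
            if not data:
                break
            conn.sendall(data)
    except OSError:
        pass
    finally:
        conn.close()
        _slots.release()

def serve(srv):
    while True:
        conn, _ = srv.accept()
        if not _slots.acquire(blocking=False):
            conn.close()  # over the cap: shed load instead of queueing
            continue
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Shedding excess connections immediately, plus an explicit idle timeout, addresses both halves of the problem: the flood of new connections and the long teardown states that let short-lived connections linger.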
Another form of attack targets TCP's congestion control, the mechanism that limits how much data TCP delivers to the network; congestion-induced collapses had already occurred on the internet in the 1980s. An attacker can exploit this, for example, by simulating a gigabit line, causing the sender's TCP window to open wide and flooding the network. The reception of a packet can also be acknowledged before it actually arrives, driving the sending rate up to a dangerous level. Yamaguchi said that researcher Rob Sherwood has already published a study of these congestion problems and proposed countermeasures, but these have so far been ignored.
The biggest problem with this form of attack is that any genuine remedy would require changes to TCP implementations worldwide. A recipient of data would have to prove, by means of a checksum, that it had actually received specific packets. Yamaguchi said that although a backwards-compatible solution exists, it could throw up new problems and therefore requires further investigation. Looking ahead to future attacks, he again pointed to flow control, which can also be thwarted by sending a large data transmission all at once, which automatically means the next packet is not processed.
FX, Yamaguchi's boss, stressed that Dan Kaminsky's partial disclosure of vulnerabilities causing serious problems with the domain name system (DNS) was not very helpful. Those problems meant that security firms were suddenly dealing with sometimes panicky clients and could not initially help them. Therefore, following the rumours about the TCP vulnerabilities, he had asked the Phenoelit research department to shed some light on the issues.