
Loss of Fragments

"Loss of fragments leads to degraded performance: Reassembly of IP fragments is not very robust. Loss of a single fragment requires the higher level protocol to retransmit all of the data in the original datagram, even if most of the fragments were received correctly." [KM87]

The first argument that Kent and Mogul make with respect to reassembly is hard to dispute. Reassembly has certainly been a thorn in the side of stack developers since the earliest IP implementations, and several major network attacks, including those discussed in the earlier section, take advantage of less-than-robust reassembly routines in IP stacks. Two points can be made about this argument, however. First, the fact that implementations are not robust does not imply that they cannot be made robust; the cycle of denial-of-service attacks and frantic stack patches over the last few years seems to have steadily increased the robustness of the IP stacks of most major operating systems. Second, we suggest that the complexity of the reassembly routines in the IP stack can be drastically reduced.

Fragments can reach their destination endpoint in three orders. They can arrive in order, in which case the first fragment of the datagram arrives first and each subsequent fragment extends the packet sequentially. They can arrive in reverse order, in which case the last fragment arrives first but the fragments still arrive sequentially. Finally, they can arrive in a random, out-of-order fashion. We propose making the reassembly of in-order and reverse-order fragments robust and efficient, and we consider rejecting out-of-order fragments entirely. This can only be proposed with some real-world information about the kind of fragmentation that occurs on today's Internet.
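The following is a minimal sketch, our own illustration rather than code from any existing IP stack, of the simplified policy described above: a small state machine that accepts fragments arriving strictly in order or strictly in reverse order and rejects the entire datagram as soon as a fragment arrives in any other order. The structure and function names (reasm_ctx, reasm_add) are hypothetical, and offsets are tracked in bytes for clarity rather than the 8-byte fragment-offset units of the IP header.

    /* Hypothetical sketch: accept in-order or reverse-order fragment
     * arrival, reject the datagram on anything else. */
    #include <stdbool.h>
    #include <stdio.h>

    enum frag_dir { DIR_UNKNOWN, DIR_FORWARD, DIR_REVERSE };

    struct reasm_ctx {
        enum frag_dir dir;
        unsigned next_front;  /* next expected offset when assembling forward   */
        unsigned next_back;   /* offset of the earliest fragment seen (reverse) */
        bool rejected;
    };

    static void reasm_init(struct reasm_ctx *c)
    {
        c->dir = DIR_UNKNOWN;
        c->next_front = 0;
        c->next_back = 0;
        c->rejected = false;
    }

    /* Returns false, and marks the datagram rejected, on the first fragment
     * that arrives out of order.  'more' is the IP More-Fragments flag. */
    static bool reasm_add(struct reasm_ctx *c, unsigned offset, unsigned len, bool more)
    {
        if (c->rejected)
            return false;

        if (c->dir == DIR_UNKNOWN) {
            if (offset == 0) {              /* first fragment arrived first */
                c->dir = DIR_FORWARD;
                c->next_front = len;
            } else if (!more) {             /* last fragment arrived first  */
                c->dir = DIR_REVERSE;
                c->next_back = offset;
            } else {                        /* a middle fragment came first */
                c->rejected = true;
                return false;
            }
            return true;
        }

        if (c->dir == DIR_FORWARD && offset == c->next_front) {
            c->next_front += len;
            return true;
        }
        if (c->dir == DIR_REVERSE && offset + len == c->next_back) {
            c->next_back = offset;
            return true;
        }

        c->rejected = true;                 /* any other arrival order: give up */
        return false;
    }

    int main(void)
    {
        struct reasm_ctx c;
        reasm_init(&c);
        /* Reverse-order arrival, as seen in our NFS traffic: accepted. */
        printf("%d\n", reasm_add(&c, 1488, 200, false)); /* last fragment   */
        printf("%d\n", reasm_add(&c, 744, 744, true));   /* middle fragment */
        printf("%d\n", reasm_add(&c, 0, 744, true));     /* first fragment  */
        return 0;
    }

Because only two counters are kept per datagram, a policy like this avoids the hole lists and per-fragment bookkeeping that make general reassembly routines fragile.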

We generated a large amount of traffic to a single machine with an MTU of 576 bytes (see Figure 1). Traffic created by this configuration during web requests primarily within the United States produced less than 1% fragmented traffic, and those fragments were received in order. Additionally, NFS traffic generated by this configuration produced consistent reverse-order fragmentation (see below for details). In both the local Internet traffic configuration and the NFS traffic configuration, no fragments were received out of order. Of the web traffic generated from a broader distribution of Internet hosts, 2.4% was fragmented, and all but a fraction of a percent of those fragments were received in order. A total of 6 packets out of over 700,000 timed out because a fragment was lost. Of the fragmented packets that were not received in order, we believe all were due to buffer overruns in our logging system, as they were never rejected due to time-outs and all appeared in order apart from a fragment that was not recorded (see Figure 2).

Because of the extreme rarity, and possible nonexistence, of out-of-order fragments, we propose that as soon as a fragment is received out of order, the entire packet be rejected. This would allow implementers to concentrate on the efficiency and robustness of in-order and reverse-order reassembly routines. The suggestion can be considered highly controversial: previous research [Pax99] found that up to 2% of packets arrived out of order, and [Pax99] theorizes that out-of-order packets were the result of route changes during a session. Our data, however, supports further investigation into the idea of rejecting out-of-order fragments. Additionally, route changes are less likely to occur during the transmission of a single fragmented datagram than during network transmissions in general.

The second point made by Kent and Mogul with respect to fragment loss is that the loss of a single fragment requires the higher-level protocol to retransmit all of the data in the original datagram. This remains true. Because TCP must roll its window back to the last byte known to have been received, sending larger, fragmented packets may result in larger rollbacks of the TCP window and more wasted data. With the addition of selective acknowledgment and retransmission, the case for fragmentation becomes less tenuous even in lossier networks. However, in very reliable, high-speed networks where packet loss is virtually non-existent, the conditions that make fragmentation less efficient no longer exist.
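To illustrate the scale involved (the figures here are illustrative, not drawn from our measurements): a TCP segment of roughly 8 KB sent across a path with a 576-byte MTU is split into more than a dozen fragments, and the loss of any single fragment of a few hundred bytes forces the entire 8 KB segment to be retransmitted, even though the remaining fragments arrived intact.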

Additionally, since we believe that out-of-order fragments generally occur only when a fragment has actually been lost, we propose that a datagram be rejected immediately upon receipt of an out-of-order component fragment. By rejecting the packet at that point, we reduce the inefficiency at the TCP sender by allowing TCP to roll back its window at the earliest possible moment, thus preventing the transmission of unusable data. In addition, if the TCP receiver continually acknowledges the last fully received packet, the TCP sender can determine its own most efficient timeout. Neither of these two heuristics takes into consideration the tremendous variety in network speed and reliability, however.

One possibility is the addition of a TCP algorithm to determine the most efficient datagram size. In situations in which packet loss is extremely rare, the TCP datagram size would most likely equal the size of the TCP window. On the other hand, if packets were lost regularly, the TCP datagram size would most likely equal the path MTU. Simple experimentation or timing during transmission could tune this size on the fly. In reliable networks, we can take advantage of endpoint fragmentation, which leads to fewer upcalls and memory copies at the endpoints (see Future Research below for a full description). In unstable networks, we can take advantage of selective acknowledgments and try to reduce the amount of data lost during rollback.
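A minimal sketch of such an algorithm, under our own assumptions rather than a design fixed by this work, might probe upward from the path MTU while datagrams are delivered cleanly and retreat back toward the path MTU when one is lost. The names and the doubling and halving constants below are arbitrary illustrations of the "simple experimentation" mentioned above.

    /* Hypothetical sketch of on-the-fly datagram sizing: grow toward the
     * TCP window while datagrams are delivered cleanly, fall back toward
     * the path MTU when one is lost. */
    #include <stdio.h>

    struct dgram_sizer {
        unsigned path_mtu;  /* smallest size we will ever send  */
        unsigned window;    /* largest useful size (TCP window) */
        unsigned cur;       /* current datagram size in bytes   */
    };

    static void sizer_init(struct dgram_sizer *s, unsigned path_mtu, unsigned window)
    {
        s->path_mtu = path_mtu;
        s->window = window;
        s->cur = path_mtu;                  /* start conservatively */
    }

    /* Call once per datagram, reporting whether all of its fragments arrived. */
    static unsigned sizer_update(struct dgram_sizer *s, int delivered)
    {
        if (delivered) {
            s->cur *= 2;                    /* clean transfer: probe larger sizes */
            if (s->cur > s->window)
                s->cur = s->window;
        } else {
            s->cur /= 2;                    /* loss: retreat toward the path MTU  */
            if (s->cur < s->path_mtu)
                s->cur = s->path_mtu;
        }
        return s->cur;
    }

    int main(void)
    {
        struct dgram_sizer s;
        sizer_init(&s, 576, 65535);
        for (int i = 0; i < 8; i++)
            printf("%u\n", sizer_update(&s, 1));  /* lossless path: climbs to the window */
        printf("%u\n", sizer_update(&s, 0));      /* one loss: drops halfway back        */
        return 0;
    }

A multiplicative scheme like this is only one way to realize the idea; timing-based estimation during transmission, as suggested above, would serve equally well.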

The loss of data due to fragmentation did not prove significant during our extended testing on the Internet, even though we attempted to force drastic fragmentation and routed traffic through portions of the Internet that were highly loaded or considered less reliable. Intermediate fragmentation is rare. Out-of-order fragmentation appears to be nonexistent. While the loss of a fragment can mean more retransmission and more network traffic, our data does not show this happening in practice.

