The effect of reassembly on the recipient of a fragmented datagram is a concern both for processor utilization and for algorithm correctness. Reassembly algorithms are difficult to implement correctly. Several widespread security holes during 1997 and 1998 stemmed from incorrect implementations of fragment reassembly, including the Teardrop attack (in which one fragment was completely contained within another), the Bonk/Boink attack, and other, more OS-specific variants. All of these exploit the difficulty of correctly reassembling out-of-order, overlapping, or oddly sized fragments.
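The containment case exploited by Teardrop can be illustrated with a minimal sketch (not any particular stack's code); for clarity the offsets and lengths below are in bytes, whereas real IP fragment offsets are expressed in 8-byte units:

```python
def is_contained(existing, new):
    """Return True if fragment `new` (offset, length) lies entirely
    inside fragment `existing` -- the condition Teardrop exploited."""
    e_off, e_len = existing
    n_off, n_len = new
    return e_off <= n_off and n_off + n_len <= e_off + e_len

# Teardrop sent a second fragment that ended inside the first; naive
# reassembly code then computed a negative "remaining length" and crashed.
print(is_contained((0, 36), (24, 4)))    # second fragment fully contained
print(is_contained((0, 36), (30, 10)))   # merely overlapping, not contained
```

A robust reassembler must handle all three cases (disjoint, overlapping, contained), which is precisely where the early implementations went wrong.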
Although the discovery of these attacks prompted significant correctness improvements in the reassembly routines of several network stacks (Linux, BSD, and Microsoft among them), the possibility of other, undiscovered errors remains, as does the question of efficient reassembly. Reassembling fragments into a packet almost always involves copying each fragment's contents in memory at least once or twice (to strip off the headers and align the data). As network speeds approach and exceed memory speeds, efficient reassembly becomes increasingly important. Improving fragment reassembly methods will therefore be a primary concern as we propose a new approach to fragmentation: any proposal that produces larger numbers of fragments should be accompanied by changes that make the receiving network stacks more efficient.
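The per-fragment copy cost described above can be sketched as follows; this is an illustrative model, not an actual stack implementation, and assumes each payload has already had its header stripped (itself typically one copy):

```python
def reassemble(fragments, total_len):
    """Reassemble a datagram from (offset, payload) pairs.

    Each fragment's payload is copied into a contiguous buffer at its
    offset -- at least one memory copy per fragment, which is the cost
    that grows painful as link speeds approach memory speeds.
    """
    buf = bytearray(total_len)
    for offset, payload in sorted(fragments):
        buf[offset:offset + len(payload)] = payload  # one copy per fragment
    return bytes(buf)

# Fragments may arrive out of order; sorting by offset restores them.
parts = [(4, b"efgh"), (0, b"abcd")]
print(reassemble(parts, 8))  # b'abcdefgh'
```

Zero-copy techniques (e.g. keeping fragments in a descriptor chain rather than a flat buffer) aim to avoid exactly this per-fragment copy.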