
Taking a closer look

This situation arises if, for example, a large email is sent while a download is running. The send buffer is then continuously filled with data for an extended period. Assuming a connection with a maximum upstream speed of 128kbit/s, it takes around two seconds to send 32kB of data. If an ACK packet, required to keep the download running, is written to the end of a full buffer, it will therefore sit there for two seconds before it is even sent, plus further milliseconds before it reaches its destination.
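
The arithmetic behind this is straightforward; a quick back-of-the-envelope check using the figures from the example above (real buffer sizes vary by operating system and driver):

    # Delay an ACK suffers when it is queued behind a full send buffer.
    # Figures from the example above; purely illustrative.
    buffer_bytes = 32 * 1024      # data already queued ahead of the ACK
    upstream_bps = 128_000        # 128 kbit/s upstream

    queue_delay = buffer_bytes * 8 / upstream_bps
    print(f"ACK waits roughly {queue_delay:.1f} s before it even leaves the PC")
    # -> roughly 2.0 s, before any propagation delay is added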

Significantly better response times can be achieved by limiting the upload rate of file-sharing applications to, for example, 75 per cent of upstream capacity. This leaves the send buffer empty often enough that ACK packets from downloads, VoIP packets, online gaming traffic and the like are forwarded without any appreciable delay.
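
As a rough illustration of how much headroom such a limit leaves, assuming the 256kbit/s upstream of the ADSL connection used in the measurements further below:

    # Headroom left for ACKs, VoIP etc. when uploads are capped at 75 per cent.
    # 256 kbit/s upstream as in the test connection; purely illustrative.
    upstream_bps = 256_000
    upload_cap   = 0.75 * upstream_bps          # what file sharing may use
    headroom_bps = upstream_bps - upload_cap

    print(f"upload cap: {upload_cap / 8000:.0f} kB/s, "
          f"headroom: {headroom_bps / 8000:.0f} kB/s for small, urgent packets")
    # -> cap: 24 kB/s, headroom: 8 kB/s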

A traffic shaper does this job even better because it works more comprehensively, recognising all important packets and, in the simplest case, simply writing them to the front of the send buffer so that they overtake all other data. The optimal solution is to have multiple queues with different priorities, which are emptied at different rates so that buffered data is forwarded according to its priority.

Applied to a family's internet connection, a traffic shaper could thus give highest priority to Voice over IP data, followed by gaming and IPTV packets, with e-mail in last place. The remaining bandwidth would then be made available to file-sharing applications. These can even be throttled back further temporarily, so that a reasonable speed is achieved when sending holiday photos, for example. Ultimately, what gets prioritised is the user's decision: gamers may wish to prioritise gaming packets over VoIP traffic.
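
A minimal sketch of such a priority scheme in Python (the class names and ordering are illustrative, not taken from any particular traffic shaper):

    from collections import deque

    # One queue per traffic class, listed from highest to lowest priority.
    PRIORITIES = ["voip", "gaming_iptv", "email", "filesharing"]
    queues = {name: deque() for name in PRIORITIES}

    def enqueue(packet, traffic_class="filesharing"):
        queues[traffic_class].append(packet)

    def next_packet():
        # Strict priority: the highest non-empty queue is always served first,
        # so a VoIP packet overtakes everything else waiting to be sent.
        for name in PRIORITIES:
            if queues[name]:
                return queues[name].popleft()
        return None    # nothing left to send

A real shaper would additionally guarantee the lower classes a minimum share of the bandwidth so that they are not starved completely.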

Deinterleaving

With many providers, the latency of a DSL connection can be reduced by switching it to FastPath. By default, DSL connections use interleaving, a more robust transmission mode that improves error correction at the cost of increased latency; FastPath offers lower latency but less error correction. Some hardcore gamers even go so far as to order a DSL connection with FastPath just for gaming.

FastPath is only advisable on good-quality lines. On a noisy connection, latency will still be reduced, but packet losses will increase and error correction has to be taken care of by the TCP/IP stack itself. For TCP connections this means resending packets, which considerably reduces throughput for activities such as HTTP downloads. UDP, however, offers no retransmission at all, so in VoIP applications, for instance, the user perceives lost packets as dropouts.

Delegating the workload

A little-known feature of Windows XP is "task offloading", in which the operating system reduces the load on the processor by handing specific tasks over to network cards designed for this purpose. Microsoft has specified the calculation of IP and TCP checksums, various steps of IPsec encryption and TCP packet segmentation as tasks suitable for offloading.
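
Whether Windows is allowed to offload these tasks is controlled, among other things, by the DisableTaskOffload value in the registry. A small sketch for checking it, assuming a Python installation on the machine in question:

    import winreg

    # Task offloading is governed by the DisableTaskOffload value under the
    # TCP/IP parameters key: absent or 0 means offloading is permitted,
    # 1 means it has been switched off.
    KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "DisableTaskOffload")
            print("task offloading is", "disabled" if value == 1 else "enabled")
        except FileNotFoundError:
            print("value not set - task offloading enabled (default)")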

When calculating IP or TCP checksums, the processor has to run through all of the transferred data, which naturally gives rise to a certain processor load. In the case of IPsec, the network card can take over calculating MD5 and SHA-1 hash values as well as Triple DES encryption; 3Com's 10/100 Secure NIC and 10/100 Secure Server NIC cards do this, for example. This tangibly reduces processor load where IPsec data needs to be forwarded very rapidly.
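
The checksum itself is a simple ones' complement sum over 16-bit words (RFC 1071); a sketch of what either the card or the CPU has to compute for every packet:

    def internet_checksum(data: bytes) -> int:
        # Ones' complement sum of 16-bit words, as used for IP and TCP checksums.
        if len(data) % 2:
            data += b"\x00"                           # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF

Simple as it is, it touches every byte of every packet, which is exactly why it was considered worth offloading.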

In TCP packet segmentation, packets larger than the Maximum Transmission Unit (MTU) are automatically divided into smaller TCP segments and sent individually by the network card (e.g. Bigfoot Networks' Killer NIC).
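
Conceptually, the segmentation performed by the card looks something like this (a 1500-byte MTU and 40 bytes of IP/TCP headers without options are assumed here):

    def segment(payload: bytes, mtu: int = 1500, headers: int = 40) -> list[bytes]:
        # Split a large payload into chunks that, with IP and TCP headers added,
        # each fit into one MTU-sized packet.
        mss = mtu - headers                  # maximum segment size
        return [payload[i:i + mss] for i in range(0, len(payload), mss)]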

Some network cards, including even some onboard chips (such as the nForce4 Ultra), support just a subset of these tasks. Some tuning geeks may be tempted to get hold of something along these lines, but task offloading is only really worthwhile when substantial CPU capacity is being used for network activity, as in heavily used servers. For this reason such cards are more commonly encountered in the server sector, at companies such as Sun and IBM.

For home users and small office networks these functions make no difference to transfer rates. Tests have shown that, at current upstream bandwidths, today's processors can easily calculate checksums as a background activity. It should also be borne in mind that many LANs already run at gigabit speeds, i.e. around 100MB/s, while current hard drives barely exceed 70MB/s. The bottleneck in modern LANs therefore tends to be the storage medium rather than the network card.

Throughput throttles
Download      Upload        Download            Upload              Ping time
connections   connections   throughput (kB/s)   throughput (kB/s)   (ms)

Without traffic shaping
1             -             227.4               -                   188
-             1             -                   27.5                235
1             1             129.1               18.9                346
4             -             217.9               -                   290
-             4             -                   28.6                1060
4             4             153.9               18.9                544

With traffic shaping
1             -             218.7               -                   231
-             1             -                   27.2                186
1             1             206.5               18.6                282
4             -             213.7               -                   283
-             4             -                   27.8                235
4             4             221.8               16.3                378

The throughput of a TCP connection can drop dramatically depending on the number of uploads and downloads in progress. The extent to which packets can be delayed under such unfavourable conditions can be seen from the increase in ping times. Where a traffic shaper in the router or on the PC directs the traffic (here, for example, cFos), ping times are reduced and throughput comes much closer to the theoretical maximum. The ADSL connection used receives data at up to 2048kbit/s (downstream) and sends at up to 256kbit/s (upstream).
