In association with heise online

Mixed mode operation

These problems only increase when there are many people sharing an internet connection through a router, all running different applications. All these applications compete for bandwidth in an uncoordinated fashion, resulting in excessive delays and dropouts during VoIP, sluggish downloads and disproportionate increases in response times for games.

The problems aren't with the internet backbones - they have sufficient spare capacity. The bottleneck is between the end user and the ISP - the "last mile". It is this local loop to the provider which has the lowest bandwidth. Consequently data can pile up on either side of this connection.

This effect is particularly noticeable for extremely asymmetric connections such as ADSL2+. The downstream channel achieves a throughput of up to 16Mbit/s, whilst the upstream channel manages at most 1Mbit/s. If a user downloads something at maximum speed on such a connection, 50 per cent of upstream channel capacity is already filled by ACK packets, so that as soon as a further application starts to send packets, the expected symptoms occur.
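The rough arithmetic behind that figure can be sketched as follows. The segment size, the delayed-ACK ratio and the ATM framing overhead are assumptions for illustration (on ADSL a 40-byte ACK is typically carried in two 53-byte ATM cells once PPP/AAL5 framing is added); with them, ACK traffic alone occupies roughly half the 1Mbit/s upstream:

```python
# Rough estimate of upstream capacity consumed by TCP ACKs on an
# asymmetric ADSL2+ line (16 Mbit/s down, 1 Mbit/s up, as in the text).
DOWNSTREAM_BPS = 16_000_000
UPSTREAM_BPS = 1_000_000
MSS = 1500            # bytes per downstream segment (assumption)
ACK_WIRE_BYTES = 106  # 40-byte ACK -> two 53-byte ATM cells (assumption)

# Delayed ACKs: one acknowledgement per two full-sized segments.
acks_per_second = DOWNSTREAM_BPS / 8 / (2 * MSS)
ack_bps = acks_per_second * ACK_WIRE_BYTES * 8
share = ack_bps / UPSTREAM_BPS
print(f"ACK traffic: {ack_bps / 1000:.0f} kbit/s = {share:.0%} of upstream")
```

With these assumptions the ACK stream comes to about 565kbit/s, i.e. a good half of the upstream channel, before any application of the user's own starts sending.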

It can thus be advantageous to limit the upload speed for specific applications. Many file-sharing applications, for example, include functions for limiting bandwidth, and if more than one such program is being used, e.g. eMule and BitTorrent, users need to allocate each program a fixed part of the total bandwidth. Unfortunately, many other applications, such as FTP clients or email programs, do not offer such options.
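The bandwidth limiters built into such programs are typically token buckets: tokens accumulate at the configured rate, and each chunk of data to be sent consumes tokens first. A minimal sketch (the 10Kbyte/s rate and chunk size are arbitrary examples, not taken from any particular client):

```python
import time

class TokenBucket:
    """Token-bucket limiter for capping upload bandwidth.

    Illustrative sketch only; real file-sharing clients implement
    this internally with their own tuning.
    """
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, nbytes):
        """Block until nbytes may be sent, then deduct them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Cap uploads at 10 Kbyte/s; call consume() before each socket send:
bucket = TokenBucket(rate_bytes_per_s=10_240, burst_bytes=4_096)
```

Allocating each program a fixed share of the line then amounts to giving each its own bucket whose rates sum to less than the upstream capacity.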

The needs of VoIP

For internet radio or IPTV services, data must be played back at a constant rate. Packets do not, however, reach the recipient at a constant rate, as they may take different routes or intervening routers may be temporarily at full capacity.

[Image: Streaming or VoIP data is not received at a constant speed. Such data is therefore buffered after being received and then replayed at a constant rate.]
Media players therefore buffer the incoming packets and read them at a constant rate. The buffer size is selected by the player such that it can cope with dropouts of several seconds. If a delay is too long, the buffer will be emptied and music will drop out or video will become jerky. The obvious response, to prevent such situations from arising, would be to increase buffer size.

Enlarging the buffer has just one slight disadvantage when streaming - it increases latency, and the delay before a video starts playing is greater than with a smaller buffer.
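The trade-off can be made concrete with a small simulation: packets arrive at irregular times but must be played out on a fixed schedule, offset by the buffer size. The 20ms frame interval and the arrival times below are invented example figures:

```python
# Sketch: buffer size trades dropouts against start-up delay.
# Packets arriving with variable network delay are replayed on a
# constant 20 ms schedule, delayed by the buffer depth.

FRAME_MS = 20

def dropouts(arrival_ms, buffer_ms):
    """Count frames that miss their playout deadline for a given buffer."""
    missed = 0
    for i, arrived in enumerate(arrival_ms):
        playout = buffer_ms + i * FRAME_MS  # constant-rate schedule
        if arrived > playout:
            missed += 1
    return missed

# Example arrival times (ms after the stream starts), with jitter:
arrivals = [5, 22, 70, 61, 95, 180, 121, 145, 160, 300]
for buf in (40, 100, 200):
    print(f"{buf} ms buffer -> {dropouts(arrivals, buf)} late frames")
```

With these figures a 40ms buffer loses two frames, a 200ms buffer none - but the larger buffer also adds 200ms before playback even begins.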

With Voice over IP, however, low latencies are essential. Latencies greater than 200ms are considered problematic, as it becomes unclear when the other person has finished speaking, so that people end up inadvertently talking across each other.

So on the one hand the largest possible buffer is desirable in order to avoid dropouts, while on the other a small buffer is required to achieve short response times. Various procedures offer a way out of this quandary. The most elegant approach is to prioritise the VoIP packets, so that other packets delay them as little as possible.
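At its core, prioritisation means the router dequeues waiting VoIP packets before any bulk traffic, however long the bulk packets have been queued. A strict-priority queue sketch (not a real router implementation; the packet labels are invented):

```python
import heapq

class PriorityScheduler:
    """Strict-priority packet queue: VoIP (priority 0) always leaves
    before bulk traffic (priority 1)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # preserves FIFO order within a priority class

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriorityScheduler()
q.enqueue("bulk-1", priority=1)
q.enqueue("voip-1", priority=0)
q.enqueue("bulk-2", priority=1)
print(q.dequeue())  # voip-1 leaves first despite arriving second
```

Because a VoIP packet then only ever waits behind at most one bulk packet already on the wire, its queueing delay stays small even when the link is saturated.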

To do this an end-to-end connection with defined bandwidth and latency must be negotiated, and all routers on the route must participate, a process called quality of service - QoS. This requires not just a global prioritisation scheme, but also that the routers on the backbone buffer and prioritise a fair bit of data - and therefore have significantly more memory and computing power than is currently the case. Nevertheless, some carriers are already implementing some QoS internally and giving VoIP packets preference during periods of congestion.
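From an application's point of view, taking part in such a scheme amounts to marking its packets so QoS-aware routers can recognise them. On IP this is done via the DSCP bits in the TOS byte; the standard codepoint for voice traffic is "Expedited Forwarding" (DSCP 46). A sketch using the ordinary socket API (whether the mark is honoured beyond the first hop depends entirely on the carrier):

```python
import socket

# Mark outgoing VoIP packets as "Expedited Forwarding" (DSCP 46)
# so QoS-aware routers can prioritise them.
TOS_EF = 0xB8  # DSCP 46 (binary 101110) shifted into the IP TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# sock.sendto(rtp_payload, (gateway_addr, port))  # hypothetical target
sock.close()
```

Many home routers apply their own prioritisation to such marked packets even when the backbone ignores the marking, which already helps on the congested last mile.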

Effect of RWIN
Measurement results with RAS on T-DSL 1024/128 with Fastpath
RWIN (bytes)   One download    Ping time   One upload      Ping time   One download +          Ping time
               (Kbyte/s)       (ms)        (Kbyte/s)       (ms)        one upload (Kbyte/s)    (ms)
Default        122.7           105         15.7            397          33.9 / 14.4            431
28,000         122.8           196         15.6            414          50.1 / 14.0            458
65,535         122.8           476         15.7            412          94.6 / 12.8            546
26,214         122.7           639         15.6            416          39.1 / 14.5            462
52,428         122.7           638         15.8            426         105.6 / 12.6            627
10,485         122.7           636         15.9            430          37.1 / 14.7            467

When data is uploaded and downloaded simultaneously, each direction also has to carry the acknowledgements for the other, reducing the net data transfer rate. Ping times are measured in ms to the next node; reply packets from the test server take 34ms to arrive (round-trip time).
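The table's pattern follows from the bandwidth-delay product: a receive window much larger than it only lets data queue up at the bottleneck, which shows up as rising ping times. Using the figures from the test setup above (1024kbit/s downstream, 34ms round-trip time):

```python
# Sketch: the RWIN needed to keep a link full is the
# bandwidth-delay product (figures from the test setup above).
downstream_bps = 1024 * 1000  # T-DSL 1024/128 downstream
rtt_s = 0.034                 # measured round-trip time

bdp_bytes = downstream_bps / 8 * rtt_s
print(f"Bandwidth-delay product: {bdp_bytes:.0f} bytes")
```

At roughly 4.4Kbytes, the bandwidth-delay product is far below even the default RWIN - which is why every row of the table reaches the full download rate, while the larger windows buy nothing but longer queues and worse ping times.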

Permalink: http://h-online.com/-747378