[Fwd: Re: [LARTC] RED/GRED implementation for InBound Traffic Control (from ISP)]
Sat, 10 Jul 2004 12:21:22 +0100
Sorry all, this is the missing email that I referred to previously. I
inadvertently sent it only to the original poster rather than the list.
Sorry for the wasted b/w for those who don't care...
-------- Original Message --------
Subject: Re: [LARTC] RED/GRED implementation for InBound Traffic
Control (from ISP)
Date: Fri, 09 Jul 2004 22:04:30 +0100
From: Ed Wildgoose <email@example.com>
To: Ow Mun Heng <Ow.Mun.Heng@wdc.com>
>I also want to know, just how efficient is this algorithm. AFAIK,
>inbound traffic control can't really be achieved without losing
>In some of the howtos I've read, one guy had to limit his download speed to
>1/2 his bandwidth to actually control it. And he was saying that the
>only way to efficiently control inbound traffic is to
>use TCP window shaping, which has no OSS implementation.
>Can anyone please shed light on this?
The issue is not as alarming as you think. The point is that there is a
buffer on the ISP end. If this buffer fills up then the queueing is
usually first in, first out, i.e. you have no way to prioritise important
traffic ahead of the regular traffic. If you can control your ISP's
router then this isn't an issue, of course.
So the solution is to throttle incoming traffic to 99.9% of your total
incoming bandwidth. Well, actually, since you have no control over who
sends you data, this only works in steady state, so perhaps you should
make it 95% or 90%. It depends whether you mind the odd blip where
someone starts sending you traffic and it takes a second or so while you
instruct other senders to slow down. In the meantime you will briefly
exceed your target rate and lose control of the queue.
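As a rough sketch, the classic way to do this throttling is an ingress
policer with tc. The interface name and rates below are placeholders for
illustration (assuming a 512Kbit/s line policed to about 90%), not a
tested config for any particular setup:

```shell
# Sketch: drop inbound traffic arriving faster than ~90% of a
# 512Kbit/s line, so senders back off before the ISP's FIFO
# buffer fills. "eth0" and the rates are assumptions.
tc qdisc add dev eth0 handle ffff: ingress

# Match all IP traffic and police it to 460Kbit/s.
tc filter add dev eth0 parent ffff: protocol ip u32 \
   match u32 0 0 \
   police rate 460kbit burst 10k drop flowid :1
```

Dropping at the policer is crude (it can't reorder or prioritise), but it
keeps the queue on your side rather than in the ISP's buffer.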
Now, the reason for dropping to really low numbers (50%) is that most of
the throttling filters work on bandwidth as consumed over a normal
ethernet LAN. However, you might be using ADSL. In that case you pay not
for 512Kbit/s of IP bandwidth, but for 512Kbit/s of ATM bandwidth, and
unfortunately the relationship between the two is slightly complicated.
First you add 10 bytes to every IP packet for PPP overheads, then some
further overhead if you are on PPPoE, then you break the result up into
48 byte chunks and add a 5 byte header to each. That tells you how much
ATM bandwidth you used.
To save you the headache of those calculations, consider sending a 49
byte packet. It clearly needs to be split into two 48 byte chunks, and
each chunk gets a 5 byte header, giving two 53 byte ATM cells; so that 49
byte packet takes up 53*2 = 106 bytes of bandwidth on your ATM line.
With really large IP packets, on the other hand, the overhead is
proportionally smaller: a partly wasted 53 byte cell matters little when
you are sending 1500 bytes at a time.
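That arithmetic can be sketched in shell. This assumes a flat 10 bytes of
PPP overhead per packet, as above; the exact overhead depends on your
encapsulation (PPPoA vs PPPoE etc.):

```shell
#!/bin/sh
# ATM bytes on the wire for an IP packet of a given size,
# assuming ~10 bytes of per-packet PPP overhead.
atm_bytes() {
  ip_bytes=$1
  payload=$(( ip_bytes + 10 ))      # add per-packet PPP overhead
  cells=$(( (payload + 47) / 48 ))  # round up to whole 48-byte cell payloads
  echo $(( cells * 53 ))            # each cell is 53 bytes on the wire
}

atm_bytes 49     # -> 106  (2 cells for a 49-byte packet)
atm_bytes 1500   # -> 1696 (32 cells, only ~13% overhead)
```

Note the worst case: a tiny packet can more than double in size, while a
1500 byte packet wastes comparatively little.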
So big FTP transfers with large IP packets don't waste too much, but if
you have a load of SSH users, or some P2P users, or anything else that
spits out tons of small packets, then the IP bandwidth might be far less
than the ADSL bandwidth. Hence some people throttle right back to be
sure they keep control of the inbound connection.
However, if you scroll back a few weeks you will find an experimental
patch from me which adds the correct calculations to HTB and other
qdiscs. At some point I will code it up in a much neater way, but in
the meantime it works really well as it is. So now you can say "throttle
me to 500Kbit/s" and it throttles to that much ATM bandwidth, regardless
of how much IP bandwidth that equates to.
Clear as mud?