[unisog] Blocking inbound Internet traffic

John Kristoff jtk at depaul.edu
Thu May 8 23:45:36 GMT 2003

On Thu, 8 May 2003 13:17:29 -0400 (EDT)
Sara Smollett <sara at simons-rock.edu> wrote:

> We're looking into rate limits per IP (rather than port).  I'm curious
> to hear from people who are doing something like this.  How have you
> implemented this?  How effective has it been?  What do your users
> think? etc.

Rather than hard rate limits, we've set up our border router to
preferentially drop above-threshold TCP packets from any source IP that
is transmitting at or above a set rate.  The preferential dropping we do
is based on the random early detection (RED) active queue management
algorithm.

As an example of this technique, say you have a 100 Mb/s upstream link
and want to set a threshold of 1 Mb/s TCP per IP.  When an internal host
starts generating in excess of 1 Mb/s, the excess traffic will have a
higher likelihood of being dropped when the outbound link is nearing
capacity.  In our case, this is done by setting a more aggressive RED
drop profile for those TCP packets per source IP that are in excess of
some threshold.
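
The per-source preferential drop described above can be sketched roughly
as follows.  The profile shapes, the 1 Mb/s threshold and the function
names here are illustrative assumptions, not the actual router
configuration:

```python
import random

# Hypothetical RED drop profiles: (queue fill fraction, drop probability)
# points, linearly interpolated.  Sources measured above the per-IP
# threshold get the more aggressive profile; below-threshold sources get
# the normal one.  With an empty queue, nothing is dropped either way.
NORMAL_PROFILE = [(0.50, 0.0), (0.75, 0.05), (0.90, 0.20), (1.00, 1.0)]
AGGRESSIVE_PROFILE = [(0.25, 0.0), (0.50, 0.25), (0.75, 0.75), (1.00, 1.0)]

THRESHOLD_BPS = 1_000_000  # 1 Mb/s per source IP, as in the example above


def drop_probability(profile, fill):
    """Interpolate the drop probability for the current queue fill (0..1)."""
    prev_fill, prev_prob = 0.0, 0.0
    for pf, pp in profile:
        if fill <= pf:
            span = pf - prev_fill
            frac = (fill - prev_fill) / span if span else 1.0
            return prev_prob + frac * (pp - prev_prob)
        prev_fill, prev_prob = pf, pp
    return 1.0


def should_drop(src_rate_bps, queue_fill, rng=random.random):
    """Pick a profile from the source's measured rate, then roll the dice."""
    profile = AGGRESSIVE_PROFILE if src_rate_bps > THRESHOLD_BPS else NORMAL_PROFILE
    return rng() < drop_probability(profile, queue_fill)
```

Note how a host over threshold is only punished when the outbound queue
is actually filling; with the link lightly loaded, its excess traffic
still gets through.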

Note: this is only done with TCP (or potentially other congestion
friendly protocols).  The reason is that drops should signal a sending
TCP host to slow down.  With TCP usually responsible for 90% to 95% of
total traffic in most networks, this strategy appears to work well. 
Other protocols can be hard rate-limited on a per source IP basis or in
the aggregate.
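
A hard per-source rate limit for the non-TCP case is typically a token
bucket.  A minimal sketch, with illustrative rate and burst numbers:

```python
import time
from collections import defaultdict


class TokenBucket:
    """Simple per-source token bucket: a hard rate limit for non-TCP traffic."""

    def __init__(self, rate_bps, burst_bits, now=None):
        self.rate = rate_bps          # refill rate in bits per second
        self.capacity = burst_bits    # maximum burst size in bits
        self.tokens = burst_bits      # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def allow(self, packet_bits, now=None):
        now = time.monotonic() if now is None else now
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # forward the packet
        return False      # hard drop: no congestion signal is expected


# One bucket per source IP; the figures here are illustrative.
buckets = defaultdict(lambda: TokenBucket(rate_bps=1_000_000,
                                          burst_bits=100_000))
```

Unlike the RED approach, this drops above-rate traffic even when the
link is idle, which is exactly the unused-capacity problem mentioned
below.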

A nice property of this technique is that as long as there is available
capacity, all traffic goes through, thereby maximizing the investment in
the link/rate you have with your upstream.  The tricky part is
configuring the equipment to do this, which includes getting the
algorithms set up just right, the thresholds set high enough and so on. 
However, this technique can help avoid the constant battle to upgrade
packet shaper boxes, change filter rules to keep up with the latest
KaZaA tricks, educational/intellectual freedom issues and additional
network complexity that tends to break things or makes troubleshooting
difficult.

We considered setting up port blocks or port-based rate limiting, but we
knew this would be an exercise in futility, so we didn't go very far with
that. We weren't very interested in the packet shaper box technologies,
because we tend to like simple networks and they just get in the way. 
They also didn't seem like a very good long term strategy for many of
the reasons people have already mentioned.  We tried setting hard rate
limits on aggregate netblocks (e.g. residence hall networks), but this
does not work very well, especially for those who get stuck in that
always full aggregate pool.  We also tried setting hard rate limits per
source IP for a portion of our netblock (e.g. residence hall networks
again).  This works a little better, but there are still some potential
problems.  For example, any unused capacity can't be used, even if a
host legitimately needs to generate high rates of traffic.

Aggressive dropping for above threshold packets per source IP seems to
have a number of good properties.  Hosts at or below the threshold get
decent performance if the threshold is set at a reasonable level, and
latency tends to be good for everyone since outbound queues are being
managed before they fill.

If it can be implemented, it's relatively easy to maintain, but getting
good statistical data for analysis and further tuning can be difficult. 
That is something we're working on improving.  This strategy also seems
to help with high traffic loads more generally, and per source IP
strategies may have nice properties for slowing down intentional or
unintentional DoS attacks as well.

There is one potential caveat to be aware of in the future.  If end
hosts begin using multiple IPs (multi-homing) or users start deploying
additional hosts (clusters) to gain an advantage over a single IP host,
then per source IP strategies may not be good enough.  I've been
wondering if smaller subnets (even as small as /32) with port security
and possibly per link layer thresholds (wireless is kind of a problem
though) will be something people start considering in the future to help
address this and other sorts of related issues (e.g. security, IP
multicast).  My guess is that per source IP strategies are probably
pretty good for a while though.
