[unisog] Blocking inbound Internet traffic

John Kristoff jtk at depaul.edu
Fri May 9 21:52:34 GMT 2003


Following up publicly to the list with permission.

On Fri, 09 May 2003 09:58:50 -0700
"Ben Curran" <bdc1 at humboldt.edu> wrote:

> Would you mind sharing some code snippets from your router? (I'm
> assuming Cisco?) This sounds like a great strategy, although I still
> like the idea of an edge traffic shaper (Packeteer) if for nothing
> else than traffic classification. Also, how in the world do you
> garner/gather stats on appropriate threshold levels? (I think you
> said this is something you're still working on) Do you use the
> threshold filters with logging? Netflow etc.?

Sorry for leaving out some details.  The way we're currently doing it is
not the way we'd really like to do it.

Just for some background... the way we'd really like to do it is to have
thresholds defined as close to the edge host as possible, but still
within the network devices so controls cannot be subverted.

So for instance on each edge layer 2 switch port, frames would be tagged
using 802.1p based on some average transmission rate from the host to
the network.  Frames at or above the threshold get tagged with a higher
'drop preference' than those frames below the threshold.  When frames
hit their first hop layer 3 IP router, those 802.1p tags get mapped
into layer 3 tags (IP ToS/DiffServ).  Then at any place where we want to
actively manage high-rate sending hosts (e.g. with RED drop profiles),
we could make decisions based on the tags.  Frames with a higher drop
preference (excess traffic from hosts generating higher traffic loads)
can be dropped more aggressively than those under the threshold.  This
would be *really* nice in my opinion, but I had no luck generating
interest in this capability for edge switches, either from vendors or
from other network folks.  I haven't studied the latest features of
edge switch gear recently, though, so it may be able to do this now.

Since the method above was impossible with existing equipment, we
instead implemented the equivalent strategy using only layer 3
information and border router(s).  It works almost as well, with
probably the biggest potential drawbacks being scaling and state issues.
That is, scaling it to a large number of unique sources may be a problem
depending on your network size.  Maintaining the average transmission
rate for each unique source at the border translates to quite a bit of
network intelligence and a potential single point of failure.  Then
again, if the box goes away, it is a fail-soft scenario (assuming you
have another border :-).

Our current implementation is done using JunOS on Juniper.  Imagine a
1000 Mb/s LAN interface to the internal network and an external OC3c
ATM-based interface to the rest of the world.  The configuration would
look something like this (I'm going to abbreviate and use wildcards,
since your config may differ slightly):

interfaces {
    at-<*> {
        atm-options {
            linear-red-profiles {
                red-queue queue-depth 1k high-plp-threshold 10 \
                                low-plp-threshold 90;
            }
            scheduler-maps {
                egress {
                    forwarding-class assured-forwarding {
                        linear-red-profile red-queue;
                    }
                    forwarding-class best-effort {
                        linear-red-profile red-queue;
                    }
                    forwarding-class expedited-forwarding {
                        linear-red-profile red-queue;
                    }
                    forwarding-class network-control {
                        linear-red-profile red-queue;
                    }
                }
            }
        }
        unit <*> {
/* JunOS requires shaping to be configured on the ATM VC, but if you
   don't actually want to shape, just set the peak to the maximum */
            shaping {
                vbr peak 135600000 sustained 135600000 burst 4k;
            }
            atm-scheduler-map egress;
        }
    }
}
firewall {
    policer X-bps {
        filter-specific;
        if-exceeding {
            bandwidth-limit X;
            burst-size-limit Y;
        }
        then loss-priority high;
    }
    family inet {
        prefix-action threshold {
            policer X-bps;
            count;
            filter-specific;
            subnet-prefix-length n;
            source-prefix-length 32;
        }
        filter lan-ingress {
            term control-aggressive {
                from {
                    source-address {
                        x.x.x.x/n;
                    }
                    protocol tcp;
                }
                then {
                    sample;
                    prefix-action threshold;
                    next term;
                }
            }
        }
    }
}

A DS3 or Ethernet-based link to the rest of the world would change the
configuration fairly significantly (probably in good ways, since the
non-ATM configuration is more flexible).
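
For instance, on an Ethernet interface you would drop the ATM
linear-red-profiles and express the same idea with class-of-service
RED drop profiles instead.  A minimal sketch; the profile names and
fill levels here are placeholders, not tested values:

class-of-service {
    drop-profiles {
/* drop high loss-priority packets early, low loss-priority late */
        aggressive {
            fill-level 10 drop-probability 100;
        }
        gentle {
            fill-level 90 drop-probability 100;
        }
    }
    schedulers {
        be-sched {
            drop-profile-map loss-priority high protocol any \
                drop-profile aggressive;
            drop-profile-map loss-priority low protocol any \
                drop-profile gentle;
        }
    }
    scheduler-maps {
        egress {
            forwarding-class best-effort scheduler be-sched;
        }
    }
    interfaces {
        ge-<*> {
            scheduler-map egress;
        }
    }
}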

One of the big keys to the config above is the prefix-action term.  It
allows the router to perform the identified action (watch for traffic at
rates X or higher and set the loss-priority to high) for each
'source-prefix-length', which can be a /32.  Juniper has a limit of
65535 prefix-specific actions it can maintain, and a full /16 already
contains 65536 /32s, one more than that limit.  So if you have a /16 or
more of address space, you might have to do this on just a subset of
your space, or set the source-prefix-length to cover a group of hosts,
as in the sketch below.
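
Each prefix-action instantiates one counter/policer per
source-prefix-length chunk of the subnet, i.e.
2^(source-prefix-length - subnet-prefix-length) of them.  Grouping
four hosts per /30 therefore keeps an entire /16 at 2^(30-16) = 16384
actions, well under the limit.  A sketch of that variant, which would
sit under family inet alongside the original (the threshold-agg name
is made up):

    prefix-action threshold-agg {
        policer X-bps;
        count;
        filter-specific;
/* 2^(30-16) = 16384 policer instances cover the whole /16 */
        subnet-prefix-length 16;
        source-prefix-length 30;
    }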

I'm not sure, but I seem to recall Cisco having a feature similar to
the Juniper prefix-action in recent versions of IOS.  It essentially
makes the policing rules much easier to write.  Otherwise, you might
need up to 65536 rules if you wanted one policer per source IP in a
/16 netblock.

The other component of the configuration above is the red-queue
profile, particularly its queue-depth and threshold settings.  This is
one piece that I don't have good advice for at this time, but you
probably want your high-plp-threshold traffic to start dropping before
the low-plp-threshold stuff.  With the profile above, high
loss-priority cells begin to be dropped once the queue is 10% full,
while low loss-priority cells are not dropped until it reaches 90%.

Gathering stats and analyzing the data for the configuration above is
hard, because ATM interfaces on Juniper aren't as friendly for this
sort of thing as a DS3 or Ethernet interface might be.  There are some
counters that tell you drops are occurring, but I don't have any good
info on how to best monitor this stuff yet.  We do cflowd-based
monitoring, which helps a lot.  How to see what's going on effectively,
in order to help tune the thresholds and queue-depths, will hopefully
be something I have more to say about in a few months.  Empirical
evidence shows that performance and latency are quite good even at
high loads with a configuration like the one above, so for now that is
reason enough to stay this course.
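
For reference, the sample action in the lan-ingress filter above is
what feeds the cflowd export.  A minimal sketch of that side, assuming
a collector at 192.0.2.1 (the address, rate, and port here are only
placeholders):

forwarding-options {
    sampling {
        input {
            family inet {
/* sample 1 in 100 packets matched by the filter's sample action */
                rate 100;
            }
        }
        output {
            cflowd 192.0.2.1 {
                port 2055;
                version 5;
            }
        }
    }
}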

John


