[unisog] SP*M Detection Methods & Processes

Bill Martin BMARTIN at luc.edu
Fri Sep 22 15:40:41 GMT 2006


Well, for us it is that time of year again to experience the "Goldilocks"
syndrome with our spam detection process.  I'm sure you are all familiar
with it...  you know, the "it's too much", "it's not enough", and still
others saying "it's just right"....  The unfortunate part is that, for
us, #1 and #2 together outnumber #3, so we find ourselves once again
evaluating and comparing....

Given that, we are looking to compare what we are doing with other
universities, both large and small.

Our current architecture consists of multiple gateways running Amavis,
which hands off to SpamAssassin and an anti-virus package, and of course
our MTA:

   MTA
    +--> Amavis
           +-- SpamAssassin
           +-- AntiVirus
   MTA <-----+
    |
    v
   Delivery
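
For anyone comparing notes, the MTA <-> Amavis loop above is mostly glue
configuration.  A minimal sketch, assuming the conventional amavisd-new
ports (10024 in, 10025 back out) and a Postfix MTA, neither of which our
setup necessarily matches:

   # amavisd.conf -- the MTA -> Amavis -> MTA hand-off (ports assumed)
   $inet_socket_port = 10024;                   # MTA hands mail to amavisd here
   $forward_method = 'smtp:[127.0.0.1]:10025';  # re-inject scanned mail into the MTA
   $notify_method  = $forward_method;           # notifications take the same path

   # The matching Postfix half would be something like:
   #   main.cf:   content_filter = smtp-amavis:[127.0.0.1]:10024
   #   master.cf: a 127.0.0.1:10025 listener that clears content_filter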

SA is tightened up quite a bit (kill anything over 10, deliver 5.0-9.9
but TAG the subject line, and pass anything below 4.5), yet we have had
a very limited number of false positives.  So few, as a matter of fact,
that the spam caught easily justifies the handful of messages that are
falsely tagged.  We do have Bayes enabled, as well as a number of other
lists.
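
In amavisd-new terms, those cutoffs map naturally onto its three standard
levels, with the 4.5-5.0 band presumably getting X-Spam headers but no
subject tag.  A sketch of that reading (the tag string and final
disposition below are illustrative, not necessarily what we run):

   # amavisd.conf -- one way the thresholds above could be expressed
   $sa_tag_level_deflt  = 4.5;   # add X-Spam-* headers from 4.5 up
   $sa_tag2_level_deflt = 5.0;   # rewrite the Subject from 5.0 up
   $sa_kill_level_deflt = 10;    # "kill" anything scoring over 10
   $sa_spam_subject_tag = '***SPAM*** ';  # illustrative tag text
   $final_spam_destiny  = D_DISCARD;      # or D_REJECT/D_BOUNCE, per policy
   # Bayes itself is switched on in SA's own local.cf (use_bayes 1, etc.)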

Additionally, we add custom values for mail from sources we know to be
good or bad and adjust their scores accordingly.  We also block known
bad hosts/senders via our MTA.
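
For those custom good/bad source values, one natural hook in this stack
is amavisd-new's @score_sender_maps (SA's own whitelist_from and
blacklist_from would be another); the addresses below are purely
illustrative:

   # amavisd.conf -- illustrative per-sender score adjustments
   @score_sender_maps = ({        # by-recipient lookup table
     '.' => [                     # '.' = applies to all recipients
       { 'newsletter@known-good.example' => -3.0,  # trusted source: lower score
         '.spammy-network.example'       =>  4.0,  # bad neighborhood: raise it
       },
     ],
   });

   # Blocking known-bad hosts/senders outright happens earlier, at the MTA,
   # e.g. a Postfix access(5) table on check_client_access/check_sender_access.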

So, given this architecture (which, from what I see, is very close to
what some companies are doing with their appliances), how does it
compare to what others are doing?

Any input would be appreciated . . . .

