[unisog] SP*M Detection Methods & Processes

Stasiniewicz, Adam stasinia at msoe.edu
Tue Sep 26 02:09:38 GMT 2006


At my current employer we use a commercial appliance (email me off-list
for the name).  To support 10,000 users, one person spends between 15
and 30 minutes a day on spam-related support.  99% of that time goes to
investigating false positives/negatives; the remaining 1% is spent on
general maintenance of the box (updates, rule tweaking, checking
functionality, etc.).  Usually a false positive/negative investigation
ends up being either user error or a problem with an external entity's
email system (e.g. they have an open relay and got added to an RBL).  We
recently had a bit of a problem with image spam, but our vendor quickly
released an update to deal with it; right now I see us catching about
98% of all image spam, including the animated GIF variation.

In terms of total email volume, 94% of all inbound mail is flagged as
spam.  Of that, the average person sees less than 5% of their inbound
spam, and less than 1% of legitimate mail is incorrectly filtered.

The product works on the same premise as SA does, except that the
majority of the settings and detection engines are tuned by the vendor
automatically.  In fact, we do very little adjusting of the
out-of-the-box settings, aside from the occasional need to
whitelist/blacklist a troublesome email server.
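
For sites doing the same thing with plain SpamAssassin rather than an
appliance, the rough equivalent is a pair of local.cf entries.  The
addresses below are made-up examples, not anything we actually list:

    # /etc/mail/spamassassin/local.cf -- illustrative entries only
    # Always accept mail from a partner site that keeps tripping the filter
    whitelist_from  *@partner.example.edu
    # Always flag mail from a host that is a persistent spam source
    blacklist_from  *@open-relay.example.com

Note that whitelist_from is trivial to forge; whitelist_from_rcvd (which
also checks the relaying host) is the safer choice if you go that route.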

Obviously this does not come cheap; we pay a considerable amount (in
both initial cost and yearly maintenance), but we consider it
worthwhile.  Plus, if we have any problems, the unlimited telephone
support (provided with the mandatory maintenance contract) is very
handy.

Hope that helps,
Adam Stasiniewicz

-----Original Message-----
From: unisog-bounces at lists.dshield.org
[mailto:unisog-bounces at lists.dshield.org] On Behalf Of Russell Fulton
Sent: Monday, September 25, 2006 7:01 PM
To: UNIversity Security Operations Group
Subject: Re: [unisog] SP*M Detection Methods & Processes

Bill Martin wrote:
> One last bit on this, given the similarities in tools, design, etc...
> I would expect that statistics (rejected, tagged-passed, passed clean)
> would be roughly similar in percentages.  Given the recent escalation
> in "complaints" of "excessive spam getting through", I can only
> speculate that others are seeing similar results?

Absolutely.  I noted some of the factors that we are struggling with in
my previous post.  This morning I found a spam message in my mailbox
with an SA score of 0.0 -- the only thing SA had picked up was that it
was an HTML email.
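
The obvious local band-aid is a meta rule that adds a couple of points
when a message is HTML and carries its payload as an inline image.  The
sketch below is just that -- a sketch: the LOCAL_* names and the score
are invented, and we do not run this in production.

    # local.cf sketch: HTML mail built around an embedded (cid:) image
    rawbody  __LOCAL_CID_IMG   /<img[^>]+src=["']?cid:/i
    meta     LOCAL_IMAGE_ONLY  (HTML_MESSAGE && __LOCAL_CID_IMG)
    describe LOCAL_IMAGE_ONLY  HTML mail whose visible content is an inline image
    score    LOCAL_IMAGE_ONLY  2.0

The score has to stay low: plenty of legitimate newsletters and
signatures also embed images by cid:, so a rule like this can only nudge
a message toward the threshold, never condemn it on its own.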

Spammers have now taken things to a new level and SA is not coping well.

One question I have (and it is one some of our managers are asking): are
commercial products doing any better?  If you have an army of people
tweaking things on an hour-by-hour basis (as we now do with AV) you may
be able to make some progress, but it is going to be very expensive and,
I believe, futile in the end.

Anyway, I'd be interested in hearing from anyone who is using commercial
products as to how they are coping with the current wave of image spam.

We are playing with the FuzzyOcr plugin but have not put it into
production yet.  I view this as a short-term stopgap, as we have already
seen images with obscured fonts...
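
For anyone else experimenting, FuzzyOcr hooks into SA like any other
plugin.  The snippet below is illustrative only -- option and rule names
vary between FuzzyOcr releases, so check the sample FuzzyOcr.cf that
ships with the version you install:

    # Load the plugin; it needs external helpers (the gocr binary plus
    # the netpbm conversion tools) on the scanning host.
    loadplugin FuzzyOcr FuzzyOcr.pm
    # The plugin runs OCR over image attachments and compares the
    # recognised text against a word list of stock spam terms (drug
    # names, ticker symbols, ...); matches add to the message's SA score.

The costs are CPU (OCR on every image-bearing message) and exactly the
weakness above: once the fonts are obscured or the text is sliced across
several images, the OCR stops recognising much.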

Cheers, Russell
_______________________________________________
unisog mailing list
unisog at lists.dshield.org
http://lists.dshield.org/mailman/listinfo/unisog
