[Dshield] Security in Layers
cbrenton at chrisbrenton.org
Fri Mar 26 19:47:23 GMT 2004
On Thu, 2004-03-25 at 18:21, John Holmblad wrote:
> Most of
> these opinions, from what I can tell, are in reasonably close alignment
> with SANS thinking, but there are others which vary from SANS views.
I think you need to look at these tools as a "starting point" and not an
absolute authority on black and white. If you don't even know where to
begin, these tools help you get started in the right direction. Once you
get your feet under you however, you should modify them as appropriate
for your environment.
> know from a conversation that I had with Jesper Johanssen a few months
> ago when I asked for his thoughts on the CIS scoring tool that he has
> reservations about attempts to provide such quantitative scores.
I think Jesper's concern, and one shared by many in the field, is that
the uninformed will look at the grading the same way many view a pretty
GUI. The thought process is "let me just click my way through to a
secure environment and I should not have to learn what's actually taking
place under the hood". Obviously there are problems with this line of
thinking when addressing the complexities of security. A pretty GUI can
never replace a properly educated security analyst.
Personally, I think they are a great starting point. I think the point
system is somewhat arbitrary in that each item will actually have a
different weight depending on the environment (though good luck
building that variable into the system), but at least it shows the
security person behind the console what they need to think about. Of
course the key is to get them thinking, not necessarily reacting blindly
to a pre-determined point system.
We seem to keep hoping that we can make security as easy as driving a
car. Sooner or later we'll learn that it's more like flying a plane: it
is not going to happen without proper training.
> From a speech several
> months ago that Alan Paller, research director at SANS gave to a US
> government audience I got the clear sense that he also believes it is
> high time to develop more such quantitative and objective measures of
> security than those which are currently available.
The problem is metrics. Let me give just one example:
The Router Auditing Tool (RAT) is probably one of the best CIS tools
that have been created. It does an excellent job of flagging config
problems with a Cisco router. One of the items it looks for is "no ip
directed-broadcast", which disables the mapping of layer 3 broadcasts to
layer 2. The RFCs state that this should be disabled on any router
interface facing a network with more than 2 IP addresses (in other
words, any subnet mask besides a /30). This is to help prevent turning
your network into a Smurf amplifier.
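For reference, here is a minimal sketch of the setting RAT is checking
for (assuming a Cisco IOS router; the interface name is just an
illustration):

```
! Illustrative interface -- substitute your own LAN-facing interface.
! Disabling directed broadcasts stops this router from translating a
! layer 3 broadcast (e.g. x.x.x.255) into a layer 2 broadcast frame,
! which is what a Smurf attacker relies on for amplification.
interface FastEthernet0/0
 no ip directed-broadcast
```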
Now, if we are talking about an exposed ISP backbone, I totally concur.
But if we are talking about a network sitting behind a proper perimeter
that relies on tools like the HP JetDirect software or SNMP monitoring
tools, disabling this option will break all of them. You've just
killed your ability to monitor devices in the interest of security.
So directed-broadcast is obviously something you need to give some
thought to if RAT flags it, but at the same time you don't want to just
disable it without understanding the impact it will have on your network.
Now, when it comes to trying to create an across the board scoring
system, do you score negative points against this network for leaving
directed-broadcast enabled? Do you force them to break their tools in an
attempt to meet a general standard, even if they have mitigated the
actual problem through other means (like blocking all broadcasts
originating from the Internet at the firewall)? Do you give them an
exemption, in which case you've now just created a loophole that someone
else who does actually have a problem may be able to sneak through?
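As a sketch of what "mitigating through other means" could look like,
here is a hypothetical inbound ACL on an Internet-facing Cisco IOS
interface (the 192.168.1.0/24 internal subnet, ACL number, and interface
name are all illustrative, not from any real config):

```
! Drop anything from the Internet aimed at the internal subnet's
! broadcast address, so outside hosts can't trigger amplification,
! while internal monitoring tools keep working.
access-list 101 deny ip any host 192.168.1.255
access-list 101 permit ip any any
!
interface Serial0/0
 ip access-group 101 in
```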
I think you can see where I'm going with this. It's possible to create
this kind of an argument for just about any security line item. This
makes it pretty much impossible to develop a single canned tool that
reports appropriately in every single environment. The key is educating
the person behind the tool.
> I should add that other industry voices have expressed concern about the
> complexity of managing an IDS environment so in that sense Jesper
> Johanssen is not a lone voice although he may be in the minority.
This is another thing that I think these tools help to address: the "oh,
security is just too hard, so I'll just do nothing" attitude.
> Recall the following ruckus from last summer (now old news) after
> Gartner's declaration of the death of IDS:
I think the problem is many people want a "cure all" that takes care of
all security issues. When IDS did not fit the bill, people like Gartner
jumped on the "IDS sucks" bandwagon. IDS is a tool, just like any other
you would use to secure your perimeter. It has its strengths and
weaknesses, just like everything else. The idea of defense in-depth is
to leverage its strengths and augment its weaknesses with some other
tool. That, or make an informed decision to accept the additional risk.