Please allow me to get a bit philosophical today, about highly complex, dynamic systems. I promise, at the end there is a rather important network angle to all of this...
The Butterfly Effect
In chaos theory, the term butterfly effect describes a particularly interesting observation: Given a sufficiently complex and dynamic system, even the smallest variation in the starting conditions will result in unpredictable long-term behavior. The effect probably got its name from the illustration that even the gentle flapping of a butterfly's wings can potentially influence the earth's weather and climate (a phenomenally complex system) in the long run. In other words, if this butterfly had not flapped its wings, then maybe we would not have had a storm on the other side of the world a few months later.
Incredible as this may sound, such is the behavior of highly complex, dynamic systems: Changing just a single parameter may eventually result in a completely different and unexpected outcome. The butterfly effect can be demonstrated with various mathematical formulas. Real-world systems usually 'suffer' from permanent additional random input, which makes accurate predictions about their behavior even more difficult. On the most basic level of matter, the Brownian motion of particles is completely unpredictable. If you allow me to stretch the metaphor further, Brownian motion essentially transforms every single atom into a randomly flapping little butterfly. Keeping this picture in mind, you can see that any attempt at accurate long-term prediction is doomed to failure. Our notoriously inaccurate long-term weather forecasts are a good example.
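To make this a little more concrete, here is a small sketch in Python using the logistic map, one of the simplest formulas known to show this sensitivity. The starting values and step counts are arbitrary; the point is only how quickly two nearly identical starting conditions drift apart.

```python
# A minimal sketch of the butterfly effect, using the logistic map
# x -> r * x * (1 - x) in its chaotic regime (r = 4.0).
# Two starting values that differ only in the ninth decimal place quickly diverge.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # one starting condition...
b = logistic_trajectory(0.200000001)   # ...and one that is almost, but not quite, the same

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (difference {abs(a[n] - b[n]):.6f})")
```

For the first few iterations the two runs look identical; by roughly step thirty they have nothing to do with each other anymore, even though the starting values differed only in the ninth decimal place.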
Interestingly, we are surrounded by such complex systems in our daily life. The air that flows over our cars forms chaotic vortices behind them; raindrops run down our windows along unpredictable paths. Yet, although these effects are often entirely unpredictable, we are very rarely surprised by them. Why is that? Because we have learned to deal with them on a macroscopic, rather than microscopic, level.
Take the example of the raindrops on the window: We may not be able to predict their exact path, but we do know that eventually the water will flow down. The drop may zig or zag, but unless it gets stuck somewhere halfway, it will eventually make its way downward. Intuitively, we realize that any attempt to predict the exact motion of those drops is entirely futile. Our understanding of the world is sufficiently confirmed by the simple fact that the drops will eventually find their way down. That's all we need to know and all we can know.
The whole is more than the sum of the parts
Here is another interesting observation that contributes to the unpredictable behavior of complex systems. 1+1 is 2. In math this is simple. In the real world, however, there are cases where the whole is certainly different from the sum of the parts.
For example, take two sufficiently sized lumps of weapons-grade uranium. Put them together. What do you get? Not one big lump of uranium, but instead an entirely surprising crater in the ground. Another good example is our brain. It is made up of a huge number of very simple cells, the neurons, which are connected via synapses. Put it all together, and you don't have a blob of cells, but something rather wonderful and astonishing: A brain capable of thoughts and memories.
Again, we are surrounded by examples of this. In fact, we are an example of this effect. Yet we tend not to think about it at all. Why? Because in many cases, we see the result of the whole before we even realize that there are all those parts playing a role in it.
And what does this have to do with networks?
Glad you asked...
Consider today's networks: They are getting more complex, no doubt about it. There are more vendors, more pieces of equipment, more architectures, more people and computers, more use models and applications. So, not only are the individual networks more complex than in the past, they are also more unique. The uniqueness is not surprising, considering the ever-increasing number of variables that define each network. No two organizations will have the exact same network.
So, now imagine one of those overtaxed corporate or provider networks, operating more or less within acceptable boundaries. Suddenly, and quite possibly completely outside the control of the network operator, a new application is unleashed onto the network. Skype is an application like that. A worm outbreak is an extreme case of such an application. P2P traffic is another example, one that has been building up, morphing, and shifting over the last couple of years. So, what happens when something new is added to the network, be it another application, another piece of equipment, or another batch of users? The truth is that very often, not even the network operator will know...
Many operators try to deal with this on a macroscopic level. As we have seen, this is the natural tendency for us. So they add large amounts of excess capacity to the network, hoping it will be able to deal with whatever comes along. But of course, this is inefficient. Corporate network operators don't even have that option, since they need to worry about more than just bandwidth and availability - they also need to ensure the security of the network and its attached computers.
Chaotic complexity of networks
Quite often, we hear from potential customers that they don't even know anymore what exactly is happening on their network. That is how complex these systems have become. The fact that there is some excess capacity has often been the saving grace of those installations. Just as a huge number of chaotically moving water drops can be controlled on a macroscopic level by forcing them all through a water pipe, many network operators have taken a step back and simply hope that the excess capacity will have the same effect: It all keeps flowing, even though we have no idea what is really going on within the pipe.
But the moment more fine-grained control is required, for example for detailed SLAs or just for network security considerations, this approach fails. Then we are suddenly back to rules, signatures and policies. What do they represent? A microscopic approach to network and traffic management.
Remember the raindrops running down the window? We have seen that their motion is unpredictable. If you observe a single drop, you can probably make a pretty good prediction of what its motion will be like in the next tenth of a second (a short-term forecast), but after that, predictions become inaccurate.
Networks have demonstrably reached a point where exact predictions about their behavior are not possible anymore. Change a little bit, add a little bit, and the outcome is unpredictable and often surprising.
The futility of rules
Why, then, do we still rely on rules, policies and signatures in our attempt to control those networks? It is essentially a law of nature that a microscopic control approach is not well suited at all to a complex, and possibly chaotic, system.
Firstly, it has become quite impossible to write enough rules to cover all the use cases and situations for a network. Secondly, once there is just a slight change to the network, the unpredictable nature of its behavior may render many of those rules useless. As a result, maintaining rule sets and policies for complex networks quickly becomes a never-ending Sisyphean task.
Taking an intelligent macro-view
Networks are becoming more complex and more unique. The number of variables that describe those complex systems is always increasing. In light of this, I propose that operators of complex networks should not rely on controls at the microscopic level. Instead, a macro-view of the network needs to be taken. Instead of providing rules for every network condition, the overall behavior of the network should be considered. This leads us to the field of behavioral anomaly detection.
I am not proposing that all rule- and signature-based systems should be torn out of a network installation. These systems are useful for providing some basic boundaries within which the network traffic has to operate - almost like the water pipes for the chaotically moving water. However, it is not possible to exhaustively describe the behavior of the network with those rules. Instead of wasting time and resources attempting the impossible, it makes much more sense to complement the network architecture with a rule-less behavioral anomaly detection system. Such a system is able to detect macro-trends (comparable to the observation that raindrops run downward), without having to know the exact details of every single packet.
If the anomaly detection system is good, it will manufacture detailed rules and signatures on the fly, which allow the operator to handle anomalies as they happen. This allows mitigation without having to express ahead of time, in rules and signatures, what to look for. These on-the-fly rules apply to the behavior of the network right now - comparable to a more accurate short-term forecast versus a much less accurate long-term forecast.
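To make the idea a little more tangible, here is a deliberately simple sketch in Python. It is not a description of any particular product - just an illustration of the macro-view: observe only aggregate packet rates per destination port, flag a port whose rate suddenly towers over its own recent behavior, and emit a provisional, human-readable rule for it on the fly. The port numbers, thresholds and the 'rate-limit' rule syntax are all made up for the example.

```python
# A rough sketch (not any vendor's actual method) of macro-level observation
# generating rules on the fly: watch only aggregate packet counts per port,
# flag a port that suddenly dwarfs its own recent behavior, and emit a
# provisional rule naming it.

from collections import defaultdict, deque
import random
import statistics

WINDOW = 30            # number of recent measurement intervals to remember per port
THRESHOLD_SIGMAS = 5   # how far above its own recent behavior a port has to go

history = defaultdict(lambda: deque(maxlen=WINDOW))   # port -> recent packet rates

def observe_interval(packet_counts):
    """packet_counts: dict of destination port -> packets seen in the last interval.
    Returns provisional, auto-generated rules for ports that look anomalous."""
    rules = []
    for port, count in packet_counts.items():
        past = history[port]
        if len(past) >= 10:   # only judge once there is some recent context
            mean = statistics.mean(past)
            stdev = statistics.pstdev(past) or 1.0
            if count > mean + THRESHOLD_SIGMAS * stdev:
                rules.append(f"rate-limit dst-port {port}   # provisional, auto-generated")
        past.append(count)
    return rules

# Example: normal, slightly noisy traffic for a while...
random.seed(1)
for _ in range(20):
    observe_interval({80: 1000 + random.randint(-50, 50),
                      443: 900 + random.randint(-40, 40),
                      1434: random.randint(3, 8)})

# ...then a sudden flood on port 1434 stands out against its own recent history.
print(observe_interval({80: 1010, 443: 890, 1434: 4000}))
```

The point is not the particular statistic used, but that the rule names the offending traffic only after the macro-level observation has singled it out - nobody had to anticipate port 1434 ahead of time.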
Conclusion
I hope I was able to provide some food for thought and highlight some aspects of the nature of complex systems. Today's networks approach complexity levels that already result in unpredictable, chaotic behavior. The butterfly effect and the simple statement that the whole is often more than the sum of the parts nicely describe the situation.
Chaotic systems are inherently unpredictable in their behavior. Therefore, any attempt to express predictions is doomed from the outset. If we can agree that many networks may exhibit complex and possibly chaotic behavior, then it is instantly obvious that some aspects cannot possibly be covered by rule and signature sets. Writing those sets and maintaining them can be utterly frustrating and, in the end, useless.
As a solution to this dilemma, I propose the deployment of behavioral anomaly detection systems, which are able to observe the network and provide more accurate rules on the fly and in real time, taking into account the current network condition and actually observed anomalies.
So, next time your network faces a meltdown, remember the butterfly and the raindrops. That is probably a good time to recall that pre-supplied rules and signatures are no more accurate than long-term weather forecasts, and that something more intelligent is needed in the network.
Juergen
I enjoyed this posting very much. My question regarding anomaly detection is this: If you were to develop/deploy a system for anomaly detection in a network (be it for something like IDS or QoS or whatever), you would have to assume that at the time of deployment your network was acting "normal" and use that as your baseline for detecting anomalies. What if your network already contained something it shouldn't, or acted in a way it shouldn't? This would contaminate the baseline, and the anomaly detection "tool" would consider this normal, right? So at what point in the life cycle of a network is it appropriate to install something that does anomaly detection? Does anomaly detection need to run parallel to something that does signature-based review? OK, that was more than one question. Again, a nice article which I enjoyed.
Posted by: MP | November 12, 2005 at 06:34 AM
Thank you for the feedback on the article, and sorry for the late reply.
You bring up a very good point: If you have a baselining anomaly detection system, then you need very clean traffic conditions during the baselining phase. That, among other reasons, has compelled us to design an anomaly detection solution that does not rely on baselines at all. For more information about this, see my blog entries here (http://esphion.blogs.com/esphion/2005/10/anomaly_detecti.html) and here (http://esphion.blogs.com/esphion/2005/07/anomaly_detecti.html).
As a result, our solution is more dynamic than any baselining solution. Even if an anomaly is in full swing during the first deployment, our system would not 'learn' it as normal. If the anomaly is already present when our solution first gets a look at that network, then it might not alert to it. However, once the anomaly stops and starts again, we would detect it.
The moment you start to use baselines, you begin to rely on prior knowledge. And once that happens, you are more prone to false positives, or to the kinds of problems you have mentioned.
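Just to give a flavor of what detection without a learned baseline can look like - this is a deliberately simplified sketch of the general idea, not our actual algorithm - consider comparing the most recent few samples of a traffic metric against the few samples immediately before them and alerting on a sharp shift. An anomaly that is already running when the observer starts looks like 'no change', but the moment it stops or restarts, the shift becomes visible:

```python
# A simplified illustration (NOT the actual product algorithm) of detection
# without a learned baseline: compare the most recent short window of a
# traffic metric against the window immediately before it, and alert on a
# sharp relative shift between the two.

from collections import deque

class ChangeDetector:
    """Flags sharp shifts between two adjacent short windows of a traffic metric."""

    def __init__(self, window=10, ratio=3.0):
        self.samples = deque(maxlen=2 * window)   # holds the two adjacent windows
        self.window = window
        self.ratio = ratio

    def update(self, value):
        """Feed one measurement (e.g. SYN packets seen in the last second).
        Returns True when the recent window shifts sharply versus the one before it."""
        self.samples.append(value)
        if len(self.samples) < 2 * self.window:
            return False
        older = list(self.samples)[:self.window]
        recent = list(self.samples)[self.window:]
        old_avg = sum(older) / self.window
        new_avg = sum(recent) / self.window
        return new_avg > self.ratio * old_avg or old_avg > self.ratio * new_avg

detector = ChangeDetector()
stream = [500] * 30 + [20000] * 30        # an anomaly begins halfway through
alerts = [i for i, v in enumerate(stream) if detector.update(v)]
print(alerts)                             # alerts cluster right after the jump at i == 30
```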
Juergen
Posted by: Juergen Brendel | November 14, 2005 at 01:53 PM