Evaluating the Effects of Model Generalization on Intrusion Detection Performance.
- Source :
- New Approaches for Security, Privacy & Trust in Complex Environments; 2007, p421-432, 12p
- Publication Year :
- 2007
Abstract
- An intrusion detection system usually infers the status of unknown behaviors from the limited behaviors it has observed via model generalization, but this generalization is imperfect. Most existing techniques apply it blindly, or at best tune it to specific datasets, without considering the differences among application scenarios: signature-based systems use signatures generated in specific environments, and anomaly-based systems are usually evaluated on a single dataset. To make matters worse, several recently introduced techniques exploit generalization that is either too stingy or too generous in order to render intrusion detection ineffective, for example mimicry attacks and automatic generation of signature variations. Evaluating the effects of model generalization is therefore a critical task in intrusion detection, and this paper addresses it. We first divide model generalization into several levels and evaluate them one by one to identify their significance for intrusion detection. Our experimental results show that the significance of the different levels varies widely: under-generalization sacrifices detection performance, while over-generalization brings no additional benefit. Moreover, model generalization is necessary to identify more behaviors during detection, but its implications for normal behaviors differ from those for intrusive ones. [ABSTRACT FROM AUTHOR]
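- The abstract's point about "too stingy" versus "too generous" generalization can be illustrated with a toy signature-matching sketch. The example below is not from the paper; the payloads, patterns, and function names are hypothetical, and it only shows how an exact-match (under-generalized) signature misses a trivial attack variant while an over-generalized signature gains no extra true detections and flags benign traffic.

```python
import re

# Hypothetical HTTP payloads: two attack variants (command injection) and one benign request.
payloads = {
    "attack_original": "GET /cgi-bin/test.cgi?cmd=;cat%20/etc/passwd",
    "attack_variant":  "GET /cgi-bin/test.cgi?cmd=;cat%20/etc/shadow",  # trivial variation
    "benign":          "GET /docs/passwd_policy.html",
}

def match_exact(payload):
    # Under-generalized signature: exact substring from one observed attack.
    # Misses the trivial variant (false negative).
    return "cmd=;cat%20/etc/passwd" in payload

def match_generalized(payload):
    # Moderately generalized signature: captures the injection pattern itself,
    # so both attack variants are detected and the benign request is not.
    return re.search(r"cmd=;[a-z]+%20/etc/\w+", payload) is not None

def match_overgeneralized(payload):
    # Over-generalized signature: any mention of "passwd".
    # Adds no new true detections here but flags the benign request (false positive).
    return "passwd" in payload

for name, payload in payloads.items():
    print(f"{name:16s} exact={match_exact(payload)!s:5s} "
          f"generalized={match_generalized(payload)!s:5s} "
          f"over={match_overgeneralized(payload)}")
```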
Details
- Language :
- English
- ISBNs :
- 9780387723662
- Database :
- Complementary Index
- Journal :
- New Approaches for Security, Privacy & Trust in Complex Environments
- Publication Type :
- Book
- Accession number :
- 33754153
- Full Text :
- https://doi.org/10.1007/978-0-387-72367-9_36