Publication Detail
Reducing memory in high-speed packet classification
PUŠ, V.; KOŘENEK, J.
Original Title
Reducing memory in high-speed packet classification
English Title
Reducing memory in high-speed packet classification
Language
en
Original Abstract
Many packet classification algorithms have been proposed to deal with the rapidly growing speed of computer networks. Unfortunately, all of these algorithms achieve high throughput only at the cost of excessively large memory and can therefore be used only for small rule sets. We propose a new algorithm that uses four techniques to lower memory requirements: division of the rule set into subsets, removal of critical rules, prefix coloring, and perfect hashing. The algorithm is designed for a pipelined hardware implementation, can achieve a throughput of 266 million packets per second, which corresponds to 178 Gb/s for the shortest 64 B packets, and outperforms older approaches in terms of memory requirements by 66% on average for the rule sets available to us.
English Abstract
Many packet classification algorithms have been proposed to deal with the rapidly growing speed of computer networks. Unfortunately, all of these algorithms achieve high throughput only at the cost of excessively large memory and can therefore be used only for small rule sets. We propose a new algorithm that uses four techniques to lower memory requirements: division of the rule set into subsets, removal of critical rules, prefix coloring, and perfect hashing. The algorithm is designed for a pipelined hardware implementation, can achieve a throughput of 266 million packets per second, which corresponds to 178 Gb/s for the shortest 64 B packets, and outperforms older approaches in terms of memory requirements by 66% on average for the rule sets available to us.
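A note on the throughput figure: assuming the standard Ethernet per-frame overhead of 20 B (8 B preamble plus 12 B inter-frame gap), each 64 B packet occupies 84 B on the wire, so 266 x 10^6 packets/s x 84 B x 8 bits/B ≈ 178.8 Gb/s, consistent with the stated 178 Gb/s.

The sketch below is a minimal illustration, not the authors' implementation, of two of the four techniques named in the abstract: splitting the rule set into subsets and probing each subset with a collision-free (perfect) hash table, so a lookup costs one memory access per subset and maps naturally onto pipeline stages in hardware. All names, sizes, and the hash function are hypothetical, and exact-match keys stand in for the paper's prefix-colored fields.

/* Minimal sketch: rules split into subsets, each subset searched with a
 * collision-free hash table so every probe is one memory access. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_SUBSETS 4      /* illustrative; the paper's subset count differs */
#define TABLE_SIZE  16     /* per-subset table size, power of two */

typedef struct {
    uint32_t src_key;      /* stand-in for a prefix-colored source field */
    uint32_t dst_key;      /* stand-in for a prefix-colored destination field */
    int      rule_id;      /* -1 marks an empty slot */
} entry_t;

static entry_t  tables[NUM_SUBSETS][TABLE_SIZE];
static uint32_t seeds[NUM_SUBSETS];   /* one precomputed seed per subset */

/* Stand-in for a perfect hash function: a real one is constructed offline
 * so that the rules placed in one subset never collide. */
static size_t hash(uint32_t src, uint32_t dst, uint32_t seed) {
    uint32_t h = (src ^ (dst * 0x9E3779B9u)) + seed;
    return (h >> 4) & (TABLE_SIZE - 1);
}

/* Classify one packet header: probe each subset exactly once; because the
 * tables are collision-free, the probes have a fixed memory cost and can
 * run as consecutive pipeline stages. */
int classify(uint32_t src, uint32_t dst) {
    int best = -1;
    for (int s = 0; s < NUM_SUBSETS; s++) {
        entry_t *e = &tables[s][hash(src, dst, seeds[s])];
        if (e->rule_id >= 0 && e->src_key == src && e->dst_key == dst)
            if (best < 0 || e->rule_id < best)   /* lower id = higher priority */
                best = e->rule_id;
    }
    return best;  /* -1: no match; the paper handles "critical rules" removed
                     from the subsets in a separate small structure */
}

int main(void) {
    for (int s = 0; s < NUM_SUBSETS; s++)       /* mark all slots empty */
        for (int i = 0; i < TABLE_SIZE; i++)
            tables[s][i].rule_id = -1;

    /* install one hypothetical rule into subset 0 */
    size_t slot = hash(0x0A000001u, 0xC0A80001u, seeds[0]);
    tables[0][slot] = (entry_t){0x0A000001u, 0xC0A80001u, 7};

    printf("matched rule %d\n", classify(0x0A000001u, 0xC0A80001u));
    return 0;
}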
Documents
BibTeX
@inproceedings{BUT97051,
author="Viktor {Puš} and Jan {Kořenek}",
title="Reducing memory in high-speed packet classification",
annote="Many packet classification algorithms were proposed to deal with the rapidly
growing speed of computer networks. Unfortunately all of these algorithms are
able to achieve high throughput only at the cost of excessively large memory and
can be used only for small sets of rules. We propose new algorithm that uses four
techniques to lower the memory requirements: division of rule set into subsets,
removal of critical rules, prefix coloring and perfect hashing. The algorithm is
designed for pipelined hardware implementation, can achieve the throughput of 266
million packets per second, which corresponds to 178 Gb/s for the shortest 64B
packets, and outperforms older approaches in terms of memory requirements by 66 %
in average for the rule sets available to us.",
address="Frederick University",
booktitle="Proceedings of the 8th International Wireless Communications and Mobile Computing Conference",
chapter="97051",
edition="NOT STATED",
howpublished="print",
institution="Frederick University",
year="2012",
month=oct,
pages="437--442",
publisher="Frederick University",
type="conference paper"
}