Publication detail

Towards Hardware Architecture for Memory Efficient IPv4/IPv6 Lookup in 100 Gbps Networks

Original title

Towards Hardware Architecture for Memory Efficient IPv4/IPv6 Lookup in 100 Gbps Networks

English title

Towards Hardware Architecture for Memory Efficient IPv4/IPv6 Lookup in 100 Gbps Networks

Language

en

Original abstract

With the growing speed of computer networks, core routers have to increase the performance of the longest prefix match (LPM) operation on IP addresses. While existing LPM algorithms are able to achieve high throughput for IPv4 addresses, IPv6 processing speed is limited. To achieve 100 Gbps throughput, the LPM operation has to be performed in dedicated hardware and the forwarding table has to fit into on-chip memory. Current LPM algorithms need a large memory to store IPv6 forwarding tables or use compression with dynamic data structures, which cannot be easily implemented in hardware. Therefore, we provide an analysis of available forwarding tables of core routers and propose a new representation of prefix sets. The proposed representation has very low memory demands and is suitable for high-speed pipelined processing, which is demonstrated on a new highly pipelined hardware architecture with 100 Gbps throughput.
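To illustrate the LPM operation the abstract refers to, the sketch below performs a longest prefix match over a tiny hypothetical IPv4 forwarding table using a linear scan. This is only a minimal software illustration of the semantics; it is not the paper's memory-efficient representation or its pipelined hardware architecture, and the table contents are invented for the example.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> next-hop identifier.
# Overlapping prefixes are intentional; LPM must pick the most specific one.
TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "A",
    ipaddress.ip_network("10.1.0.0/16"): "B",
    ipaddress.ip_network("10.1.2.0/24"): "C",
}

def longest_prefix_match(addr):
    """Return the next hop of the longest matching prefix, or None."""
    ip = ipaddress.ip_address(addr)
    best_hop, best_len = None, -1
    for net, hop in TABLE.items():
        # Keep the match with the greatest prefix length seen so far.
        if ip in net and net.prefixlen > best_len:
            best_hop, best_len = hop, net.prefixlen
    return best_hop
```

A linear scan is O(n) per lookup; the point of trie-based algorithms and the hardware pipeline discussed in the paper is to bound lookup latency independently of the table size.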

English abstract

With the growing speed of computer networks, core routers have to increase the performance of the longest prefix match (LPM) operation on IP addresses. While existing LPM algorithms are able to achieve high throughput for IPv4 addresses, IPv6 processing speed is limited. To achieve 100 Gbps throughput, the LPM operation has to be performed in dedicated hardware and the forwarding table has to fit into on-chip memory. Current LPM algorithms need a large memory to store IPv6 forwarding tables or use compression with dynamic data structures, which cannot be easily implemented in hardware. Therefore, we provide an analysis of available forwarding tables of core routers and propose a new representation of prefix sets. The proposed representation has very low memory demands and is suitable for high-speed pipelined processing, which is demonstrated on a new highly pipelined hardware architecture with 100 Gbps throughput.

BibTex


@inproceedings{BUT103466,
  author="Jiří {Matoušek} and Martin {Skačan} and Jan {Kořenek}",
  title="Towards Hardware Architecture for Memory Efficient IPv4/IPv6 Lookup in 100 Gbps Networks",
  annote="With the growing speed of computer networks, core routers have to increase
the performance of the longest prefix match (LPM) operation on IP addresses.
While existing LPM algorithms are able to achieve high throughput for IPv4
addresses, IPv6 processing speed is limited. To achieve 100 Gbps throughput, the
LPM operation has to be performed in dedicated hardware and the forwarding table
has to fit into on-chip memory. Current LPM algorithms need a large memory to
store IPv6 forwarding tables or use compression with dynamic data structures,
which cannot be easily implemented in hardware. Therefore, we provide an analysis
of available forwarding tables of core routers and propose a new representation
of prefix sets. The proposed representation has very low memory demands and is
suitable for high-speed pipelined processing, which is demonstrated on a new
highly pipelined hardware architecture with 100 Gbps throughput.",
  address="IEEE Computer Society",
  booktitle="Proceedings of the 2013 IEEE 16th International Symposium on Design and Diagnostics of Electronic Circuits and Systems, DDECS 2013",
  chapter="103466",
  doi="10.1109/DDECS.2013.6549798",
  edition="NEUVEDEN",
  howpublished="print",
  institution="IEEE Computer Society",
  year="2013",
  month="apr",
  pages="108--111",
  publisher="IEEE Computer Society",
  type="conference paper"
}