Publication Detail

Parallel Genetic Algorithm Solving 0/1 Knapsack Problem Running on the GPU

Original Title

Parallel Genetic Algorithm Solving 0/1 Knapsack Problem Running on the GPU

English Title

Parallel Genetic Algorithm Solving 0/1 Knapsack Problem Running on the GPU

Language

en

Original Abstract

In this work, we show that a consumer-level $100 GPU can be used to significantly speed up optimization of the 0/1 knapsack problem. We identify the strong and weak points of the GPU architecture and propose a parallel genetic algorithm model implemented in CUDA that runs entirely on the GPU. We show that the GPU must be utilized for a sufficiently long time in order to obtain a reasonable program speedup. We then compare the result quality and speed of our model with a single-threaded CPU implementation based on the GAlib library. The peak speedup of GPU GA execution is 1340x for 4-bit and 134x for 40-bit problem instances, while maintaining reasonable result quality.

English Abstract

In this work, we show that a consumer-level $100 GPU can be used to significantly speed up optimization of the 0/1 knapsack problem. We identify the strong and weak points of the GPU architecture and propose a parallel genetic algorithm model implemented in CUDA that runs entirely on the GPU. We show that the GPU must be utilized for a sufficiently long time in order to obtain a reasonable program speedup. We then compare the result quality and speed of our model with a single-threaded CPU implementation based on the GAlib library. The peak speedup of GPU GA execution is 1340x for 4-bit and 134x for 40-bit problem instances, while maintaining reasonable result quality.
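
The record itself contains no source code. Purely as an illustration of the kind of GPU-resident evaluation step the abstract describes, the following is a minimal CUDA sketch of a 0/1 knapsack fitness kernel with one thread per individual. All names (evaluate_population, ITEMS, POP_SIZE), the data layout, and the infeasibility penalty are assumptions for the sketch, not the authors' implementation.

// Minimal CUDA sketch (not the authors' code): fitness evaluation of a
// 0/1 knapsack population, one thread per individual. Item weights and
// values sit in constant memory; chromosomes are stored one byte per
// gene for simplicity. Sizes and the penalty scheme are assumptions.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define ITEMS     40       // genes per chromosome (assumed)
#define POP_SIZE  256      // individuals in the population (assumed)
#define CAPACITY  100.0f   // knapsack capacity (assumed)

__constant__ float d_weight[ITEMS];
__constant__ float d_value[ITEMS];

__global__ void evaluate_population(const unsigned char *genes, float *fitness)
{
    int ind = blockIdx.x * blockDim.x + threadIdx.x;
    if (ind >= POP_SIZE) return;

    const unsigned char *chrom = genes + ind * ITEMS;
    float weight = 0.0f, value = 0.0f;
    for (int i = 0; i < ITEMS; ++i) {
        if (chrom[i]) {               // gene set -> item packed
            weight += d_weight[i];
            value  += d_value[i];
        }
    }
    // Simple death penalty for infeasible individuals (assumption).
    fitness[ind] = (weight <= CAPACITY) ? value : 0.0f;
}

int main()
{
    // Host-side toy data, only to make the sketch self-contained.
    float h_weight[ITEMS], h_value[ITEMS];
    unsigned char h_genes[POP_SIZE * ITEMS];
    for (int i = 0; i < ITEMS; ++i) { h_weight[i] = 1.0f + i % 7; h_value[i] = 2.0f + i % 5; }
    for (int i = 0; i < POP_SIZE * ITEMS; ++i) h_genes[i] = rand() & 1;

    cudaMemcpyToSymbol(d_weight, h_weight, sizeof(h_weight));
    cudaMemcpyToSymbol(d_value,  h_value,  sizeof(h_value));

    unsigned char *d_genes; float *d_fitness;
    cudaMalloc(&d_genes, sizeof(h_genes));
    cudaMalloc(&d_fitness, POP_SIZE * sizeof(float));
    cudaMemcpy(d_genes, h_genes, sizeof(h_genes), cudaMemcpyHostToDevice);

    evaluate_population<<<(POP_SIZE + 127) / 128, 128>>>(d_genes, d_fitness);

    float h_fitness[POP_SIZE];
    cudaMemcpy(h_fitness, d_fitness, sizeof(h_fitness), cudaMemcpyDeviceToHost);
    printf("fitness[0] = %f\n", h_fitness[0]);

    cudaFree(d_genes); cudaFree(d_fitness);
    return 0;
}

In a GA that runs entirely on the GPU, as the abstract states, selection, crossover, and mutation would also be performed in device kernels so that the population never has to be copied back to the host between generations; the sketch above covers only the evaluation step.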

BibTeX


@inproceedings{BUT34660,
  author="Petr {Pospíchal} and Josef {Schwarz} and Jiří {Jaroš}",
  title="Parallel Genetic Algorithm Solving 0/1 Knapsack Problem Running on the GPU",
  annote="In this work, we show that consumer-level $100 GPU can be used to significantly
speed-up optimization of 0/1 Knapsack problem. We identify strong and weak points
of GPU architecture and propose our parallel genetic algorithm model implemented
in CUDA running entirely on the GPU. We show that GPU must be utilized for
sufficiently long time in order to obtain reasonable program speedup. Then we
compare results quality and speed of our model with single-threaded CPU code
implemented using Galib. Peak speedup of GPU GA execution performance is 1340x
resp. 134x for 4-bit resp. 40-bit problem instances while maintaining reasonable
results quality.",
  address="Brno University of Technology",
  booktitle="16th International Conference on Soft Computing MENDEL 2010",
  chapter="34660",
  edition="NEUVEDEN",
  howpublished="online",
  institution="Brno University of Technology",
  year="2010",
  month="july",
  pages="64--70",
  publisher="Brno University of Technology",
  type="conference paper"
}