Publication Detail

Acceleration of grammatical evolution using graphics processing units: computational intelligence on consumer games and graphics hardware

Original Title

Acceleration of grammatical evolution using graphics processing units: computational intelligence on consumer games and graphics hardware

English Title

Acceleration of grammatical evolution using graphics processing units: computational intelligence on consumer games and graphics hardware

Language

en

Original Abstract

Several papers show that symbolic regression is suitable for data analysis and prediction in financial markets. Grammatical Evolution (GE), a grammar-based form of Genetic Programming (GP), has been successfully applied to various tasks including symbolic regression. However, the computational effort needed to calculate the fitness of a solution in GP can often limit the range of possible applications and/or the extent of experimentation undertaken. This paper deals with utilizing mainstream graphics processing units (GPUs) to accelerate GE solving symbolic regression. GPU optimization details are discussed and the NVCC compiler is analyzed. We design an effective mapping of the algorithm to the CUDA framework and, in doing so, must tackle constraints of the GPU approach such as the PCI Express bottleneck and main-memory transactions. This is the first time GE has been adapted to run on a GPU. We measure our implementation running on one core of a Core i7 CPU and on a GTX 480 GPU, and compare it with GEVA, a GE library written in Java. Results indicate that our algorithm offers the same convergence and is well suited to larger numbers of regression points, where the GPU reaches speedups of up to 39 times compared with GEVA and with serial CPU code written in C. In conclusion, properly utilized, the GPU can offer an interesting performance boost for GE tackling symbolic regression.
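
To illustrate the main source of the reported speedup, the sketch below evaluates a single evolved expression over many regression points in parallel, one CUDA thread per point, and keeps the training data resident on the GPU so it crosses the PCI Express bus only once. This is a minimal sketch, not the implementation described in the paper: the postfix encoding, the toy instruction set, and the evalFitness kernel are illustrative assumptions.

#include <cstdio>
#include <cuda_runtime.h>

enum Op { PUSH_X = 0, PUSH_CONST = 1, ADD = 2, MUL = 3 };   // toy instruction set (assumed)

// Each thread interprets the same postfix program on its own regression point
// and stores the squared error against the target value.
__global__ void evalFitness(const int *prog, const float *consts, int progLen,
                            const float *x, const float *y, float *sqErr, int nPoints)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nPoints) return;

    float stack[16];
    int sp = 0, c = 0;
    for (int p = 0; p < progLen; ++p) {
        switch (prog[p]) {
            case PUSH_X:     stack[sp++] = x[i];        break;
            case PUSH_CONST: stack[sp++] = consts[c++]; break;
            case ADD: sp--; stack[sp - 1] += stack[sp]; break;
            case MUL: sp--; stack[sp - 1] *= stack[sp]; break;
        }
    }
    float diff = stack[0] - y[i];
    sqErr[i] = diff * diff;   // reduced to one fitness value on the host below
}

int main()
{
    const int N = 1 << 20;   // number of regression points
    // candidate expression x*x + 1.0 encoded in postfix: x x * c0 +
    const int   hProg[]   = { PUSH_X, PUSH_X, MUL, PUSH_CONST, ADD };
    const float hConsts[] = { 1.0f };

    float *hx = new float[N], *hy = new float[N], *hErr = new float[N];
    for (int i = 0; i < N; ++i) { hx[i] = i * 1e-5f; hy[i] = hx[i] * hx[i] + 1.0f; }

    int *dProg; float *dConsts, *dx, *dy, *dErr;
    cudaMalloc(&dProg,   sizeof(hProg));
    cudaMalloc(&dConsts, sizeof(hConsts));
    cudaMalloc(&dx,   N * sizeof(float));
    cudaMalloc(&dy,   N * sizeof(float));
    cudaMalloc(&dErr, N * sizeof(float));

    // Regression points cross the PCI Express bus once; only the small
    // encoded program changes between fitness evaluations.
    cudaMemcpy(dx, hx, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, N * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dProg,   hProg,   sizeof(hProg),   cudaMemcpyHostToDevice);
    cudaMemcpy(dConsts, hConsts, sizeof(hConsts), cudaMemcpyHostToDevice);

    evalFitness<<<(N + 255) / 256, 256>>>(dProg, dConsts, 5, dx, dy, dErr, N);
    cudaMemcpy(hErr, dErr, N * sizeof(float), cudaMemcpyDeviceToHost);

    double sum = 0.0;
    for (int i = 0; i < N; ++i) sum += hErr[i];
    printf("sum of squared errors: %g\n", sum);

    cudaFree(dProg); cudaFree(dConsts); cudaFree(dx); cudaFree(dy); cudaFree(dErr);
    delete[] hx; delete[] hy; delete[] hErr;
    return 0;
}

In a full GE run the host would re-launch such a kernel (or a batched variant covering the whole population) for each candidate produced by the grammar mapping, so only the small encoded program needs to be transferred per evaluation.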

English Abstract

Several papers show that symbolic regression is suitable for data analysis and prediction in financial markets. Grammatical Evolution (GE), a grammar-based form of Genetic Programming (GP), has been successfully applied to various tasks including symbolic regression. However, the computational effort needed to calculate the fitness of a solution in GP can often limit the range of possible applications and/or the extent of experimentation undertaken. This paper deals with utilizing mainstream graphics processing units (GPUs) to accelerate GE solving symbolic regression. GPU optimization details are discussed and the NVCC compiler is analyzed. We design an effective mapping of the algorithm to the CUDA framework and, in doing so, must tackle constraints of the GPU approach such as the PCI Express bottleneck and main-memory transactions. This is the first time GE has been adapted to run on a GPU. We measure our implementation running on one core of a Core i7 CPU and on a GTX 480 GPU, and compare it with GEVA, a GE library written in Java. Results indicate that our algorithm offers the same convergence and is well suited to larger numbers of regression points, where the GPU reaches speedups of up to 39 times compared with GEVA and with serial CPU code written in C. In conclusion, properly utilized, the GPU can offer an interesting performance boost for GE tackling symbolic regression.

BibTeX


@inproceedings{BUT76469,
  author="Petr {Pospíchal} and Josef {Schwarz} and Jiří {Jaroš}",
  title="Acceleration of grammatical evolution using graphics processing units: computational intelligence on consumer games and graphics hardware",
  annote="Several papers show that symbolic regression is suitable for data analysis and
prediction in financial markets. Grammatical Evolution (GE), a grammar-based form
of Genetic Programming (GP), has been successfully applied in solving various
tasks including symbolic regression. However, often the computational effort to
calculate the fitness of a solution in GP can limit the area of possible
application and/or the extent of experimentation undertaken.  This paper deals
with utilizing mainstream graphics processing units (GPU) for acceleration of GE
solving symbolic regression. GPU optimization details are discussed and the NVCC
compiler is analyzed.  We design an effective mapping of the algorithm to the
CUDA framework, and in so doing must tackle constraints of the GPU approach, such
as the PCI-express bottleneck and main memory transactions.   This is the first
occasion GE has been adapted for running on a GPU. We measure our implementation
running on one core of CPU Core i7 and GPU GTX 480 together with a GE library
written in JAVA, GEVA.   Results indicate that our algorithm offers the same
convergence, and it is suitable for a larger number of regression points where
GPU is able to reach speedups of up to 39 times faster when compared to GEVA on
a  serial CPU code written in C. In conclusion, properly utilized, GPU can offer
an interesting performance boost for GE tackling symbolic regression. ",
  address="Association for Computing Machinery",
  booktitle="Proceedings of the 2011 GECCO conference companion on Genetic and evolutionary computation",
  chapter="76469",
  edition="NEUVEDEN",
  howpublished="print",
  institution="Association for Computing Machinery",
  year="2011",
  month="october",
  pages="431--439",
  publisher="Association for Computing Machinery",
  type="conference paper"
}