Publication detail

Strategies for Training Large Scale Neural Network Language Models

MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.

Original title

Strategies for Training Large Scale Neural Network Language Models

Type

conference paper (proceedings not indexed in WoS or Scopus)

Language

English

Original abstract

Techniques for the effective training of recurrent neural network based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.

Keywords

recurrent neural network, language model, speech recognition, maximum entropy

Authors

MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.

RIV year

2011

Published

December 11, 2011

Publisher

IEEE Signal Processing Society

Place

Hilton Waikoloa Village, Big Island, Hawaii

ISBN

978-1-4673-0366-8

Book

Proceedings of ASRU 2011

Pages from

196

Pages to

201

Number of pages

6

URL

http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf

BibTeX

@inproceedings{BUT76453,
  author="Tomáš {Mikolov} and Anoop {Deoras} and Daniel {Povey} and Lukáš {Burget} and Jan {Černocký}",
  title="Strategies for Training Large Scale Neural Network Language Models",
  booktitle="Proceedings of ASRU 2011",
  year="2011",
  pages="196--201",
  publisher="IEEE Signal Processing Society",
  address="Hilton Waikoloa Village, Big Island, Hawaii",
  isbn="978-1-4673-0366-8",
  url="http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_asru2011_00196.pdf"
}