Publication detail

Strategies for Training Large Scale Neural Network Language Models

MIKOLOV, T.; DEORAS, A.; POVEY, D.; BURGET, L.; ČERNOCKÝ, J.

Original Title

Strategies for Training Large Scale Neural Network Language Models

English Title

Strategies for Training Large Scale Neural Network Language Models

Type

conference paper

Language

en

Original Abstract

Techniques for the effective training of recurrent neural network based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.

English abstract

Techniques for the effective training of recurrent neural network based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.

Keywords

recurrent neural network, language model, speech recognition, maximum entropy

RIV year

2011

Released

11.12.2011

Publisher

IEEE Signal Processing Society

Location

Hilton Waikoloa Village, Big Island, Hawaii

ISBN

978-1-4673-0366-8

Book

Proceedings of ASRU 2011

Edition

Not stated

Edition number

Not stated

Pages from

196

Pages to

201

Pages count

6

Documents

BibTeX


@inproceedings{BUT76453,
  author="Tomáš {Mikolov} and Anoop {Deoras} and Daniel {Povey} and Lukáš {Burget} and Jan {Černocký}",
  title="Strategies for Training Large Scale Neural Network Language Models",
  annote="Techniques for the effective training of recurrent neural network based
language models are described, and new state-of-the-art results on a standard
speech recognition task are reported.",
  address="Hilton Waikoloa Village, Big Island, Hawaii",
  booktitle="Proceedings of ASRU 2011",
  chapter="76453",
  edition="NEUVEDEN",
  howpublished="print",
  institution="IEEE Signal Processing Society",
  year="2011",
  month="dec",
  pages="196--201",
  publisher="IEEE Signal Processing Society",
  type="conference paper"
}