Publication Detail

The 2005 AMI System for the Transcription of Speech in Meetings

HAIN, T.; BURGET, L.; DINES, J.; GARAU, G.; KARAFIÁT, M.; LINCOLN, M.; MCCOWAN, I.; MOORE, D.; WAN, V.; ORDELMAN, R.; RENALS, S.

Original Title

The 2005 AMI System for the Transcription of Speech in Meetings

English Title

The 2005 AMI System for the Transcription of Speech in Meetings

Language

en

Original Abstract

In this paper we describe the 2005 AMI system for the transcription of speech in meetings, used for participation in the 2005 NIST RT evaluations. The system was designed for the speech-to-text part of the evaluations, in particular for the transcription of speech recorded with multiple distant microphones and with independent headset microphones. System performance was tested on both conference-room and lecture-style meetings. Although the input sources are processed using different front-ends, the recognition process is based on a unified system architecture. The system operates in multiple passes and makes use of state-of-the-art technologies such as discriminative training, vocal tract length normalisation, heteroscedastic linear discriminant analysis, speaker adaptation with maximum likelihood linear regression, and minimum word error rate decoding. We describe system performance on the official development and test sets for the NIST RT05s evaluations. The system was jointly developed in less than 10 months by a multi-site team and was shown to achieve very competitive performance.

English Abstract

In this paper we describe the 2005 AMI system for the transcription of speech in meetings, used for participation in the 2005 NIST RT evaluations. The system was designed for the speech-to-text part of the evaluations, in particular for the transcription of speech recorded with multiple distant microphones and with independent headset microphones. System performance was tested on both conference-room and lecture-style meetings. Although the input sources are processed using different front-ends, the recognition process is based on a unified system architecture. The system operates in multiple passes and makes use of state-of-the-art technologies such as discriminative training, vocal tract length normalisation, heteroscedastic linear discriminant analysis, speaker adaptation with maximum likelihood linear regression, and minimum word error rate decoding. We describe system performance on the official development and test sets for the NIST RT05s evaluations. The system was jointly developed in less than 10 months by a multi-site team and was shown to achieve very competitive performance.

Documents

BibTeX


@inproceedings{BUT18267,
  author="Thomas {Hain} and Lukáš {Burget} and John {Dines} and Giulia {Garau} and Martin {Karafiát} and Mike {Lincoln} and Iain {McCowan} and Darren {Moore} and Vincent {Wan} and Roeland {Ordelman} and Steve {Renals}",
  title="The 2005 AMI System for the Transcription of Speech in Meetings",
  annote="In this paper we describe the 2005 AMI system for the transcription of speech in meetings, used for participation in the 2005 NIST RT evaluations. The system was designed for the speech-to-text part of the evaluations, in particular for the transcription of speech recorded with multiple distant microphones and with independent headset microphones. System performance was tested on both conference-room and lecture-style meetings. Although the input sources are processed using different front-ends, the recognition process is based on a unified system architecture. The system operates in multiple passes and makes use of state-of-the-art technologies such as discriminative training, vocal tract length normalisation, heteroscedastic linear discriminant analysis, speaker adaptation with maximum likelihood linear regression, and minimum word error rate decoding. We describe system performance on the official development and test sets for the NIST RT05s evaluations. The system was jointly developed in less than 10 months by a multi-site team and was shown to achieve very competitive performance.",
  address="University of Edinburgh",
  booktitle="Machine Learning for Multimodal Interaction, Second International Workshop, MLMI 2005, Edinburgh, UK, July 11-13, 2005, Revised Selected Papers",
  chapter="18267",
  edition="Lecture Notes in Computer Science Volume 3869, Springer 2006",
  institution="University of Edinburgh",
  year="2005",
  month="july",
  pages="450--462",
  publisher="University of Edinburgh",
  type="conference paper"
}