Publication detail

Fast variational Bayes for heavy-tailed PLDA applied to i-vectors and x-vectors

SILNOVA, A.; BRUMMER, J.; GARCÍA-ROMERO, D.; SNYDER, D.; BURGET, L.

Original Title

Fast variational Bayes for heavy-tailed PLDA applied to i-vectors and x-vectors

Type

conference paper

Language

English

Original Abstract

The standard state-of-the-art backend for text-independent speaker recognizers that use i-vectors or x-vectors is Gaussian PLDA (G-PLDA), assisted by a Gaussianization step involving length normalization. G-PLDA can be trained with either generative or discriminative methods. It has long been known that heavy-tailed PLDA (HT-PLDA), applied without length normalization, gives similar accuracy, but at considerable extra computational cost. We have recently introduced a fast scoring algorithm for a discriminatively trained HT-PLDA backend. This paper extends that work by introducing a fast, variational Bayes, generative training algorithm. We compare the old and new backends, with and without length normalization, with i-vectors and x-vectors, on SRE10, SRE16 and SITW.
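
For readers unfamiliar with heavy-tailed PLDA, the sketch below simulates embeddings from one common HT-PLDA formulation, in which heavy tails arise from Gamma-distributed precision scalings of an otherwise Gaussian model (the construction that keeps variational inference tractable). The dimensions, parameter values, and the choice to make only the within-speaker noise heavy-tailed are illustrative assumptions, not the paper's exact parametrization or its fast VB training recipe.

# Minimal simulation sketch of a heavy-tailed PLDA generative model
# (hypothetical parameter values; illustrates the model family discussed
# in the abstract, not the authors' specific algorithm).
import numpy as np

rng = np.random.default_rng(0)

D, d = 100, 10          # embedding dim (e.g. projected x-vector), speaker-factor dim
nu = 3.0                # degrees of freedom: small nu -> heavier tails
mu = np.zeros(D)        # global mean
F = rng.standard_normal((D, d)) * 0.5   # speaker loading matrix
W = np.eye(D)           # within-speaker precision (identity for simplicity)

def sample_speaker_embeddings(n_sessions):
    """Draw n_sessions embeddings for one speaker under this HT-PLDA sketch.

    Conditioned on the Gamma-distributed precision scalings lam, the model
    is Gaussian; marginally, the within-speaker noise is Student's t.
    """
    z = rng.standard_normal(d)                              # speaker identity variable
    lam = rng.gamma(nu / 2.0, 2.0 / nu, size=n_sessions)    # precision scalings, mean 1
    noise = rng.multivariate_normal(np.zeros(D), np.linalg.inv(W), size=n_sessions)
    noise /= np.sqrt(lam)[:, None]                          # scaled noise -> heavy tails
    return mu + z @ F.T + noise

X = sample_speaker_embeddings(5)
print(X.shape)   # (5, 100)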

Keywords

speaker recognition, variational Bayes, heavy-tailed PLDA

Authors

SILNOVA, A.; BRUMMER, J.; GARCÍA-ROMERO, D.; SNYDER, D.; BURGET, L.

Released

2. 9. 2018

Publisher

International Speech Communication Association

Location

Hyderabad

ISSN

1990-9772

Periodical

Proceedings of Interspeech

Volume

2018

Number

9

Country

French Republic

Pages from

72

Pages to

76

Pages count

5

URL

https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2128.html

BibTeX

@inproceedings{BUT155098,
  author="SILNOVA, A. and BRUMMER, J. and GARCÍA-ROMERO, D. and SNYDER, D. and BURGET, L.",
  title="Fast variational Bayes for heavy-tailed PLDA applied to i-vectors and x-vectors",
  booktitle="Proceedings of Interspeech 2018",
  year="2018",
  journal="Proceedings of Interspeech",
  volume="2018",
  number="9",
  pages="72--76",
  publisher="International Speech Communication Association",
  address="Hyderabad",
  doi="10.21437/Interspeech.2018-2128",
  issn="1990-9772",
  url="https://www.isca-speech.org/archive/Interspeech_2018/abstracts/2128.html"
}