Ing. Ladislav Mošner

FIT, UPGM – Researcher

+420 54114 1271
imosner@fit.vut.cz

Publications

  • 2023

    KAKOUROS, S.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L. Speech-Based Emotion Recognition with Self-Supervised Models Using Attentive Channel-Wise Correlations and Label Smoothing. In Proceedings of ICASSP 2023. Rhodes Island: IEEE Signal Processing Society, 2023. pp. 1-5. ISBN: 978-1-7281-6327-7.

    STAFYLAKIS, T.; MOŠNER, L.; KAKOUROS, S.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Extracting speaker and emotion information from self-supervised speech models via channel-wise correlations. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. pp. 1136-1143. ISBN: 978-1-6654-7189-3.

    SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; KLČO, M.; PLCHOT, O.; MATĚJKA, P.; PENG, J.; STAFYLAKIS, T.; BURGET, L. ABC System Description for NIST LRE 2022. Proceedings of NIST LRE 2022 Workshop. Washington DC: National Institute of Standards and Technology, 2023. pp. 1-5.

    PENG, J.; STAFYLAKIS, T.; GU, R.; PLCHOT, O.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Parameter-Efficient Transfer Learning of Pre-Trained Transformer Models for Speaker Verification Using Adapters. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Rhodes Island: IEEE Signal Processing Society, 2023. pp. 1-5. ISBN: 978-1-7281-6327-7.

    PENG, J.; PLCHOT, O.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. An attention-based backend allowing efficient fine-tuning of transformer models for speaker verification. In 2022 IEEE Spoken Language Technology Workshop, SLT 2022 - Proceedings. Doha: IEEE Signal Processing Society, 2023. pp. 555-562. ISBN: 978-1-6654-7189-3.

    MOŠNER, L.; PLCHOT, O.; PENG, J.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speech Separation with Cross-Attention and Beamforming. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Dublin: International Speech Communication Association, 2023. pp. 1693-1697. ISSN: 1990-9772.

    MATĚJKA, P.; SILNOVA, A.; SLAVÍČEK, J.; MOŠNER, L.; PLCHOT, O.; KLČO, M.; PENG, J.; STAFYLAKIS, T.; BURGET, L. Description and Analysis of ABC Submission to NIST LRE 2022. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Dublin: International Speech Communication Association, 2023. pp. 511-515. ISSN: 1990-9772.

    PENG, J.; PLCHOT, O.; STAFYLAKIS, T.; MOŠNER, L.; BURGET, L.; ČERNOCKÝ, J. Improving Speaker Verification with Self-Pretrained Transformer Models. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Dublin: International Speech Communication Association, 2023. pp. 5361-5365. ISSN: 1990-9772.

  • 2022

    MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. MultiSV: Dataset for Far-Field Multi-Channel Speaker Verification. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. pp. 7977-7981. ISBN: 978-1-6654-0540-9.

    ŠVEC, J.; ŽMOLÍKOVÁ, K.; KOCOUR, M.; DELCROIX, M.; OCHIAI, T.; MOŠNER, L.; ČERNOCKÝ, J. Analysis of impact of emotions on target speech extraction and speech separation. In Proceedings of The 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. pp. 1-5. ISBN: 978-1-6654-6867-1.

    PENG, J.; GU, R.; MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Learnable Sparse Filterbank for Speaker Verification. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Incheon: International Speech Communication Association, 2022. pp. 5110-5114. ISSN: 1990-9772.

    MOŠNER, L.; PLCHOT, O.; BURGET, L.; ČERNOCKÝ, J. Multi-Channel Speaker Verification with Conv-TasNet Based Beamformer. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Singapore: IEEE Signal Processing Society, 2022. pp. 7982-7986. ISBN: 978-1-6654-0540-9.

    SILNOVA, A.; STAFYLAKIS, T.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; MATĚJKA, P.; BURGET, L.; GLEMBEK, O.; BRUMMER, J. Analyzing speaker verification embedding extractors and back-ends under language and channel mismatch. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. pp. 9-16.

    BRUMMER, J.; SWART, A.; MOŠNER, L.; SILNOVA, A.; PLCHOT, O.; STAFYLAKIS, T.; BURGET, L. Probabilistic Spherical Discriminant Analysis: An Alternative to PLDA for length-normalized embeddings. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Incheon: International Speech Communication Association, 2022. pp. 1446-1450. ISSN: 1990-9772.

    ALAM, J.; BURGET, L.; GLEMBEK, O.; MATĚJKA, P.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; STAFYLAKIS, T. Development of ABC systems for the 2021 edition of NIST Speaker Recognition evaluation. Proceedings of The Speaker and Language Recognition Workshop (Odyssey 2022). Beijing: International Speech Communication Association, 2022. pp. 346-353.

    STAFYLAKIS, T.; MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; BURGET, L.; ČERNOCKÝ, J. Training Speaker Embedding Extractors Using Multi-Speaker Audio with Unknown Speaker Boundaries. In Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH. Incheon: International Speech Communication Association, 2022. pp. 605-609. ISSN: 1990-9772.

  • 2020

    ALAM, J.; BOULIANNE, G.; BURGET, L.; DAHMANE, M.; DIEZ SÁNCHEZ, M.; GLEMBEK, O.; LALONDE, M.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MOŠNER, L.; NOISEUX, C.; MONTEIRO, J.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; ST-CHARLES, P.; WANG, S.; ZEINALI, H. Analysis of ABC Submission to NIST SRE 2019 CMN and VAST Challenge. In Proceedings of Odyssey 2020 The Speaker and Language Recognition Workshop. Tokyo: International Speech Communication Association, 2020. pp. 289-295. ISSN: 2312-2846.

    MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; ČERNOCKÝ, J. Utilizing VOiCES dataset for multichannel speaker verification with beamforming. Proceedings of Odyssey 2020 The Speaker and Language Recognition Workshop. Tokyo: International Speech Communication Association, 2020. pp. 187-193. ISSN: 2312-2846.

    ŽMOLÍKOVÁ, K.; KOCOUR, M.; LANDINI, F.; BENEŠ, K.; KARAFIÁT, M.; VYDANA, H.; LOZANO DÍEZ, A.; PLCHOT, O.; BASKAR, M.; ŠVEC, J.; MOŠNER, L.; MALENOVSKÝ, V.; BURGET, L.; YUSUF, B.; NOVOTNÝ, O.; GRÉZL, F.; SZŐKE, I.; ČERNOCKÝ, J. BUT System for CHiME-6 Challenge. Proceedings of CHiME 2020 Virtual Workshop. Barcelona: University of Sheffield, 2020. pp. 1-3.

    LANDINI, F.; WANG, S.; DIEZ SÁNCHEZ, M.; BURGET, L.; MATĚJKA, P.; ŽMOLÍKOVÁ, K.; MOŠNER, L.; SILNOVA, A.; PLCHOT, O.; NOVOTNÝ, O.; ZEINALI, H.; ROHDIN, J. BUT System for the Second DIHARD Speech Diarization Challenge. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. pp. 6529-6533. ISBN: 978-1-5090-6631-5.

    MATĚJKA, P.; PLCHOT, O.; GLEMBEK, O.; BURGET, L.; ROHDIN, J.; ZEINALI, H.; MOŠNER, L.; SILNOVA, A.; NOVOTNÝ, O.; DIEZ SÁNCHEZ, M.; ČERNOCKÝ, J. 13 years of speaker recognition research at BUT, with longitudinal analysis of NIST SRE. COMPUTER SPEECH AND LANGUAGE, 2020, vol. 63, pp. 1-15. ISSN: 0885-2308.

  • 2019

    ALAM, J.; BOULIANNE, G.; GLEMBEK, O.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MONTEIRO, J.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; WANG, S.; ZEINALI, H. ABC NIST SRE 2019 CTS System Description. Proceedings of NIST. Sentosa, Singapore: National Institute of Standards and Technology, 2019. pp. 1-6.

    MOŠNER, L.; PLCHOT, O.; ROHDIN, J.; BURGET, L.; ČERNOCKÝ, J. Speaker Verification with Application-Aware Beamforming. In IEEE Automatic Speech Recognition and Understanding Workshop - Proceedings (ASRU). Sentosa, Singapore: IEEE Signal Processing Society, 2019. pp. 411-418. ISBN: 978-1-7281-0306-8.

    SZŐKE, I.; SKÁCEL, M.; MOŠNER, L.; PALIESEK, J.; ČERNOCKÝ, J. Building and Evaluation of a Real Room Impulse Response Dataset. IEEE J-STSP, 2019, vol. 13, no. 4, pp. 863-876. ISSN: 1932-4553.

    MATĚJKA, P.; PLCHOT, O.; ZEINALI, H.; MOŠNER, L.; SILNOVA, A.; BURGET, L.; NOVOTNÝ, O.; GLEMBEK, O. Analysis of BUT Submission in Far-Field Scenarios of VOiCES 2019 Challenge. In Proceedings of Interspeech. Graz: International Speech Communication Association, 2019. pp. 2448-2452. ISSN: 1990-9772.

    MOŠNER, L.; WU, M.; RAJU, A.; PARTHASARATHI, S.; KUMATANI, K.; SUNDARAM, S.; MAAS, R.; HOFFMEISTER, B. Improving Noise Robustness of Automatic Speech Recognition via Parallel Data and Teacher-student Learning. In Proceedings of ICASSP. Brighton: IEEE Signal Processing Society, 2019. pp. 6475-6479. ISBN: 978-1-5386-4658-8.

    ALAM, J.; BOULIANNE, G.; BURGET, L.; GLEMBEK, O.; LOZANO DÍEZ, A.; MATĚJKA, P.; MIZERA, P.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; WANG, S.; ZEINALI, H.; DAHMANE, M.; ST-CHARLES, P.; LALONDE, M.; NOISEUX, C.; MONTEIRO, J. ABC System Description for NIST Multimedia Speaker Recognition Evaluation 2019. Proceedings of NIST 2019 SRE Workshop. Sentosa, Singapore: National Institute of Standards and Technology, 2019. pp. 1-7.

  • 2018

    MOŠNER, L.; MATĚJKA, P.; NOVOTNÝ, O.; ČERNOCKÝ, J. Dereverberation and Beamforming in Far-Field Speaker Recognition. In Proceedings of ICASSP 2018. Calgary: IEEE Signal Processing Society, 2018. pp. 5254-5258. ISBN: 978-1-5386-4658-8.

    ALAM, J.; BHATTACHARYA, G.; BRUMMER, J.; BURGET, L.; DIEZ SÁNCHEZ, M.; GLEMBEK, O.; KENNY, P.; KLČO, M.; LANDINI, F.; LOZANO DÍEZ, A.; MATĚJKA, P.; MONTEIRO, J.; MOŠNER, L.; NOVOTNÝ, O.; PLCHOT, O.; PROFANT, J.; ROHDIN, J.; SILNOVA, A.; SLAVÍČEK, J.; STAFYLAKIS, T.; ZEINALI, H. ABC NIST SRE 2018 System Description. Proceedings of 2018 NIST SRE Workshop. Athens: National Institute of Standards and Technology, 2018. pp. 1-10.

    MOŠNER, L.; PLCHOT, O.; MATĚJKA, P.; NOVOTNÝ, O.; ČERNOCKÝ, J. Dereverberation and Beamforming in Robust Far-Field Speaker Recognition. In Proceedings of Interspeech 2018. Hyderabad: International Speech Communication Association, 2018. pp. 1334-1338. ISSN: 1990-9772.

    DIEZ SÁNCHEZ, M.; LANDINI, F.; BURGET, L.; ROHDIN, J.; SILNOVA, A.; ŽMOLÍKOVÁ, K.; NOVOTNÝ, O.; VESELÝ, K.; GLEMBEK, O.; PLCHOT, O.; MOŠNER, L.; MATĚJKA, P. BUT system for DIHARD Speech Diarization Challenge 2018. In Proceedings of Interspeech 2018. Hyderabad: International Speech Communication Association, 2018. pp. 2798-2802. ISSN: 1990-9772.

    NOVOTNÝ, O.; PLCHOT, O.; MATĚJKA, P.; MOŠNER, L.; GLEMBEK, O. On the use of X-vectors for Robust Speaker Recognition. Proceedings of Odyssey 2018. Les Sables d´Olonne: International Speech Communication Association, 2018. pp. 168-175. ISSN: 2312-2846.

*) Publication citations are regenerated once every 24 hours.