Ing. Kateřina Žmolíková, Ph.D.

FIT, VZ SPEECH – member

izmolikova@fit.vut.cz

Publications

  • 2023

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; OCHIAI, T.; ČERNOCKÝ, J.; KINOSHITA, K.; YU, D. Neural Target Speech Extraction: An overview. IEEE Signal Processing Magazine, 2023, vol. 40, no. 3, pp. 8-29. ISSN: 1558-0792.

  • 2022

    KOCOUR, M.; ŽMOLÍKOVÁ, K.; ONDEL YANG, L.; ŠVEC, J.; DELCROIX, M.; OCHIAI, T.; BURGET, L.; ČERNOCKÝ, J. Revisiting joint decoding based multi-talker speech recognition with DNN acoustic model. In Proceedings of Interspeech 2022. Incheon: International Speech Communication Association, 2022. pp. 4955-4959. ISSN: 1990-9772.

    ŠVEC, J.; ŽMOLÍKOVÁ, K.; KOCOUR, M.; DELCROIX, M.; OCHIAI, T.; MOŠNER, L.; ČERNOCKÝ, J. Analysis of impact of emotions on target speech extraction and speech separation. In Proceedings of the 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. pp. 1-5. ISBN: 978-1-6654-6867-1.

    DE BENITO GORRON, D.; ŽMOLÍKOVÁ, K.; TORRE TOLEDANO, D. Source Separation for Sound Event Detection in domestic environments using jointly trained models. In Proceedings of the 17th International Workshop on Acoustic Signal Enhancement (IWAENC 2022). Bamberg: IEEE Signal Processing Society, 2022. pp. 1-5. ISBN: 978-1-6654-6867-1.

    DELCROIX, M.; KINOSHITA, K.; OCHIAI, T.; ŽMOLÍKOVÁ, K.; SATO, H.; NAKATANI, T. Listen only to me! How well can target speech extraction handle false alarms? In Proceedings of Interspeech 2022. Incheon: International Speech Communication Association, 2022. pp. 216-220. ISSN: 1990-9772.

  • 2021

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; RAJ, D.; WATANABE, S.; ČERNOCKÝ, J. Auxiliary Loss Function for Target Speech Extraction and Recognition with Weak Supervision Based on Speaker Characteristics. In Proceedings of Interspeech 2021. Brno: International Speech Communication Association, 2021. pp. 1464-1468. ISSN: 1990-9772.

    VYDANA, H.; KARAFIÁT, M.; ŽMOLÍKOVÁ, K.; BURGET, L.; ČERNOCKÝ, J. Jointly Trained Transformers Models for Spoken Language Translation. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toronto, Ontario: IEEE Signal Processing Society, 2021. pp. 7513-7517. ISBN: 978-1-7281-7605-5.

    LANDINI, F.; LOZANO DÍEZ, A.; BURGET, L.; DIEZ SÁNCHEZ, M.; SILNOVA, A.; ŽMOLÍKOVÁ, K.; GLEMBEK, O.; MATĚJKA, P.; STAFYLAKIS, T.; BRUMMER, J. BUT System Description for the Third DIHARD Speech Diarization Challenge. Proceedings available at the DIHARD Challenge GitHub. Online: LDC and University of Pennsylvania, 2021. pp. 1-5.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; BURGET, L.; NAKATANI, T.; ČERNOCKÝ, J. Integration of Variational Autoencoder and Spatial Clustering for Adaptive Multi-Channel Neural Speech Separation. In 2021 IEEE Spoken Language Technology Workshop, SLT 2021 - Proceedings. Shenzhen (virtual): IEEE Signal Processing Society, 2021. pp. 889-896. ISBN: 978-1-7281-7066-4.

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; NAKATANI, T. Speaker activity driven neural speech extraction. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Toronto: IEEE Signal Processing Society, 2021. pp. 6099-6103. ISBN: 978-1-7281-7605-5.

  • 2020

    LANDINI, F.; WANG, S.; DIEZ SÁNCHEZ, M.; BURGET, L.; MATĚJKA, P.; ŽMOLÍKOVÁ, K.; MOŠNER, L.; SILNOVA, A.; PLCHOT, O.; NOVOTNÝ, O.; ZEINALI, H.; ROHDIN, J. BUT System for the Second DIHARD Speech Diarization Challenge. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. pp. 6529-6533. ISBN: 978-1-5090-6631-5.

    DELCROIX, M.; OCHIAI, T.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; TAWARA, N.; NAKATANI, T.; ARAKI, S. Improving Speaker Discrimination of Target Speech Extraction With Time-Domain SpeakerBeam. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. Barcelona: IEEE Signal Processing Society, 2020. pp. 691-695. ISBN: 978-1-5090-6631-5.

    ŽMOLÍKOVÁ, K.; KOCOUR, M.; LANDINI, F.; BENEŠ, K.; KARAFIÁT, M.; VYDANA, H.; LOZANO DÍEZ, A.; PLCHOT, O.; BASKAR, M.; ŠVEC, J.; MOŠNER, L.; MALENOVSKÝ, V.; BURGET, L.; YUSUF, B.; NOVOTNÝ, O.; GRÉZL, F.; SZŐKE, I.; ČERNOCKÝ, J. BUT System for CHiME-6 Challenge. In Proceedings of the CHiME 2020 Virtual Workshop. Barcelona: University of Sheffield, 2020. pp. 1-3.

  • 2019

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; ARAKI, S.; NAKATANI, T. Evaluation of SpeakerBeam target speech extraction in real noisy and reverberant conditions. The Journal of the Acoustical Society of Japan, 2019, vol. 2019, no. 2, pp. 1-2. ISSN: 0369-4232.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; OCHIAI, T.; NAKATANI, T.; BURGET, L.; ČERNOCKÝ, J. SpeakerBeam: Speaker Aware Neural Network for Target Speaker Extraction in Speech Mixtures. IEEE Journal of Selected Topics in Signal Processing, 2019, vol. 13, no. 4, pp. 800-814. ISSN: 1932-4553.

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; OCHIAI, T.; KINOSHITA, K.; ARAKI, S.; NAKATANI, T. Compact Network for SpeakerBeam Target Speaker Extraction. In Proceedings of ICASSP 2019. Brighton: IEEE Signal Processing Society, 2019. pp. 6965-6969. ISBN: 978-1-5386-4658-8.

  • 2018

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; ARAKI, S.; OGAWA, A.; NAKATANI, T. SpeakerBeam: A New Deep Learning Technology for Extracting Speech of a Target Speaker Based on the Speaker's Voice Characteristics. NTT Technical Review, 2018, vol. 16, no. 11, pp. 19-24. ISSN: 1348-3447.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; NAKATANI, T.; ČERNOCKÝ, J. Optimization of Speaker-aware Multichannel Speech Extraction with ASR Criterion. In Proceedings of ICASSP 2018. Calgary: IEEE Signal Processing Society, 2018. pp. 6702-6706. ISBN: 978-1-5386-4658-8.

    DIEZ SÁNCHEZ, M.; LANDINI, F.; BURGET, L.; ROHDIN, J.; SILNOVA, A.; ŽMOLÍKOVÁ, K.; NOVOTNÝ, O.; VESELÝ, K.; GLEMBEK, O.; PLCHOT, O.; MOŠNER, L.; MATĚJKA, P. BUT system for DIHARD Speech Diarization Challenge 2018. In Proceedings of Interspeech 2018. Hyderabad: International Speech Communication Association, 2018. pp. 2798-2802. ISSN: 1990-9772.

    DELCROIX, M.; ŽMOLÍKOVÁ, K.; KINOSHITA, K.; OGAWA, A.; NAKATANI, T. Single Channel Target Speaker Extraction and Recognition with SpeakerBeam. In Proceedings of ICASSP 2018. Calgary: IEEE Signal Processing Society, 2018. pp. 5554-5558. ISBN: 978-1-5386-4658-8.

  • 2017

    HIGUCHI, T.; KINOSHITA, K.; DELCROIX, M.; ŽMOLÍKOVÁ, K.; NAKATANI, T. Deep clustering-based beamforming for separation with unknown number of sources. In Proceedings of Interspeech 2017. Stockholm: International Speech Communication Association, 2017. pp. 1183-1187. ISSN: 1990-9772.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; OGAWA, A.; NAKATANI, T. Learning Speaker Representation for Neural Network Based Multichannel Speaker Extraction. In Proceedings of ASRU 2017. Okinawa: IEEE Signal Processing Society, 2017. pp. 8-15. ISBN: 978-1-5090-4788-8.

    ŽMOLÍKOVÁ, K.; DELCROIX, M.; KINOSHITA, K.; HIGUCHI, T.; OGAWA, A.; NAKATANI, T. Speaker-aware neural network based beamformer for speaker extraction in speech mixtures. In Proceedings of Interspeech 2017. Stockholm: International Speech Communication Association, 2017. pp. 2655-2659. ISSN: 1990-9772.

    KARAFIÁT, M.; VESELÝ, K.; ŽMOLÍKOVÁ, K.; DELCROIX, M.; WATANABE, S.; BURGET, L.; ČERNOCKÝ, J.; SZŐKE, I. Training Data Augmentation and Data Selection. In New Era for Robust Speech Recognition: Exploiting Deep Learning. Heidelberg: Springer International Publishing, 2017. pp. 245-260. ISBN: 978-3-319-64679-4.

  • 2016

    ŽMOLÍKOVÁ, K.; KARAFIÁT, M.; VESELÝ, K.; DELCROIX, M.; WATANABE, S.; BURGET, L.; ČERNOCKÝ, J. Data selection by sequence summarizing neural network in mismatch condition training. In Proceedings of Interspeech 2016. San Francisco: International Speech Communication Association, 2016. pp. 2354-2358. ISBN: 978-1-5108-3313-5.

    VESELÝ, K.; WATANABE, S.; ŽMOLÍKOVÁ, K.; KARAFIÁT, M.; BURGET, L.; ČERNOCKÝ, J. Sequence Summarizing Neural Network for Speaker Adaptation. In Proceedings of the 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2016). Shanghai: IEEE Signal Processing Society, 2016. pp. 5315-5319. ISBN: 978-1-4799-9988-0.

*) Publication citations are regenerated once every 24 hours.