Publication Detail

BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models

FAJČÍK, M.; DOČEKAL, M.; JON, J.; SMRŽ, P.

Original Title

BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models

Type

paper in proceedings indexed in WoS or Scopus

Language

English

Original Abstract

This paper describes BUT-FIT's submission to SemEval-2020 Task 5: Modelling Causal Reasoning in Language: Detecting Counterfactuals. The challenge focused on detecting whether a given statement contains a counterfactual (Subtask 1) and on extracting both the antecedent and consequent parts of the counterfactual from the text (Subtask 2). We experimented with various state-of-the-art language representation models (LRMs) and found RoBERTa to perform best in both subtasks. We achieved first place in both exact match and F1 for Subtask 2 and ranked second for Subtask 1.
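
For illustration only, below is a minimal sketch of how Subtask 1 (binary detection of counterfactual statements) could be framed with a pre-trained RoBERTa encoder via the Hugging Face transformers library. This is not the authors' exact pipeline; the model name ("roberta-large"), the label mapping, and the omission of fine-tuning code are assumptions made for brevity.

# Hypothetical sketch: RoBERTa as a binary classifier for counterfactual detection.
# The classification head is randomly initialized here; in practice it would first
# be fine-tuned on the Subtask 1 training data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large"  # assumed choice; the paper compares several LRMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def is_counterfactual(statement: str) -> bool:
    """Return True if the statement is classified as counterfactual (label 1, assumed)."""
    inputs = tokenizer(statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, 2)
    return logits.argmax(dim=-1).item() == 1

print(is_counterfactual("If I had left earlier, I would have caught the train."))

Subtask 2 (extracting the antecedent and consequent) can be cast analogously as span prediction over the same encoder, predicting start and end token offsets for each part.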

Keywords

counterfactual, counterfactual reasoning, BERT, RoBERTa, ALBERT, causal reasoning, what-if, semeval, classification, extraction

Authors

FAJČÍK, M.; DOČEKAL, M.; JON, J.; SMRŽ, P.

Published

16 December 2020

Publisher

Association for Computational Linguistics

Location

Barcelona (online)

ISBN

978-1-952148-31-6

Book

Proceedings of the Fourteenth Workshop on Semantic Evaluation

Pages from

437

Pages to

444

Number of pages

8

URL

https://www.aclweb.org/anthology/2020.semeval-1.53/

BibTeX

@inproceedings{BUT168151,
  author="Martin {Fajčík} and Martin {Dočekal} and Josef {Jon} and Pavel {Smrž}",
  title="BUT-FIT at SemEval-2020 Task 5: Automatic detection of counterfactual statements with deep pre-trained language representation models",
  booktitle="Proceedings of the Fourteenth Workshop on Semantic Evaluation",
  year="2020",
  pages="437--444",
  publisher="Association for Computational Linguistics",
  address="Barcelona (online)",
  isbn="978-1-952148-31-6",
  url="https://www.aclweb.org/anthology/2020.semeval-1.53/"
}