Publication Detail

The AMI Meeting Corpus: A Pre-Announcement

ASHBY, S., BOURBAN, S., CARLETTA, J., FLYNN, M., GUILLEMOT, M., HAIN, T., KARAISKOS, V., KRAAIJ, W., KRONENTHAL, M., LATHOUD, G., LINCOLN, M., LISOWSKA, A., MCCOWAN, I., POST, W., REIDSMA, D., WELLNER, P., KADLEC, J.

Original Title

The AMI Meeting Corpus: A Pre-Announcement

English Title

The AMI Meeting Corpus: A Pre-Announcement

Language

en

Original Abstract

The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.

English Abstract

The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.

Documents

BibTeX


@inproceedings{BUT18280,
  author="Simone {Ashby} and Sebastien {Bourban} and Jean {Carletta} and Mike {Flynn} and Mael {Guillemot} and Thomas {Hain} and Vasilis {Karaiskos} and Wessel {Kraaij} and Melissa {Kronenthal} and Guillaume {Lathoud} and Mike {Lincoln} and Agnes {Lisowska} and Iain {McCowan} and Wilfried {Post} and Dennis {Reidsma} and Pierre {Wellner} and Jaroslav {Kadlec}",
  title="The AMI Meeting Corpus: A Pre-Announcement",
  annote="The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It is being created in the context of a project that is developing meeting browsing technology and will eventually be released publicly. Some of the meetings it contains are naturally occurring, and some are elicited, particularly using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The corpus is being recorded using a wide range of devices including close-talking and far-field microphones, individual and room-view video cameras, projection, a whiteboard, and individual pens, all of which produce output signals that are synchronized with each other. It is also being hand-annotated for many different phenomena, including orthographic transcription, discourse properties such as named entities and dialogue acts, summaries, emotions, and some head and hand gestures. We describe the data set, including the rationale behind using elicited material, and explain how the material is being recorded, transcribed and annotated.",
  booktitle="Workshop on Multimodal Interaction and Related Machine Learning Algorithms (MLMI)",
  chapter="18280",
  year="2005",
  month="July",
  pages="1",
  type="conference paper"
}