Publication detail
TRECVID 2007 by the Brno Group
HEROUT, A.; BERAN, V.; HRADIŠ, M.; POTÚČEK, I.; ZEMČÍK, P.; CHMELAŘ, P.
Original Title
TRECVID 2007 by the Brno Group
English Title
TRECVID 2007 by the Brno Group
Type
conference paper
Language
en
Original Abstract
High Level Feature Extraction
1. The runs:
- A_brU_1 - features extracted from each frame; an SVM per-frame classifier trained on the frames in each shot; a simple decision tree judging shots based on the per-frame results
- A_brV_2 - same as A_brU_1, but with the SVM trained on all training data (the first run divided the training data into training and cross-validation datasets), using the SVM configuration from the previous run
2. Significant differences between the runs:
- As expected, the second run performed generally better, in some cases notably better (which is slightly surprising, because apart from the amount of training data nothing was changed)
3. Contribution of each component:
- The low-level features appear to be good enough, though their number is relatively large; given more time we would experiment with reducing the feature vector size (currently 572 low-level features)
- We considered using some mid-level features based on existing solutions the group has, such as face detection, car detection, etc., but due to time constraints did not include them in the feature vector
- The per-frame classification seems to suffer greatly from mis-annotated frames (whole shots are considered to share the same annotation in our system) and could be the weakest point of the system
- The per-shot decision making seems to be sufficient given the data coming from the per-frame classification
4. Overall comments:
- see further in the paper

Shot Boundary Detection
We describe our approach to cut detection, in which we use the AdaBoost algorithm to build a detection classifier from a large set of features based on a few simple frame distance measures. First, we explain the reasons that led us to use AdaBoost, then we describe the feature set and discuss the achieved results. Finally, we present possible future improvements to the current approach.
English abstract
High Level Feature Extraction
1. The runs:
- A_brU_1 - features extracted from each frame; an SVM per-frame classifier trained on the frames in each shot; a simple decision tree judging shots based on the per-frame results
- A_brV_2 - same as A_brU_1, but with the SVM trained on all training data (the first run divided the training data into training and cross-validation datasets), using the SVM configuration from the previous run
2. Significant differences between the runs:
- As expected, the second run performed generally better, in some cases notably better (which is slightly surprising, because apart from the amount of training data nothing was changed)
3. Contribution of each component:
- The low-level features appear to be good enough, though their number is relatively large; given more time we would experiment with reducing the feature vector size (currently 572 low-level features)
- We considered using some mid-level features based on existing solutions the group has, such as face detection, car detection, etc., but due to time constraints did not include them in the feature vector
- The per-frame classification seems to suffer greatly from mis-annotated frames (whole shots are considered to share the same annotation in our system) and could be the weakest point of the system
- The per-shot decision making seems to be sufficient given the data coming from the per-frame classification
4. Overall comments:
- see further in the paper

Shot Boundary Detection
We describe our approach to cut detection, in which we use the AdaBoost algorithm to build a detection classifier from a large set of features based on a few simple frame distance measures. First, we explain the reasons that led us to use AdaBoost, then we describe the feature set and discuss the achieved results. Finally, we present possible future improvements to the current approach.
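The high-level feature extraction pipeline summarised above (per-frame low-level features, a per-frame SVM classifier, and a simple per-shot decision) can be sketched roughly as follows. This is only an illustrative sketch under stated assumptions, not the authors' implementation: the 572 low-level features are replaced by random stand-ins, scikit-learn is assumed as the library, and the SVM and decision-tree parameters are arbitrary.

# Rough sketch of the described HLF pipeline: per-frame features -> per-frame
# SVM scores -> per-shot decision tree.  NOT the authors' implementation: the
# features are random stand-ins and all hyper-parameters are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
N_FEATURES = 572  # size of the low-level feature vector mentioned in the abstract

def make_shots(n_shots, frames_per_shot=20):
    """Synthetic frames grouped into shots; every frame inherits its shot's
    annotation (the abstract notes this shared annotation causes label noise)."""
    X, y, shot_ids = [], [], []
    for s in range(n_shots):
        label = int(rng.integers(0, 2))
        X.append(rng.normal(0.5 * label, 1.0, size=(frames_per_shot, N_FEATURES)))
        y.extend([label] * frames_per_shot)
        shot_ids.extend([s] * frames_per_shot)
    return np.vstack(X), np.array(y), np.array(shot_ids)

X_tr, y_tr, shots_tr = make_shots(40)
X_te, y_te, shots_te = make_shots(10)

# Per-frame SVM classifier (RBF kernel and probability outputs are assumptions).
svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

def shot_features(X, shot_ids):
    """Aggregate per-frame SVM scores into a few simple per-shot statistics."""
    scores = svm.predict_proba(X)[:, 1]
    return np.array([[scores[shot_ids == s].mean(),
                      scores[shot_ids == s].max(),
                      (scores[shot_ids == s] > 0.5).mean()]
                     for s in np.unique(shot_ids)])

def shot_labels(y, shot_ids):
    return np.array([int(y[shot_ids == s][0]) for s in np.unique(shot_ids)])

# A small decision tree makes the final per-shot judgement.
tree = DecisionTreeClassifier(max_depth=2).fit(shot_features(X_tr, shots_tr),
                                               shot_labels(y_tr, shots_tr))
print("per-shot decisions:", tree.predict(shot_features(X_te, shots_te)))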
Keywords
TRECVID 2007, Brno, High Level Feature Extraction, Shot Boundary Detection
RIV year
2008
Released
01.03.2008
Publisher
National Institute of Standards and Technology
Location
Gaithersburg
ISBN
978-1-59593-780-3
Book
Proceedings of TRECVID 2007
Edition
Not specified
Edition number
Not specified
Pages from
1
Pages to
6
Pages count
6
Documents
BibTeX
@inproceedings{BUT32590,
author="Adam {Herout} and Vítězslav {Beran} and Michal {Hradiš} and Igor {Potúček} and Pavel {Zemčík} and Petr {Chmelař}",
title="TRECVID 2007 by the Brno Group",
annote="High Level Feature Extraction
1. The runs:
- A_brU_1 - features extracted from each frame; an SVM per-frame classifier trained on the frames in each shot; a simple decision tree judging shots based on the per-frame results
- A_brV_2 - same as A_brU_1, but with the SVM trained on all training data (the first run divided the training data into training and cross-validation datasets), using the SVM configuration from the previous run
2. Significant differences between the runs:
- As expected, the second run performed generally better, in some cases notably better (which is slightly surprising, because apart from the amount of training data nothing was changed)
3. Contribution of each component:
- The low-level features appear to be good enough, though their number is relatively large; given more time we would experiment with reducing the feature vector size (currently 572 low-level features)
- We considered using some mid-level features based on existing solutions the group has, such as face detection, car detection, etc., but due to time constraints did not include them in the feature vector
- The per-frame classification seems to suffer greatly from mis-annotated frames (whole shots are considered to share the same annotation in our system) and could be the weakest point of the system
- The per-shot decision making seems to be sufficient given the data coming from the per-frame classification
4. Overall comments:
- see further in the paper
Shot Boundary Detection
We describe our approach to cut detection, in which we use the AdaBoost algorithm to build a detection classifier from a large set of features based on a few simple frame distance measures. First, we explain the reasons that led us to use AdaBoost, then we describe the feature set and discuss the achieved results. Finally, we present possible future improvements to the current approach.",
address="National Institute of Standards and Technology",
booktitle="Proceedings of TRECVID 2007",
chapter="32590",
edition="NEUVEDEN",
howpublished="print",
institution="National Institute of Standards and Technology",
year="2008",
month="march",
pages="1--6",
publisher="National Institute of Standards and Technology",
type="conference paper"
}
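The shot boundary detection approach summarised in the abstract and annotation above (an AdaBoost classifier built from a large set of features derived from a few simple frame distance measures) could look roughly like the sketch below. It is only an illustration of the general idea: the synthetic frames, the particular distance measures (histogram and pixel differences), the temporal window, and the scikit-learn AdaBoostClassifier are assumptions, not the authors' configuration.

# Loose sketch of AdaBoost-based cut detection as summarised in the abstract:
# a few simple frame-distance measures -> features over a temporal window ->
# AdaBoost classifier.  Synthetic frames and every parameter choice below are
# assumptions made for illustration, not the authors' configuration.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)
RADIUS = 2  # temporal window: distances from RADIUS neighbours on each side

def histogram(frame, bins=16):
    """Grey-level histogram of one frame, normalised to sum to 1."""
    h, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def frame_distances(frames):
    """Two simple distance measures between consecutive frames."""
    rows = []
    for a, b in zip(frames[:-1], frames[1:]):
        rows.append([np.abs(histogram(a) - histogram(b)).sum(),  # histogram L1
                     np.abs(a - b).mean()])                      # mean pixel diff
    return np.array(rows)

def windowed_features(dist):
    """Stack the distance measures from a small window around each transition."""
    return np.array([dist[i - RADIUS:i + RADIUS + 1].ravel()
                     for i in range(RADIUS, len(dist) - RADIUS)])

def make_video(n_shots=30, shot_len=15, h=24, w=32):
    """Toy 'video': blocks of near-identical frames with hard cuts between them."""
    frames, cuts = [], []
    for s in range(n_shots):
        base = rng.random((h, w))
        for f in range(shot_len):
            frames.append(np.clip(base + rng.normal(0, 0.02, (h, w)), 0, 1))
            cuts.append(1 if (f == 0 and s > 0) else 0)
    return np.array(frames), np.array(cuts)

frames, cut_labels = make_video()
dist = frame_distances(frames)    # dist[i] describes the frame i -> i+1 transition
X = windowed_features(dist)
y = cut_labels[RADIUS + 1 : RADIUS + 1 + len(X)]  # label of transition i is cut_labels[i+1]

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print("training accuracy: %.3f" % clf.score(X, y))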