Publication Detail

Unsupervised Processing of Vehicle Appearance for Automatic Understanding in Traffic Surveillance

Original Title

Unsupervised Processing of Vehicle Appearance for Automatic Understanding in Traffic Surveillance

English Title

Unsupervised Processing of Vehicle Appearance for Automatic Understanding in Traffic Surveillance

Language

en

Original Abstract

This paper deals with the unsupervised collection of information from traffic surveillance video streams. Deploying usable traffic surveillance systems requires minimizing the effort per installed camera; our goal is to enroll a new view of the street without any human operator input. We propose a method that automatically collects vehicle samples from surveillance cameras, analyzes their appearance, and fully automatically assembles a fine-grained dataset. This dataset can be used in multiple ways; we explicitly showcase two of them: fine-grained recognition of vehicles, and camera calibration including scale. The experiments show that, based on the automatically collected data, make&model vehicle recognition in the wild can be done accurately, with an average precision of 0.890. The camera scale calibration (directly enabling automatic speed and size measurement) is twice as precise as the previous method. Our work leads to automatic collection of traffic statistics without the costly need for manual calibration or make&model annotation of vehicle samples. Unlike most previous approaches, our method is not limited to a small range of viewpoints (such as eye-level camera shots), which is crucial for surveillance applications.
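
The abstract reports recognition quality as average precision (0.890). As a purely illustrative, hedged sketch of how such a figure is computed for a single make&model class (this is not the authors' evaluation code; the scores and labels below are made up):

# Hypothetical sketch: average precision for one make&model class,
# computed from classifier scores and ground-truth labels (toy data only).
def average_precision(scores, labels):
    """AP = mean of the precision values taken at each correctly ranked positive."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])  # rank by descending score
    hits = 0
    precisions = []
    for rank, idx in enumerate(order, start=1):
        if labels[idx]:                 # ground truth marks this sample as the class
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# Toy example (hypothetical scores and labels):
scores = [0.95, 0.80, 0.60, 0.40, 0.10]
labels = [1,    0,    1,    0,    0]
print(round(average_precision(scores, labels), 3))   # -> 0.833

In practice the reported number would be averaged over all make&model classes in the automatically collected dataset.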

English Abstract

This paper deals with the unsupervised collection of information from traffic surveillance video streams. Deploying usable traffic surveillance systems requires minimizing the effort per installed camera; our goal is to enroll a new view of the street without any human operator input. We propose a method that automatically collects vehicle samples from surveillance cameras, analyzes their appearance, and fully automatically assembles a fine-grained dataset. This dataset can be used in multiple ways; we explicitly showcase two of them: fine-grained recognition of vehicles, and camera calibration including scale. The experiments show that, based on the automatically collected data, make&model vehicle recognition in the wild can be done accurately, with an average precision of 0.890. The camera scale calibration (directly enabling automatic speed and size measurement) is twice as precise as the previous method. Our work leads to automatic collection of traffic statistics without the costly need for manual calibration or make&model annotation of vehicle samples. Unlike most previous approaches, our method is not limited to a small range of viewpoints (such as eye-level camera shots), which is crucial for surveillance applications.
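
The scale calibration mentioned above is what turns image-space measurements into metric ones such as speed and size. A minimal sketch of the arithmetic it enables, assuming vehicle positions have already been projected onto the road plane and metres_per_unit is the calibrated scale (both are assumptions for illustration, not the paper's implementation):

import math

# Hypothetical sketch: convert a tracked displacement on the road plane into speed.
# p1, p2 are road-plane coordinates, t1, t2 timestamps in seconds, and
# metres_per_unit is the scale produced by the (automatic) calibration.
def speed_kmh(p1, p2, t1, t2, metres_per_unit):
    dist_m = metres_per_unit * math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return dist_m / (t2 - t1) * 3.6   # m/s -> km/h

# Made-up numbers, just to show the call:
print(round(speed_kmh((0.0, 0.0), (14.2, 3.1), 0.0, 1.0, 1.0), 1))  # -> 52.3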

BibTeX


@inproceedings{BUT119896,
  author="Jakub {Sochor} and Adam {Herout}",
  title="Unsupervised Processing of Vehicle Appearance for Automatic Understanding in Traffic Surveillance",
  annote="
This paper deals with unsupervised collection of
information from traffic surveillance video streams. Deployment
of usable traffic surveillance systems requires minimizing of
efforts per installed camera - our goal is to enroll a new
view on the street without any human operator input. We
propose a method of automatically collecting vehicle samples
from surveillance cameras, analyze their appearance and fully
automatically collect a fine-grained dataset. This dataset can be
used in multiple ways, we are explicitly showcasing the following
ones: fine-grained recognition of vehicles and camera calibration
including the scale. The experiments show that based on the
automatically collected data, make&model vehicle recognition in
the wild can be done accurately: average precision 0.890. The
camera scale calibration (directly enabling automatic speed and
size measurement) is twice as precise as the previous existing
method. Our work leads to automatic collection of traffic statistics
without the costly need for manual calibration or make&model
annotation of vehicle samples. Unlike most previous approaches,
our method is not limited to a small range of viewpoints (such
as eye-level cameras shots), which is crucial for surveillance
applications.",
  address="Australian Pattern Recognition Society",
  booktitle="Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference on",
  chapter="119896",
  doi="10.1109/DICTA.2015.7371318",
  howpublished="online",
  institution="Australian Pattern Recognition Society",
  year="2015",
  month="september",
  pages="1--8",
  publisher="Australian Pattern Recognition Society",
  type="conference paper"
}