Publication detail

Automatic Camera Calibration by Landmarks on Rigid Objects

BARTL, V.; ŠPAŇHEL, J.; DOBEŠ, P.; JURÁNEK, R.; HEROUT, A.

Original Title

Automatic Camera Calibration by Landmarks on Rigid Objects


Type

journal article in Web of Science

Language

en

Original Abstract

This article presents a new method for automatic calibration of surveillance cameras. We deal with traffic surveillance, so the camera is calibrated by observing vehicles; however, other rigid objects can be used instead. The proposed method uses keypoints or landmarks automatically detected on the observed objects by a convolutional neural network. By using fine-grained recognition of the vehicles (calibration objects), and by knowing the 3D positions of the landmarks for the (very limited) set of known objects, the extracted keypoints are used to calibrate the camera, yielding the internal (focal length) and external (rotation, translation) parameters and the scene scale of the surveillance camera. We collected a dataset in two parking lots and equipped it with calibration ground truth by measuring multiple distances in the ground plane. This dataset appears to be more accurate than the existing comparable data (GT calibration error reduced from 4.62% to 0.99%). The experiments also show that our method outperforms the best existing alternative in terms of accuracy (error reduced from 6.56% to 4.03%), and our solution is also more flexible with respect to viewpoint changes and other factors.
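The abstract does not spell out the calibration step, but the core idea — recovering camera intrinsics and pose from 2D landmark detections matched to known 3D positions on a recognized vehicle — can be sketched with a plain Direct Linear Transform. This is an illustrative, numpy-only sketch, not the authors' method (the paper's actual optimization may well differ); the function name, the RQ-based decomposition, and the toy landmark layout in any usage are all assumptions.

```python
import numpy as np

def dlt_calibrate(pts3d, pts2d):
    """Estimate a 3x4 projection matrix from >= 6 non-coplanar 2D-3D
    correspondences via the Direct Linear Transform, then split off the
    intrinsic matrix K (focal length, principal point) by RQ decomposition."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The projection matrix is the null vector of A (last right singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)
    # RQ decomposition of the left 3x3 block M = K @ R, done with numpy's QR
    # on a row/column-flipped copy (F is the "flip" permutation matrix).
    M = P[:, :3]
    F = np.flipud(np.eye(3))
    Q, Rt = np.linalg.qr((F @ M).T)
    K = F @ Rt.T @ F
    K = K @ np.diag(np.sign(np.diag(K)))  # force a positive diagonal
    K /= K[2, 2]                          # normalize so K[2, 2] == 1
    return P, K
```

Given at least six non-coplanar landmark correspondences, the sketch returns the projection matrix and an intrinsic matrix whose `K[0, 0]` entry is the focal length in pixels; a full pipeline would additionally recover the scene scale from the known vehicle dimensions, as the abstract describes.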


Keywords

camera calibration, optimization, surveillance

Released

08.09.2020

Publisher

Springer International Publishing

Location

Not stated

ISSN

1432-1769

Periodical

Machine Vision and Applications

Volume

32

Number

1

Country

US

Pages from

2

Pages to

15

Pages count

13

BibTeX


@article{BUT168175,
  author="Vojtěch {Bartl} and Jakub {Špaňhel} and Petr {Dobeš} and Roman {Juránek} and Adam {Herout}",
  title="Automatic Camera Calibration by Landmarks on Rigid Objects",
  annote="This article presents a new method for automatic calibration of surveillance
cameras. We deal with traffic surveillance, so the camera is calibrated by
observing vehicles; however, other rigid objects can be used instead. The
proposed method uses keypoints or landmarks automatically detected on the
observed objects by a convolutional neural network. By using fine-grained
recognition of the vehicles (calibration objects), and by knowing the 3D
positions of the landmarks for the (very limited) set of known objects, the
extracted keypoints are used to calibrate the camera, yielding the internal
(focal length) and external (rotation, translation) parameters and the scene
scale of the surveillance camera. We collected a dataset in two parking lots
and equipped it with calibration ground truth by measuring multiple distances
in the ground plane. This dataset appears to be more accurate than the existing
comparable data (GT calibration error reduced from 4.62% to 0.99%). The
experiments also show that our method outperforms the best existing alternative
in terms of accuracy (error reduced from 6.56% to 4.03%), and our solution is
also more flexible with respect to viewpoint changes and other factors.",
  address="Springer International Publishing",
  journal="Machine Vision and Applications",
  chapter="168175",
  doi="10.1007/s00138-020-01125-x",
  howpublished="online",
  issn="1432-1769",
  number="1",
  volume="32",
  year="2020",
  month="9",
  pages="2--15",
  publisher="Springer International Publishing",
  type="journal article in Web of Science"
}