Dense Disparity Estimation from Omnidirectional Images

Omnidirectional imaging offers important advantages for the representation and processing of the plenoptic function in 3D scenes, with applications in localization or depth estimation, for example. In this context, we propose to perform disparity estimation directly in a spherical framework, in order to avoid the discrepancies caused by inexact projections of omnidirectional images onto planes. We first rectify the omnidirectional images in the spherical domain. We then develop a global energy minimization algorithm based on graph cuts to perform disparity estimation on the sphere. Experimental results show that the proposed algorithm outperforms typical methods such as block matching, on both a simple synthetic scene and complex natural scenes. The proposed method shows promising performance for dense disparity estimation and extends efficiently to networks of several camera sensors.
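The energy minimized on the sphere combines a per-pixel matching cost with a smoothness penalty on neighboring disparities. The paper optimizes it with graph cuts; the sketch below only illustrates the same kind of energy, minimized per scanline with dynamic-programming-style cost aggregation on a regular pixel grid. The function name, the simple absolute-difference data term, and the scanline approximation are choices made here for brevity, not the method of the paper.

```python
import numpy as np

def disparity_dp(left, right, max_disp, lam=0.1):
    """Approximate minimization of
        E(d) = sum_p |I_l(p) - I_r(p - d_p)| + lam * sum_{p,q} |d_p - d_q|
    with per-scanline dynamic-programming cost aggregation
    (a simple stand-in for the graph-cut optimizer)."""
    h, w = left.shape
    n = max_disp + 1

    # Data term: absolute-difference cost volume; pixels whose match
    # would fall outside the image get infinite cost.
    cost = np.full((h, w, n), np.inf)
    for d in range(n):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])

    # Pairwise term: linear penalty on disparity jumps.
    disps = np.arange(n)
    penalty = lam * np.abs(disps[:, None] - disps[None, :])

    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        acc = cost[y].copy()
        for x in range(1, w):
            # acc[x, i] += min_j (acc[x-1, j] + lam * |i - j|)
            acc[x] += (acc[x - 1][None, :] + penalty).min(axis=1)
        disp[y] = np.argmin(acc, axis=1)
    return disp
```

On the sphere, the same recursion would run along rectified epipolar great circles instead of image rows.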

Keywords: rectification, disparity estimation

For more information see:

  • Z. Arican and P. Frossard, “Dense Disparity Estimation from Omnidirectional Images”, Proc. of AVSS 2007, September 2007, London, UK.

Super-resolution from Unregistered Omnidirectional Images

Super-resolution consists in reconstructing a high-resolution image from multiple low-resolution images related by different transformations. This project addresses the problem of super-resolution from low-resolution spherical images that are not perfectly registered. Such a problem is typically encountered in omnidirectional vision scenarios with reduced-resolution sensors in imperfect settings. Several spherical images with arbitrary rotations in the SO(3) rotation group are used for the reconstruction of higher resolution images. We first describe the impact of the registration error on the Spherical Fourier Transform coefficients. Then, we formulate the joint registration and reconstruction problem as a least squares norm minimization problem in the transform domain. Experimental results show that the proposed scheme leads to effective approximations of the high-resolution images, even with large registration errors. The quality of the reconstructed images also increases rapidly with the number of low-resolution images, which demonstrates the benefits of the proposed solution in super-resolution schemes.
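The least-squares formulation can be illustrated in a much simpler setting. The sketch below is a planar, known-shift analogue: rotations on SO(3) are replaced by circular shifts of a 1-D signal, the joint registration step is omitted, and the function name and operators are illustrative, not the spherical-domain implementation of the paper.

```python
import numpy as np

def superres_ls(observations, shifts, factor):
    """Least-squares reconstruction of a high-resolution 1-D signal
    from several shifted, downsampled copies (a planar analogue of
    the spherical transform-domain formulation)."""
    n_low = len(observations[0])
    n_high = n_low * factor
    rows, rhs = [], []
    for y, t in zip(observations, shifts):
        # Observation operator: circular shift by t samples, then
        # keep every `factor`-th sample.
        shift = np.roll(np.eye(n_high), -t, axis=1)
        downsample = np.eye(n_high)[::factor]
        rows.append(downsample @ shift)
        rhs.append(np.asarray(y))
    # Stack all observations into one overdetermined linear system
    # and solve it in the least-squares sense.
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

As long as the shifts cover all residues modulo the downsampling factor, the stacked system has full rank and the high-resolution signal is recovered exactly in the noiseless case.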

Keywords: registration, joint registration and reconstruction

For more information see:

  • Z. Arican and P. Frossard, “Super-resolution from Unregistered Omnidirectional Images”, Proc. of ICPR 2008, December 2008, Florida, USA.
  • Z. Arican and P. Frossard, “L1 Regularized Super-resolution from Unregistered Omnidirectional Images”, Proc. of ICASSP 2009, April 2009, Taipei, Taiwan.

Scale Invariant Features in Omnidirectional Images

Applications such as camera calibration, object detection, recognition or tracking generally rely on the localization and matching of particular visual features in multiple images. Scale invariance is an important characteristic of visual features, as it makes them less sensitive to imperfect camera settings. The most popular scale invariant feature detection algorithm is certainly the SIFT framework for perspective camera images. However, omnidirectional images generally have a specific geometry due to the sensor characteristics, which typically causes partial scale changes in different regions of the images. For example, a scene captured with a catadioptric camera using a paraboloid mirror is sampled more densely in the outer parts of the image than in the center. Classical feature detection algorithms do not take into account the implicit geometry of the mirrors, which penalizes the performance of image analysis applications when they are applied directly to the sensor images. We propose in this project a novel framework for the computation of scale invariant features on omnidirectional images from sensors with particular geometries. In particular, we build on Riemannian geometry to define differential operators on non-Euclidean manifolds, such that the images can be processed in their native geometry. In addition, we propose a descriptor that adapts to the different sampling densities and geometries of these images.
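The effect of a non-uniform sampling geometry on scale-space computation can be sketched in a simplified way. The snippet below is not the Riemannian operators of the paper: it computes a difference-of-Gaussians on an equirectangular image, where the longitudinal smoothing width is stretched by 1/sin(theta) as a first-order account of the spherical metric; the function name and the capping of the per-row width are choices made here for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def spherical_dog(img, sigma, k=1.6):
    """Difference-of-Gaussians on an equirectangular image, with the
    horizontal smoothing widened by 1/sin(theta) so that the filter
    covers a roughly constant solid angle at every latitude."""
    h, w = img.shape
    theta = (np.arange(h) + 0.5) * np.pi / h  # polar angle per row

    def blur(s):
        # Constant vertical smoothing (latitude is uniformly sampled).
        out = gaussian_filter1d(img, s, axis=0, mode='nearest')
        # Row-wise horizontal smoothing, widened toward the poles and
        # capped to keep the kernel a fraction of the image width.
        for i in range(h):
            s_row = min(s / max(np.sin(theta[i]), 1e-3), w / 4)
            out[i] = gaussian_filter1d(out[i], s_row, mode='wrap')
        return out

    return blur(k * sigma) - blur(sigma)
```

Extrema of this response across scales would then serve as geometry-aware keypoint candidates, in the spirit of the detector described above.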

Keywords: OmniSIFT, matching

For more information see:

  • Z. Arican and P. Frossard, “OmniSIFT: Scale Invariant Features in Omnidirectional Images”, Proc. of ICIP 2010, September 2010, Hong Kong.
  • Z. Arican and P. Frossard, “Sampling-aware Polar Descriptors on the Sphere”, Proc. of ICIP 2010, September 2010, Hong Kong.