Conferences related to Visual Odometry

2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.

  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual Computer Vision event comprising the main CVPR conference and 27 co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry.

  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual Computer Vision event comprising the main CVPR conference and 27 co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry.

  • 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

  • 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual Computer Vision event comprising the main CVPR conference and 27 co-located workshops and short courses. Attendance comprises main-conference registrants plus roughly 50 workshop-only attendees, along with approximately 50 exhibitors and volunteers.

  • 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    CVPR is the premier annual Computer Vision event comprising the main CVPR conference and 27 co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry.

  • 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    Topics of interest include all aspects of computer vision and pattern recognition, including motion and tracking, stereo, object recognition, object detection, color detection, and many more.

  • 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    Sensors, Early and Biologically-Inspired Vision, Color and Texture, Segmentation and Grouping, Computational Photography and Video

  • 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    Concerned with all aspects of computer vision and pattern recognition. Issues of interest include pattern analysis, image and video libraries, vision and graphics, motion analysis and physics-based vision.

  • 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    Concerned with all aspects of computer vision and pattern recognition. Issues of interest include pattern analysis, image and video libraries, vision and graphics, motion analysis and physics-based vision.

  • 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

  • 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

  • 2006 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

  • 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)


2020 IEEE International Conference on Image Processing (ICIP)

The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.


2020 IEEE International Conference on Robotics and Automation (ICRA)

The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.


2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC)

The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.


2020 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM)

The scope of the 2020 IEEE/ASME AIM includes the following topics: Actuators, Automotive Systems, Bioengineering, Data Storage Systems, Electronic Packaging, Fault Diagnosis, Human-Machine Interfaces, Industry Applications, Information Technology, Intelligent Systems, Machine Vision, Manufacturing, Micro-Electro-Mechanical Systems, Micro/Nano Technology, Modeling and Design, System Identification and Adaptive Control, Motion Control, Vibration and Noise Control, Neural and Fuzzy Control, Opto-Electronic Systems, Optomechatronics, Prototyping, Real-Time and Hardware-in-the-Loop Simulation, Robotics, Sensors, System Integration, Transportation Systems, Smart Materials and Structures, Energy Harvesting and other frontier fields.


More Conferences

Periodicals related to Visual Odometry

Circuits and Systems for Video Technology, IEEE Transactions on

Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...


Image Processing, IEEE Transactions on

Signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing. Includes theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; image scanning, digital half-toning and display, and color reproduction.


Industrial Electronics, IEEE Transactions on

Theory and applications of industrial electronics and control instrumentation science and engineering, including microprocessor control systems, high-power controls, process control, programmable controllers, numerical and program control systems, flow meters, and identification systems.


Instrumentation and Measurement, IEEE Transactions on

Measurements and instrumentation utilizing electrical and electronic techniques.


Intelligent Transportation Systems, IEEE Transactions on

The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical ...


More Periodicals

Most published Xplore authors for Visual Odometry

Xplore Articles related to Visual Odometry

Novel qualitative visual odometry for a ground vehicle based on funnel lane concept

2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), 2017

Visual odometry is the process of estimating the position of a robot from visual information. Visual odometry methods are classified as appearance-based methods, which use the intensity information of the image, and feature-based methods, which use image feature information. Feature-based methods are more common in visual odometry. Visual odometry methods usually need calibration, essential matrix calculations and ...
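The funnel-lane reversal this paper describes, assuming straight motion while the feature constraints hold and computing a turning angle when they break, can be caricatured in a few lines. The tolerance band, the use of a mean horizontal feature shift, and the small-angle conversion below are all illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def funnel_turn_angle(u_prev, u_curr, focal_px, tol_px=2.0):
    """Qualitative heading update in the spirit of the funnel-lane idea.
    u_prev, u_curr: horizontal pixel coordinates of tracked features in the
    previous and current frames. While the mean horizontal shift stays
    inside a tolerance band (constraints satisfied), the vehicle is assumed
    to move straight; otherwise a turning angle is derived from the mean
    shift via a small-angle approximation."""
    du = np.mean(np.asarray(u_curr, float) - np.asarray(u_prev, float))
    if abs(du) <= tol_px:
        return 0.0  # constraints satisfied: keep going straight
    return float(np.arctan2(du, focal_px))  # constraints violated: turn
```

Accumulating these angles over frames would give the qualitative heading estimate the abstract refers to.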


DOVO: Mixed Visual Odometry Based on Direct Method and Orb Feature

2018 International Conference on Audio, Language and Image Processing (ICALIP), 2018

In this paper, in order to get real-time environment information and pose estimation of a robot, a novel visual odometry method called DOVO is proposed. First, the ORB features of each image frame are computed. Then, based on the number of keypoints that ORB feature extraction yields, we set a threshold K to determine the reliability of pose estimation using ORB features. ...
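The keypoint-count switch sketched in the abstract can be expressed very compactly; the threshold value K = 150 and the function name here are illustrative, not taken from the paper:

```python
def choose_tracking_mode(num_keypoints, K=150):
    """DOVO-style mode selection (K = 150 is an illustrative default):
    with few ORB keypoints, fall back to the direct method and minimize
    photometric error; with enough keypoints, use feature matching and
    minimize reprojection error."""
    return "direct" if num_keypoints < K else "feature"
```

The point of the design is graceful degradation: texture-poor frames that starve the feature pipeline still get a pose estimate from the direct method.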


PVO: Panoramic Visual Odometry

2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), 2018

Accurate visual odometry is essential for many fields such as robot navigation and autonomous driving. In this paper, we propose a novel panoramic visual odometry algorithm that is real-time, precise and robust. The main contributions of our work are a series of innovations that address the challenge of effective initialization and robust feature tracking based on a panoramic camera. Wide field ...


Moving object detection from a moving stereo camera via depth information and visual odometry

2018 IEEE International Conference on Applied System Invention (ICASI), 2018

It is important for driving safety to know whether surrounding objects are moving or static. Most existing methods use pre-trained object detectors to detect vehicles and humans before determining whether they are moving or not. However, these are not the only types of objects on real roads. This study presents a system using depth information and visual odometry to ...
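One way the depth-plus-odometry idea above could work is to warp the previous depth map into the current frame using the camera pose from visual odometry and flag pixels whose observed depth disagrees with the prediction. The adaptive threshold used here, mean plus k standard deviations of the residual, is an assumption for illustration, not the paper's exact rule:

```python
import numpy as np

def moving_mask(depth_predicted, depth_observed, k=2.0):
    """Sketch of ego-motion-compensated moving-object detection.
    depth_predicted: previous depth map warped into the current frame via
    the VO pose. Pixels whose observed depth deviates strongly from the
    prediction are flagged as moving. The threshold (mean + k*std of the
    residual) is an illustrative adaptive rule."""
    residual = np.abs(np.asarray(depth_observed, float)
                      - np.asarray(depth_predicted, float))
    threshold = residual.mean() + k * residual.std()
    return residual > threshold
```

Static scene points produce small residuals that set the threshold; independently moving surfaces stand out above it.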


Image Preprocessing for Stereoscopic Visual Odometry at Night

2018 IEEE International Conference on Electro/Information Technology (EIT), 2018

Cameras are a primary sensor for many robotics perception tasks. However, cameras' performance degrades in low-light conditions. In this paper, we propose a simple preprocessing pipeline to enhance captured images in low-light conditions. This pipeline is developed with visual odometry application in outdoor environments in mind. The pipeline is evaluated on a manually collected dataset and compared to the ...
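The abstract above does not spell out the pipeline's stages, so the following is only a plausible stand-in: a gamma lift followed by a full-range contrast stretch, two common steps in low-light image enhancement.

```python
import numpy as np

def enhance_low_light(img, gamma=0.5):
    """Illustrative low-light preprocessing (not the paper's exact
    pipeline): normalize to [0, 1], apply gamma < 1 to lift shadows, then
    stretch contrast back to the full 8-bit range."""
    x = img.astype(np.float64) / 255.0
    x = np.power(x, gamma)            # brighten dark regions
    lo, hi = x.min(), x.max()
    if hi > lo:
        x = (x - lo) / (hi - lo)      # full-range contrast stretch
    return (x * 255.0).round().astype(np.uint8)
```

In a VO front end, such a step would run on each frame before feature detection and matching.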


More Xplore Articles

Educational Resources on Visual Odometry

IEEE-USA E-Books

  • Novel qualitative visual odometry for a ground vehicle based on funnel lane concept

    Visual odometry is the process of estimating the position of a robot from visual information. Visual odometry methods are classified as appearance-based methods, which use the intensity information of the image, and feature-based methods, which use image feature information. Feature-based methods are more common in visual odometry. Visual odometry methods usually need calibration, essential matrix calculations and finding a minimum-argument solution. In this paper, we present qualitative visual odometry estimation for a ground vehicle. The method requires no camera calibration, and there is no need for essential matrix or any minimum-argument calculations. The proposed method is based on the funnel lane concept, which was introduced to control a robot moving from a current image to a destination image. A funnel lane is created from the geometric information of features, and the robot is controlled so that it does not leave the funnel lane until it reaches the destination image. The idea in this paper is to reverse this: instead of creating a funnel lane to steer the robot toward a destination image, we assume that the robot moves straight as long as the funnel lane constraints are satisfied, and a turning angle is calculated when they are violated. We show that this approach gives significant results in indoor and outdoor environments and can be a simple way to perform visual odometry.

  • DOVO: Mixed Visual Odometry Based on Direct Method and Orb Feature

    In this paper, in order to get real-time environment information and pose estimation of a robot, a novel visual odometry method called DOVO is proposed. First, the ORB features of each image frame are computed. Then, based on the number of keypoints that ORB feature extraction yields, we set a threshold K to determine the reliability of pose estimation using ORB features. If the number of keypoints is smaller than the threshold K, the direct method is used to track the camera and estimate its pose by optimizing the photometric error, under the assumption that the luminosity of the scene is constant. If the number of keypoints is larger than the threshold K, the pose estimate is computed by optimizing the reprojection error. Experiments on the TUM dataset show that this method guarantees both pose accuracy and real-time performance.

  • PVO: Panoramic Visual Odometry

    Accurate visual odometry is essential for many fields such as robot navigation and autonomous driving. In this paper, we propose a novel panoramic visual odometry algorithm that is real-time, precise and robust. The main contributions of our work are a series of innovations that address the challenge of effective initialization and robust feature tracking based on a panoramic camera. A wide field of view (FOV) is very important for robotic perception. Our algorithm takes advantage of the 360° FOV of a panoramic camera, which results in high accuracy of camera pose estimation and feature tracking stability. We use panoramic images directly without converting them to pinhole images. Through GPU acceleration, our implementation runs at an average of 30 frames per second on a consumer laptop. In addition, we have conducted both indoor and outdoor experiments to validate the proposed algorithm; the results show that ... We call our approach PVO (panoramic visual odometry).

  • Moving object detection from a moving stereo camera via depth information and visual odometry

    It is important for driving safety to know whether surrounding objects are moving or static. Most existing methods use pre-trained object detectors to detect vehicles and humans before determining whether they are moving or not. However, these are not the only types of objects on real roads. This study presents a system that uses depth information and visual odometry to detect moving objects. It also adopts an adaptive thresholding method to enhance detection performance. According to the experimental results, the proposed system can accurately detect moving objects.

  • Image Preprocessing for Stereoscopic Visual Odometry at Night

    Cameras are a primary sensor for many robotics perception tasks. However, cameras' performance degrades in low-light conditions. In this paper, we propose a simple preprocessing pipeline to enhance captured images in low-light conditions. This pipeline is developed with visual odometry application in outdoor environments in mind. The pipeline is evaluated on a manually collected dataset and compared to the base case with no preprocessing applied.

  • CUDA-Based Computation for Visual Odometry

    An enhanced visual odometry (VO) system is proposed to improve the accuracy of pose estimation based on a corrected model, and the matching algorithm is implemented on graphics processing units (GPUs) so that the computation can be accelerated in parallel and in real time using the compute unified device architecture (CUDA) programming model. To evaluate the performance of the proposed approach, an ASUS Xtion 3D camera, a laptop, and an NVIDIA TX2 are employed to conduct extensive experiments. The experimental results show that the proposed approach gives better results than the traditional VO algorithm.

  • LiDAR data interpolation algorithm for visual odometry based on 3D-2D motion estimation

    This paper proposes two interpolation methods (i.e., a selective bilinear method and a plane method) which can be applied to interpolate sparse LiDAR data. Experimental results obtained by applying them to a visual odometry algorithm demonstrate that the selective bilinear interpolation outperforms the plane method in terms of both computation speed and accuracy.

  • A Novel Relative Camera Motion Estimation Algorithm with Applications to Visual Odometry

    In this paper, we propose a novel method to estimate the relative camera motions of three consecutive images. Given a set of point correspondences in three views, the proposed method determines the fundamental matrix representing the geometric relationship between the first two views by using the eight-point algorithm. Then, by minimizing the proposed cost function with the fundamental matrix, the relative camera motions over three views are precisely estimated. The experimental results show that the proposed method outperforms the conventional two-view and three-view geometry-based methods in terms of accuracy.

  • Hybrid-Residual-Based RGBD Visual Odometry

    Visual odometry has greatly progressed since non-linear optimization methods were introduced for pose estimation. Furthermore, RGBD visual odometry has become a hot research topic in the robotics and computer vision fields with the introduction of RGBD cameras. However, most RGBD-camera-based visual odometry methods are designed by extending monocular visual odometry methods, and thus do not pay much attention to integrating the different types of information provided by RGBD images. In this paper, we propose a novel hybrid-residual-based RGBD visual odometry, where three types of complementary information are integrated into a joint optimization model. The reprojection residuals, the photometric residuals and the depth residuals are minimized together in the non-linear optimization process, where a robust cost function and outlier filtering are employed in the iterative optimization to enhance robustness while maintaining optimality. Experiments on publicly available RGBD datasets validate the advantages of integrating multiple types of information for RGBD visual odometry. The accuracy and robustness are greatly improved by our method.

  • Feature Regions Segmentation Based RGB-D Visual Odometry in Dynamic Environment

    A novel RGB-D visual odometry method for dynamic environments is proposed. The majority of visual odometry systems can only work in static environments, which limits their applications in the real world. In order to improve the accuracy and robustness of visual odometry in dynamic environments, a Feature Regions Segmentation algorithm is proposed to resist the disturbance caused by moving objects. The matched features are divided into different regions to separate the moving objects from the static background. The features in the largest region, which belong to the static background, are finally used to estimate the camera pose. The effectiveness of our visual odometry method is verified in a dynamic environment in our lab. Furthermore, an exhaustive experimental evaluation is conducted on benchmark datasets including static and dynamic environments, compared with state-of-the-art visual odometry systems. The accuracy comparison results show that the proposed algorithm outperforms those systems in large-scale dynamic environments. Our method tracks the camera movement correctly while others fail. In addition, our method gives equally good performance in static environments. Experiments demonstrate that the proposed RGB-D visual odometry can obtain accurate and robust estimation results in dynamic environments.
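The LiDAR interpolation entry above compares a selective bilinear method against a plane method. As a generic illustration of densifying sparse depth (not the paper's actual methods), a row-wise linear fill between valid samples looks like this:

```python
import numpy as np

def interpolate_sparse_depth(depth):
    """Sketch of densifying a sparse LiDAR depth map: linearly interpolate
    along each row between valid (nonzero) samples, clamping at the ends,
    and leave rows with fewer than two samples unchanged. This is a generic
    stand-in, not the paper's selective bilinear or plane method."""
    out = depth.astype(np.float64)
    cols = np.arange(out.shape[1])
    for r in range(out.shape[0]):
        valid = np.flatnonzero(out[r] > 0)  # indices of measured pixels
        if valid.size >= 2:
            out[r] = np.interp(cols, valid, out[r, valid])
    return out
```

A 3D-2D VO pipeline can then sample this densified map at any feature location instead of only at the sparse LiDAR returns.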

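The relative-motion estimation entry above relies on the eight-point algorithm for the fundamental matrix. A textbook normalized eight-point sketch in NumPy, covering only the two-view step and not the paper's three-view cost function, is:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix F with
    x2h^T F x1h = 0 for homogeneous matches x1h <-> x2h.
    x1, x2: (N, 2) arrays of matched pixel coordinates, N >= 8."""
    def normalize(pts):
        # Translate to the centroid, scale so mean distance is sqrt(2).
        c = pts.mean(axis=0)
        s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        h = np.column_stack([pts, np.ones(len(pts))])
        return (T @ h.T).T, T

    p1, T1 = normalize(np.asarray(x1, float))
    p2, T2 = normalize(np.asarray(x2, float))
    # One row of the homogeneous system A f = 0 per correspondence.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # A valid fundamental matrix is singular: enforce rank 2.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1  # undo the normalization
    return F / np.linalg.norm(F)  # fix the arbitrary scale
```

With noisy matches this would normally be wrapped in RANSAC; here the bare solver suffices to show the structure.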

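The hybrid-residual entry above jointly minimizes three residual types under a robust cost. An illustrative form of such an objective, where the weights and Huber thresholds are placeholders rather than the paper's values, is:

```python
import numpy as np

def huber(r, delta):
    """Huber robust cost: quadratic near zero, linear in the tails, which
    limits the influence of outlier residuals on the optimization."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def hybrid_cost(reproj_res, photo_res, depth_res,
                weights=(1.0, 0.1, 0.5), deltas=(1.0, 10.0, 0.05)):
    """Illustrative joint objective in the spirit of hybrid-residual RGBD
    visual odometry: robustified reprojection (pixels), photometric
    (intensity) and depth (meters) residuals summed with per-term weights.
    Weights and Huber thresholds here are placeholder values."""
    terms = (reproj_res, photo_res, depth_res)
    return float(sum(w * huber(np.asarray(r, float), d).sum()
                     for w, r, d in zip(weights, terms, deltas)))
```

In a real system this scalar would be the quantity driven down by an iterative non-linear solver over the camera pose, with outlier filtering between iterations.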

Standards related to Visual Odometry

No standards are currently tagged "Visual Odometry"


Jobs related to Visual Odometry
