944 resources related to Visual Odometry
OCEANS 2020 - SINGAPORE
An OCEANS conference is a major forum for scientists, engineers, and end-users throughout the world to present and discuss the latest research results, ideas, developments, and applications in all areas of oceanic science and engineering. Each conference has a specific theme chosen by the conference technical program committee. All papers presented at the conference are subsequently archived in the IEEE Xplore online database. The OCEANS conference comprises a scientific program with oral and poster presentations, and a state of the art exhibition in the field of ocean engineering and marine technology. In addition, each conference can have tutorials, workshops, panel discussions, technical tours, awards ceremonies, receptions, and other professional and social activities.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held in the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields have increasing importance in the creation of intelligent environments involving technologies interacting with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.
The scope of the 2020 IEEE/ASME AIM includes the following topics: Actuators, Automotive Systems, Bioengineering, Data Storage Systems, Electronic Packaging, Fault Diagnosis, Human-Machine Interfaces, Industry Applications, Information Technology, Intelligent Systems, Machine Vision, Manufacturing, Micro-Electro-Mechanical Systems, Micro/Nano Technology, Modeling and Design, System Identification and Adaptive Control, Motion Control, Vibration and Noise Control, Neural and Fuzzy Control, Opto-Electronic Systems, Optomechatronics, Prototyping, Real-Time and Hardware-in-the-Loop Simulation, Robotics, Sensors, System Integration, Transportation Systems, Smart Materials and Structures, Energy Harvesting and other frontier fields.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
Signal-processing aspects of image processing, imaging systems, and image scanning, display, and printing. Includes theory, algorithms, and architectures for image coding, filtering, enhancement, restoration, segmentation, and motion estimation; image formation in tomography, radar, sonar, geophysics, astronomy, microscopy, and crystallography; image scanning, digital half-toning and display, and color reproduction.
Theory and applications of industrial electronics and control instrumentation science and engineering, including microprocessor control systems, high-power controls, process control, programmable controllers, numerical and program control systems, flow meters, and identification systems.
Measurements and instrumentation utilizing electrical and electronic techniques.
The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical ...
2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), 2017
Visual odometry is the process of estimating the position of a robot from visual information. Visual odometry methods are classified as appearance-based methods, which use the intensity information of the image, and feature-based methods, which use image feature information; feature-based methods are the more common. Visual odometry methods usually need calibration, essential-matrix calculation, and a minimum-argument solution. In this paper, we present qualitative visual odometry estimation for a ground vehicle. The method requires no camera calibration and no essential-matrix or minimum-argument calculations. The proposed method is based on the funnel lane concept, which was introduced to control a robot moving from a current image to a destination image: a funnel lane is created from the geometric information of the features, and the robot is controlled so that it does not leave the funnel lane until it reaches the destination image. The idea in this paper is to reverse this: instead of creating a funnel lane to drive the robot to a destination image, we assume the robot is moving straight as long as the funnel lane constraints are satisfied, and compute a turning angle when they are violated. We show that this approach gives significant results in indoor and outdoor environments and can serve as a simple approach to visual odometry.
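The "reverse funnel lane" idea above can be sketched in a few lines: drive straight while every feature satisfies the funnel constraints, and emit a turning correction when the constraints fail. This is a minimal sketch with assumed details — coordinates are horizontal pixel offsets from the image center, and the constraint form and gain K_TURN are illustrative, not the paper's exact formulation.

```python
K_TURN = 0.002  # rad per pixel of violation (assumed gain)

def funnel_violation(u_cur, u_dst):
    """Signed violation of the funnel-lane constraint for one feature.

    As the robot moves straight toward the destination view, each
    feature should drift outward: same sign as its destination
    coordinate u_dst, and at least as far from center.
    Returns 0.0 when the constraint holds.
    """
    if u_dst >= 0:
        return min(0.0, u_cur - u_dst)   # feature fell inside/left of the lane
    return max(0.0, u_cur - u_dst)       # feature fell inside/right of the lane

def turning_angle(matches):
    """matches: list of (u_cur, u_dst) feature coordinate pairs.
    Returns 0.0 while inside the funnel lane, else a corrective angle."""
    violations = [funnel_violation(uc, ud) for uc, ud in matches]
    mean_v = sum(violations) / len(violations)
    if mean_v == 0.0:
        return 0.0          # all constraints satisfied: keep driving straight
    return -K_TURN * mean_v

# Two features still inside the funnel lane -> no turn commanded.
angle = turning_angle([(30.0, 10.0), (-25.0, -12.0)])
```

A real implementation would feed `turning_angle` from tracked feature matches each frame and integrate the headings to recover the trajectory.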
2018 International Conference on Audio, Language and Image Processing (ICALIP), 2018
In this paper, in order to obtain real-time environment information and robot pose estimation, a novel visual odometry method called DOVO is proposed. First, ORB features are computed for each image frame. Then, based on the number of keypoints the ORB extraction yields, a threshold K determines which pose-estimation path is reliable. If the number of keypoints is smaller than K, a direct method is used to track the camera, estimating the pose by optimizing the photometric error under the assumption that scene luminosity is constant. If the number of keypoints is larger than K, the pose is estimated by optimizing the reprojection error. Experiments on the TUM dataset show that this method guarantees both pose accuracy and real-time performance.
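The switching rule DOVO describes is simple to state in code: count the ORB keypoints in the current frame and route it to the direct or feature-based backend. The threshold value and the backend stubs below are illustrative assumptions; the actual backends minimize photometric and reprojection error respectively.

```python
K = 150  # keypoint-count threshold (assumed value; tuned per dataset)

def estimate_pose_direct(frame):
    # Placeholder for direct tracking: minimize photometric error,
    # assuming constant scene luminosity.
    return "direct"

def estimate_pose_feature(frame):
    # Placeholder for feature-based tracking: minimize reprojection error.
    return "feature"

def dovo_step(num_keypoints, frame=None):
    """Route a frame to the backend its keypoint count supports."""
    if num_keypoints < K:
        return estimate_pose_direct(frame)   # too few features: go direct
    return estimate_pose_feature(frame)      # rich texture: use features

backend_low  = dovo_step(40)    # texture-poor frame
backend_high = dovo_step(800)   # texture-rich frame
```

The appeal of this design is that the expensive decision (which error to optimize) reduces to one cheap count available for free from the feature extractor.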
2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), 2018
Accurate visual odometry is essential for many fields such as robot navigation and autonomous driving. In this paper, we propose a novel panoramic visual odometry algorithm that is real-time, precise, and robust. The main contributions of our work are a series of innovations that address the challenges of effective initialization and robust feature tracking with a panoramic camera. A wide field of view (FOV) is very important for robotic perception. Our algorithm takes advantage of the 360° FOV of a panoramic camera, which results in high accuracy of camera pose estimation and stable feature tracking. We use panoramic images directly, without converting them to pinhole images. Through GPU acceleration, our implementation runs at an average of 30 frames per second on a consumer laptop. In addition, we have conducted both indoor and outdoor experiments to validate the proposed algorithm; the results show that ... We call our approach PVO (panoramic visual odometry).
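Using panoramic images directly, as PVO does, relies on mapping each pixel to a bearing ray on the unit sphere so geometry can be done without pinhole conversion. A minimal sketch for an equirectangular panorama of width W and height H follows; the equirectangular layout is an assumption, since the abstract does not specify PVO's camera model.

```python
import math

def pixel_to_ray(u, v, width, height):
    """Map pixel (u, v) in an equirectangular image to a unit 3-D ray."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude in [pi/2, -pi/2]
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

# The image center looks straight ahead along +z.
ray = pixel_to_ray(1024, 512, 2048, 1024)
```

Epipolar constraints and reprojection errors are then expressed on these rays, which is what gives a panoramic system its full 360° of usable features.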
2018 IEEE International Conference on Applied System Invention (ICASI), 2018
It is important for driving safety to know whether surrounding objects are moving or static. Most existing methods use pre-trained object detectors to detect vehicles and humans before determining whether they are moving. However, these are not the only two types of objects on real roads. This study presents a system that uses depth information and visual odometry to detect moving objects, and adopts an adaptive thresholding method to improve detection performance. Experimental results show that the proposed system detects moving objects accurately.
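The core comparison described above — depth predicted from the camera's own motion versus observed depth — leaves large residuals only on independently moving objects. Below is a hedged sketch of the adaptive thresholding step; the mean-plus-k-standard-deviations rule is an assumed form, since the paper does not state its exact formula.

```python
def adaptive_moving_mask(residuals, k=1.0):
    """residuals: per-pixel |predicted_depth - observed_depth| values.
    Flags pixels whose residual exceeds an adaptive threshold, i.e.
    pixels moving inconsistently with the camera's ego-motion."""
    n = len(residuals)
    mean = sum(residuals) / n
    var = sum((r - mean) ** 2 for r in residuals) / n
    thresh = mean + k * var ** 0.5   # threshold adapts to the scene
    return [r > thresh for r in residuals]

# Static-background residuals cluster near zero; one outlier moves.
mask = adaptive_moving_mask([0.1, 0.2, 0.15, 0.1, 5.0])
```

Because the threshold is recomputed per frame, the detector tolerates global depth noise that a fixed threshold would misclassify.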
2018 IEEE International Conference on Electro/Information Technology (EIT), 2018
Cameras are a primary sensor for many robotics perception tasks, but their performance degrades in low-light conditions. In this paper, we propose a simple preprocessing pipeline to enhance images captured in low light. The pipeline is developed with outdoor visual odometry applications in mind, and is evaluated on a manually collected dataset against the base case with no preprocessing applied.
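The abstract does not detail its pipeline, so the sketch below is a hedged stand-in for this style of low-light enhancement: gamma correction to lift shadows, then linear contrast stretching. Both steps and the gamma value are assumptions chosen for illustration, operating on a grayscale image given as a list of rows with intensities in [0, 255].

```python
def enhance_low_light(image, gamma=0.5):
    """Brighten a dark grayscale image (list of rows of 0-255 values)."""
    # Gamma < 1 lifts shadows: out = 255 * (in / 255) ** gamma.
    lifted = [[255.0 * (p / 255.0) ** gamma for p in row] for row in image]
    # Stretch the result to use the full intensity range.
    flat = [p for row in lifted for p in row]
    lo, hi = min(flat), max(flat)
    scale = 255.0 / (hi - lo) if hi > lo else 1.0
    return [[round((p - lo) * scale) for p in row] for row in lifted]

dark = [[10, 20], [30, 40]]
bright = enhance_low_light(dark)
```

For visual odometry the point of such preprocessing is not aesthetics but feature repeatability: more distinct gradients survive, so more keypoints can be detected and tracked.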
An enhanced visual odometry (VO) system is proposed to improve the accuracy of pose estimation based on a corrected model, and the matching algorithm is implemented on graphics processing units (GPUs) so that the computation can be accelerated in parallel and in real time using the compute unified device architecture (CUDA) programming model. To evaluate the proposed approach, an ASUS Xtion 3D camera, a laptop, and an NVIDIA TX2 are employed in extensive experiments. The experimental results show that the proposed approach outperforms the traditional VO algorithm.
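The matching step accelerated on the GPU above is, at its core, a brute-force nearest-neighbour search over binary descriptors: each query descriptor is matched independently, which is what makes the workload map naturally onto CUDA threads. A CPU sketch of that inner kernel follows; representing ORB-style descriptors as Python integers is an illustrative simplification.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary descriptors."""
    return bin(a ^ b).count("1")

def match_descriptors(queries, train):
    """For each query descriptor, return (index, distance) of its
    nearest neighbour in `train`. One iteration of the outer loop
    corresponds to one GPU thread in a CUDA implementation."""
    matches = []
    for q in queries:
        best = min(range(len(train)), key=lambda i: hamming(q, train[i]))
        matches.append((best, hamming(q, train[best])))
    return matches

m = match_descriptors([0b1010, 0b1111], [0b1000, 0b1011, 0b0111])
```

On a GPU, the XOR-and-popcount inside `hamming` becomes a handful of hardware instructions, which is why this stage benefits so strongly from parallel acceleration.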
This paper proposes two interpolation methods (a selective bilinear method and a plane method) for interpolating sparse LiDAR data. Experimental results obtained by applying them to a visual odometry algorithm demonstrate that the selective bilinear interpolation outperforms the plane method in both computation speed and accuracy.
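The bilinear step fills a missing depth from the four nearest valid LiDAR returns around it. A minimal sketch of that interpolation follows; the "selective" logic that decides when to fall back to the plane method is omitted, as the abstract does not specify it.

```python
def bilinear_depth(d00, d10, d01, d11, fx, fy):
    """Interpolate depth at fractional offsets (fx, fy) in [0, 1]
    within a cell whose corners carry LiDAR depths
    d00 (top-left), d10 (top-right), d01 (bottom-left), d11 (bottom-right)."""
    top = d00 * (1.0 - fx) + d10 * fx        # blend along the top edge
    bottom = d01 * (1.0 - fx) + d11 * fx     # blend along the bottom edge
    return top * (1.0 - fy) + bottom * fy    # blend between the two edges

# Center of a cell with corner depths 2, 4, 6, 8 metres -> 5 metres.
d = bilinear_depth(2.0, 4.0, 6.0, 8.0, 0.5, 0.5)
```

Its appeal for this application is cost: four multiplies and a few adds per pixel, versus fitting a plane to the neighbourhood.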
In this paper, we propose a novel method to estimate the relative camera motions of three consecutive images. Given a set of point correspondences in three views, the proposed method determines the fundamental matrix representing the geometric relationship between the first two views using the eight-point algorithm. Then, by minimizing the proposed cost function with the fundamental matrix, the relative camera motions over the three views are precisely estimated. The experimental results show that the proposed method outperforms conventional two-view and three-view geometry-based methods in terms of accuracy.
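The cost the method minimizes builds on the epipolar constraint x2^T F x1 = 0 that the eight-point algorithm enforces between the first two views. A minimal sketch of evaluating that constraint for one correspondence follows; the F and the points are illustrative values in homogeneous pixel coordinates, not data from the paper.

```python
def epipolar_residual(F, x1, x2):
    """Algebraic residual x2^T F x1 for a 3x3 F and homogeneous points."""
    Fx1 = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Fx1[i] for i in range(3))

# For a pure horizontal camera translation, F takes this well-known
# skew-symmetric form, and corresponding points on the same image row
# satisfy the constraint exactly.
F = [[0.0, 0.0,  0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0,  0.0]]
r = epipolar_residual(F, (100.0, 50.0, 1.0), (80.0, 50.0, 1.0))
```

The eight-point algorithm recovers F by stacking this constraint for at least eight correspondences into a linear system and taking the least-squares solution.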
Visual odometry has greatly progressed since non-linear optimization methods were introduced for pose estimation. Furthermore, RGBD visual odometry has become a hot research topic in the robotics and computer vision fields with the introduction of RGBD cameras. However, most RGBD-camera-based visual odometry methods are designed by extending monocular methods and pay little attention to integrating the different types of information provided by RGBD images. In this paper, we propose a novel hybrid-residual-based RGBD visual odometry in which three types of complementary information are integrated into a joint optimization model. The reprojection residuals, the photometric residuals, and the depth residuals are minimized together in the non-linear optimization process, where a robust cost function and outlier filtering are employed to enhance the robustness of the iteration while maintaining optimality. Experiments on publicly available RGBD datasets validate the advantages of integrating multiple types of information for RGBD visual odometry: the accuracy and robustness are greatly improved by our method.
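A hedged sketch of the joint model described above: three residual types enter one objective, each passed through a robust (Huber) cost so that outliers cannot dominate the iteration. The per-type weights and the Huber threshold delta are illustrative assumptions, not the paper's values.

```python
def huber(r, delta=1.0):
    """Robust cost: quadratic near zero, linear for large residuals,
    so a single outlier contributes at most linearly."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)

def hybrid_cost(reproj, photo, depth, w=(1.0, 0.1, 0.5)):
    """Combine reprojection, photometric, and depth residual lists
    into the single scalar objective the joint optimization minimizes."""
    terms = zip(w, (reproj, photo, depth))
    return sum(wk * sum(huber(r) for r in rs) for wk, rs in terms)

# Small residuals enter quadratically; the 3.0 photometric outlier
# is down-weighted to a linear contribution by the Huber cost.
cost = hybrid_cost([0.5, -0.2], [3.0], [0.1])
```

In the real system this scalar would be minimized over the camera pose with an iterative solver, with outlier filtering applied between iterations.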
A novel RGB-D visual odometry method for dynamic environments is proposed. The majority of visual odometry systems can only work in static environments, which limits their real-world applications. To improve the accuracy and robustness of visual odometry in dynamic environments, a Feature Regions Segmentation algorithm is proposed to resist the disturbance caused by moving objects. The matched features are divided into different regions to separate the moving objects from the static background, and the features in the largest region, which belong to the static background, are used to estimate the camera pose. The effectiveness of our visual odometry method is verified in a dynamic environment in our lab. Furthermore, an exhaustive experimental evaluation is conducted on benchmark datasets, covering both static and dynamic environments, against state-of-the-art visual odometry systems. The accuracy comparison shows that the proposed algorithm outperforms those systems in large-scale dynamic environments: our method tracks the camera movement correctly where others fail, while giving equally good performance in static environments. Experiments demonstrate that the proposed RGB-D visual odometry obtains accurate and robust estimates in dynamic environments.
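The segmentation idea above can be sketched compactly: group matched features whose frame-to-frame displacement vectors agree, then keep the largest group as the static background for pose estimation. Quantizing displacements into grid cells is a crude stand-in for the paper's actual region segmentation, used here only to make the selection step concrete.

```python
def segment_static_features(matches, cell=2.0):
    """matches: list of ((x1, y1), (x2, y2)) feature pairs across frames.
    Returns the indices of the largest motion-consistent group, which
    is assumed to be the static background."""
    groups = {}
    for idx, ((x1, y1), (x2, y2)) in enumerate(matches):
        # Quantize the displacement vector so similar motions collide.
        key = (round((x2 - x1) / cell), round((y2 - y1) / cell))
        groups.setdefault(key, []).append(idx)
    return max(groups.values(), key=len)

# Three background features share one displacement; the feature on a
# moving object shows a very different one and is rejected.
static = segment_static_features([
    ((0, 0), (1, 0)), ((5, 5), (6, 5)), ((9, 2), (10, 2)),
    ((3, 3), (20, 3)),
])
```

Only the surviving indices would then be fed to the pose estimator, which is what shields the optimization from moving-object correspondences.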
No standards are currently tagged "Visual Odometry"