IEEE Organizations related to Vision Sensors


No organizations are currently tagged "Vision Sensors"



Conferences related to Vision Sensors


No conferences are currently tagged "Vision Sensors"


Periodicals related to Vision Sensors


No periodicals are currently tagged "Vision Sensors"


Most published Xplore authors for Vision Sensors


Xplore Articles related to Vision Sensors


Live demonstration: In-vivo imaging of neural activity with dynamic vision sensors

2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), 2017

The demonstration shows the comparison of two novel Dynamic and Active Pixel Vision Sensors (DAVIS) in the context of a simulated neural imaging experiment. The first sensor, the SDAVIS, has a lower resolution (188×192) than the previous generation of DAVIS sensors but 10X higher temporal contrast sensitivity. The second sensor, BSIDAVIS, combines a higher resolution (346×260) with a ...


A Compact 3D Camera Suited for Mobile and Embedded Vision Applications

2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2014

Recent years have seen the widespread diffusion of 3D sensors, mainly based on active technologies such as structured light and Time-of-Flight, enabling the development of very interesting 3D vision applications. This paper describes a compact 3D camera based on passive stereo vision technology suited for mobile/embedded vision applications. Our 3D camera is very compact, the overall area of the processing ...
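Depth recovery in a passive stereo camera like this rests on the standard pinhole triangulation relation between focal length, baseline, and disparity. A minimal sketch of that relation; the focal length, baseline, and disparity values below are purely illustrative and not taken from the paper:

```python
# Depth from disparity for a calibrated, rectified stereo pair.
# Standard pinhole relation: Z = f * B / d, where
#   f = focal length in pixels, B = baseline in metres, d = disparity in pixels.
# All numeric values here are illustrative assumptions.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Return depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# Example: 500 px focal length, 10 cm baseline, 25 px disparity -> 2.0 m
print(depth_from_disparity(25.0, 500.0, 0.10))
```

This is also why a configurable baseline matters: for a fixed disparity error, a longer baseline B gives better depth resolution at range, at the cost of a larger minimum working distance.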


A 35μW 64 × 64 Pixels Vision Sensor Embedding Local Binary Pattern Code Computation

2018 IEEE International Symposium on Circuits and Systems (ISCAS), 2018

This paper presents a 64 × 64 pixel vision sensor embedding pixel-wise computation of the Local Binary Pattern (LBP) code, an oriented, binary contrast vector widely used for texture description and retrieval. For each pixel, the sensor estimates the LBP code over four neighbors in a 3×3 pixel kernel. The image processing is performed inside each ...
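The in-pixel computation can be mimicked in software. A minimal sketch of a 4-neighbor LBP code over a 3×3 kernel, assuming the four 4-connected neighbors (top, right, bottom, left) and a fixed bit order; the sensor's actual neighbor set and bit ordering may differ:

```python
import numpy as np

# Software sketch of a 4-neighbour Local Binary Pattern (LBP) code over a
# 3x3 kernel, analogous to the sensor's per-pixel computation. The choice of
# the four 4-connected neighbours and the bit order are assumptions.

def lbp4(image: np.ndarray) -> np.ndarray:
    """Return a 4-bit LBP code per interior pixel (borders stay 0)."""
    img = image.astype(np.int32)
    codes = np.zeros_like(img)
    c = img[1:-1, 1:-1]                    # centre of each 3x3 kernel
    top    = img[:-2, 1:-1] >= c           # neighbour above
    right  = img[1:-1, 2:]  >= c           # neighbour to the right
    bottom = img[2:,  1:-1] >= c           # neighbour below
    left   = img[1:-1, :-2] >= c           # neighbour to the left
    codes[1:-1, 1:-1] = 8 * top + 4 * right + 2 * bottom + 1 * left
    return codes

tile = np.array([[0, 9, 0],
                 [0, 5, 0],
                 [0, 9, 0]])
print(lbp4(tile)[1, 1])   # brighter above and below -> code 0b1010 = 10
```

Each code fits in 4 bits, matching the 4-bit/pixel output format the abstract describes.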


Vanishing Points in Road Recognition: A Review

2018 11th International Symposium on Computational Intelligence and Design (ISCID), 2018

Road recognition is a key technology of autonomous vehicles and a research hotspot in computer vision (CV). Road detection methods based on texture and vanishing points have become a research focus due to their strong robustness in complex road environments. However, texture-based algorithms have the disadvantages of high computational complexity and poor real-time performance. There is a ...


Spike Coding: Towards Lossy Compression for Dynamic Vision Sensor

2019 Data Compression Conference (DCC), 2019

The dynamic vision sensor (DVS), a bio-inspired camera, has shown great advantages in vision tasks thanks to its high dynamic range (HDR) and high temporal resolution (μs). However, how to lossily compress asynchronous spikes to meet the demands of large-scale transmission and storage while maintaining analysis performance remains an open problem. Towards this end, this paper proposes a lossy spike coding framework ...
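For context, DVS output is an asynchronous stream of (x, y, timestamp, polarity) events rather than frames. A toy illustration of one lossy degree of freedom, coarsening event timestamps; this is only a sketch of what "lossy" can mean for spike streams, not the coding framework the paper proposes:

```python
# DVS events are (x, y, timestamp_us, polarity) tuples. Quantising the
# timestamps to a coarser grid discards temporal precision (lossy), which
# makes the stream more compressible by any downstream entropy coder.
# This is an illustrative toy, not the paper's method.

from typing import List, Tuple

Event = Tuple[int, int, int, int]  # (x, y, t_us, polarity in {-1, +1})

def quantize_timestamps(events: List[Event], step_us: int) -> List[Event]:
    """Snap each event's timestamp down to a step_us grid (lossy in time)."""
    return [(x, y, (t // step_us) * step_us, p) for (x, y, t, p) in events]

stream = [(10, 20, 1003, 1), (11, 20, 1007, -1), (10, 21, 2999, 1)]
print(quantize_timestamps(stream, 100))
# -> [(10, 20, 1000, 1), (11, 20, 1000, -1), (10, 21, 2900, 1)]
```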



Educational Resources on Vision Sensors


IEEE-USA E-Books

  • Live demonstration: In-vivo imaging of neural activity with dynamic vision sensors

    The demonstration shows the comparison of two novel Dynamic and Active Pixel Vision Sensors (DAVIS) in the context of a simulated neural imaging experiment. The first sensor, the SDAVIS, has a lower resolution (188×192) than the previous generation of DAVIS sensors but 10X higher temporal contrast sensitivity. The second sensor, BSIDAVIS, combines a higher resolution (346×260) with a higher light sensitivity (quantum efficiency) thanks to its Back Side Illumination (BSI) manufacturing.

  • A Compact 3D Camera Suited for Mobile and Embedded Vision Applications

    Recent years have seen the widespread diffusion of 3D sensors, mainly based on active technologies such as structured light and Time-of-Flight, enabling the development of very interesting 3D vision applications. This paper describes a compact 3D camera based on passive stereo vision technology suited for mobile/embedded vision applications. Our 3D camera is very compact (the overall area of the processing unit is smaller than a business card), lightweight (it weighs less than 100 g including lenses), and power-efficient (about 2 W while processing stereo pairs at 30+ fps), and it can be easily configured with different baselines and processing units according to specific application requirements. The overall design is mapped on a low-cost FPGA, making the hardware design easily portable to other reconfigurable devices, and allows accurate and dense depth maps to be obtained in real time with state-of-the-art stereo vision algorithms.

  • A 35μW 64 × 64 Pixels Vision Sensor Embedding Local Binary Pattern Code Computation

    This paper presents a 64 × 64 pixel vision sensor embedding pixel-wise computation of the Local Binary Pattern (LBP) code, an oriented, binary contrast vector widely used for texture description and retrieval. For each pixel, the sensor estimates the LBP code over four neighbors in a 3×3 pixel kernel. The image processing is performed inside each pixel during the integration time, over a dynamic range of up to 98 dB, thanks to pixel-level auto-exposure control. The contrast detection relies on estimating the time difference between two pixels thresholded against two reference voltages. The four binary signed contrast vectors are delivered to the output, coded as 4 bits/pixel. The 0.35 μm CMOS sensor consumes 35 μW at 3.3 V and 15 fps.

  • Vanishing Points in Road Recognition: A Review

    Road recognition is a key technology of autonomous vehicles and a research hotspot in computer vision (CV). Road detection methods based on texture and vanishing points have become a research focus due to their strong robustness in complex road environments. However, texture-based algorithms suffer from high computational complexity and poor real-time performance; there is a trade-off between real-time performance and robustness. In this paper, we present a comparative review of vanishing points in road recognition.

  • Spike Coding: Towards Lossy Compression for Dynamic Vision Sensor

    The dynamic vision sensor (DVS), a bio-inspired camera, has shown great advantages in vision tasks thanks to its high dynamic range (HDR) and high temporal resolution (μs). However, how to lossily compress asynchronous spikes to meet the demands of large-scale transmission and storage while maintaining analysis performance remains an open problem. Towards this end, this paper proposes a lossy spike coding framework for DVS.

  • An Algorithm for Feature Extraction of Weld Groove based on Laser Vision

    The extraction of weld groove geometry feature information is a prerequisite for weld tracking technology. Existing seam feature extraction techniques work well on thin plates, but their extraction accuracy is low for thick plates and deep weld grooves. Based on a point-scanning laser sensor, a method of image preprocessing and feature extraction for thick plates is proposed, and the related algorithms are optimized. For the right-angle and oblique inflection points commonly found in weld grooves, the slope extreme value method and the oblique intercept method, respectively, are used to extract the weld seam feature points. Finally, the effectiveness of the proposed algorithm is verified by experiments.

  • Towards a grid based sensor fusion for visually impaired navigation using sonar and vision measurements

    This work presents an integrated approach that uses everyday mobile devices, sensors, and resources available from cloud computing providers to improve the navigation of visually impaired people. Input from heterogeneous sensors such as sonar, vision, orientation, and inertial sensors is used for object detection and recognition. The proposed approach entails grid-based obstacle localization using sonar sensors and precise identification of each obstacle by vision sensors. This coarse-to-fine obstacle identification process increases accuracy and decreases computational overhead, improving the walking experience of visually impaired users by making them more independent. The framework provides both audio and tactile feedback on obstacles in close proximity to the user. Improving the independent navigation of visually impaired people through sensor fusion is the key contribution of this research.

  • Simulation Of A Human Following Robot With Object Avoidance Function

    This study aims to propose an object avoidance algorithm for use in a human-following robot. The algorithm is tested in a well-known robot simulator called virtual robot experiment platform (V-REP), which includes built-in models for realistic sensors and robot platforms. This paper describes a basic path-planning algorithm for following a human target and then describes algorithms for obstacle avoidance and target reacquisition. Simulations are run for each situation that the human-following robot could encounter. Simulation results show that the algorithm is ready to be implemented in a physical robot prototype.

  • A Novel Cognitive Neuromorphic Polarimetric Dynamic Vision System (pDVS) with Enhanced Discrimination and Temporal Contrast

    In this preliminary study, a new cognitive vision architecture for a Polarimetric Dynamic Vision Sensor (pDVS) is presented. The system consists of a neuromorphic camera coupled to polarization filters; a spinning light-modulating wheel, operating at different speeds, is placed in front of a static object. The detector system's performance was tested at different modulation speeds under unpolarized and polarized conditions, and the acquired data were analyzed. The outcome of this study indicates that enhanced temporal contrast resolution can be achieved while offering unique discrimination capabilities, depending on the light polarization state.

  • Automation of a wheelchair mounted robotic arm using computer vision interface

    Assistive robotic devices have great potential to improve the quality of life of individuals with movement disorders. One such device is a robot arm that helps people with limited upper-body mobility perform daily tasks. Manual control of robot arms can be challenging for wheelchair users with upper extremity disorders. This research presents an autonomous wheelchair-mounted robotic arm built around a computer vision interface. The design uses a robotic arm with six degrees of freedom, an electric wheelchair, a computer system, and two vision sensors. One vision sensor detects the coarse position of colored objects placed randomly on a shelf in front of the wheelchair using a computer vision algorithm. The other vision sensor provides fine localization by ensuring the object is correctly positioned in front of the gripper. The arm is then controlled automatically to pick up the object and return it to the user. Tests were conducted by placing objects at different locations, and the performance of the robotic arm is tabulated. An average task completion time of 37.52 seconds is achieved.
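The coarse-to-fine pipeline described in the sonar/vision fusion entry above, where sonar readings populate an occupancy grid and only occupied cells are handed to the vision stage, can be sketched as follows. Grid dimensions, cell size, and the vision classifier stub are illustrative assumptions, not details from the paper:

```python
import math

# Coarse-to-fine obstacle pipeline sketch: sonar range readings mark cells
# in a 2-D occupancy grid; only occupied cells are passed to a (stubbed)
# vision classifier. All dimensions below are illustrative assumptions.

CELL_M = 0.25              # assumed cell size: 25 cm
GRID_W, GRID_H = 20, 20    # assumed grid extent

def mark_sonar_hit(grid, range_m, bearing_rad, origin=(10, 0)):
    """Mark the cell where a sonar echo at (range, bearing) lands."""
    cx = origin[0] + int(range_m * math.sin(bearing_rad) / CELL_M)
    cy = origin[1] + int(range_m * math.cos(bearing_rad) / CELL_M)
    if 0 <= cx < GRID_W and 0 <= cy < GRID_H:
        grid[cy][cx] = 1

def classify_with_vision(cx, cy):
    """Stub for the fine identification step performed by the vision sensor."""
    return f"obstacle at cell ({cx}, {cy})"

grid = [[0] * GRID_W for _ in range(GRID_H)]
mark_sonar_hit(grid, range_m=1.0, bearing_rad=0.0)   # echo straight ahead, 1 m
labels = [classify_with_vision(x, y)
          for y in range(GRID_H) for x in range(GRID_W) if grid[y][x]]
print(labels)  # -> ['obstacle at cell (10, 4)']
```

The design point is that the cheap sonar pass prunes the search space, so the expensive vision pass runs only on the handful of cells the sonar flagged.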



Standards related to Vision Sensors


No standards are currently tagged "Vision Sensors"


Jobs related to Vision Sensors
