381 resources related to Night vision
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
2020 IEEE 18th International Conference on Industrial Informatics (INDIN)
INDIN focuses on recent developments, deployments, technology trends, and research results in Industrial Informatics-related fields from both industry and academia.
The International Conference on Information Fusion is the premier forum for interchange of the latest research in data and information fusion, and its impacts on our society. The conference brings together researchers and practitioners from academia and industry to report on the latest scientific and technical advances.
The aim of the conference is to bring together leading scientists, thought leaders and forward-looking professionals from all domains of Intelligent Transportation Systems, to share ongoing research achievements, exchange views and knowledge, and contribute to advances in the field. The main theme of the conference will be “ITS within connected, automated and electric multimodal mobility systems and services”.
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
The IEEE Aerospace and Electronic Systems Magazine publishes articles concerned with the various aspects of systems for space, air, ocean, or ground environments.
Experimental and theoretical advances in antennas including design and development, and in the propagation of electromagnetic waves including scattering, diffraction and interaction with continuous media; and applications pertinent to antennas and propagation, such as remote sensing, applied optics, and millimeter and submillimeter wave techniques.
Broad coverage of concepts and methods of the physical and engineering sciences applied in biology and medicine, ranging from formalized mathematical theory through experimental science and technological development to practical clinical applications.
Broadcast technology, including devices, equipment, techniques, and systems for the production, distribution, transmission, and propagation aspects of broadcasting.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
2008 IEEE Conference on Technologies for Homeland Security, 2008
2010 13th International Conference on Information Fusion, 2010
2015 Second International Conference on Advances in Computing and Communication Engineering, 2015
2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009), 2009
2010 Sixth International Conference on Natural Computation, 2010
LPIRC: Developing Mobile Computer Vision Models
LPIRC: On Device Vision, Google AI-Style
DARPA's Vision for the Future of Computing: IEEE Rebooting Computing 2017
Vision Research Video
DOE Vision and Programmatic Activities in Advanced Computing Technologies: IEEE Rebooting Computing 2017
IEEE 5G Podcast with the Experts: What is your boldest vision of what 5G can bring us?
Fusion here, there and almost everywhere in computer vision - driving new advances in fuzzy integrals
Brooklyn 5G Summit 2014: CMRI Vision on 5G by Dr. Chih-Lin I
Robotics History: Narratives and Networks Oral Histories: Ruzena Bajcsy
Large-scale Neural Systems for Vision and Cognition
Robotics History: Narratives and Networks Oral Histories: Herman Bruyninckx
Tapping the Computing Power of the Unconscious Brain
Robotics History: Narratives and Networks Oral Histories: Bob Bolles
Interview with Sarah Audet - IEEE VIC Summit 2017
Brooklyn 5G Summit 2014: Dr. Seizo Onoe presents DOCOMO's 5G vision
Q&A with Connor Russomanno: IEEE Digital Reality Podcast, Episode 1
Q&A with Dr. Atilla Elci: IEEE Big Data Podcast, Episode 10
ICRA Keynote: Dr. Takeo Kanade
Robotics History: Narratives and Networks Oral Histories: Ernst Dickmanns
NoblePeak Vision is developing a new night vision technology based on visible to short-wave infrared (SWIR) imaging. Imaging in this band is important because of the "night glow", light emitted by the night sky between 1μm and 2μm wavelength. The night glow provides sufficient illumination to allow passive imaging even under moonless overcast conditions, but it cannot be detected with conventional imagers such as silicon CCDs or CMOS imagers, or with image intensifier tubes. NoblePeak's camera cores are based on novel, monolithic, visible-to-SWIR imaging arrays which incorporate germanium photodetectors. An innovative growth technique exploits dislocation trapping at the germanium/silicon interface to grow high-quality single-crystal germanium islands on a silicon substrate. NoblePeak has implemented a modified CMOS process at a high volume silicon foundry. Photodiodes formed in the germanium islands are integrated with the silicon transistors in the substrate and metal layers of the CMOS process. Imaging arrays at a 10μm pitch have been designed and fabricated, with the silicon photodiodes of a conventional CMOS imager replaced by germanium photodiodes. Dielectric isolation between the detectors eliminates electronic blooming. Over five hundred imagers of a planned PAL/NTSC/VGA format product will fit on a single 200 mm silicon wafer, allowing high volume production. Broad-band response from 400 nm to 1650 nm has been measured. Quantum efficiency (QE) greater than 40% is seen from 450 nm to 1450 nm with a peak QE of 75%, even without an anti-reflective coating. A noise floor of 10 nW/cm² has been measured in early imagers, with continuing improvements in progress. Imaging die have been packaged with a Peltier cooler and built into a camera evaluation kit. Features incorporated include 30 fps video capture, 12 bit readout, and exposure times from 150μs to 30 ms. Imaging arrays at 128×128 have been demonstrated in the camera kit. A 744×576 imager has been designed and is in fabrication.
Image fusion is used to improve target detection and identification. In human-observer applications it is useful to rank fusion methods according to how well they assist the observer in a decision task. Two images (medium- and long-wave infrared), acquired for each of a number of outdoor scenes, were fused by each of nine methods. For each scene, a set of observers assessed each of the 36 pairwise combinations of fused images, choosing from each pair the one that was deemed best for target identification. We used that set of preferences to rank the fusion methods for their effectiveness in the identification task. A classical technique for ranking these “discriminal processes” is Thurstone's Law of Comparative Judgment and its implementation as the Thurstone-Mosteller (TM) Method of Paired Comparisons, which is reviewed briefly here. To make meaningful statements about preferences, one should have a measure of uncertainty for each rank. The TM method, however, cannot readily provide such a measure. An alternative, the Bradley-Terry (BT) method, does permit calculation of confidence intervals for ranks. To our knowledge, BT has not previously been applied in the evaluation of fusion methods. We present results from a multi-observer, multi-view trial, evaluated using TM and BT. The methods yield similar rankings of the fusion methods. But the additional information provided by BT - that is, whether there are significant differences between the ranks - can have a substantial impact on the implementation of fusion in real systems. There could be meaningful tradeoffs among fusion methods - e.g., performance vs. computation time - that may not be exploited in the absence of those insights.
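For readers unfamiliar with the Bradley-Terry method mentioned above, a minimal sketch of its iterative maximum-likelihood fit (the classical Zermelo/minorization update) from a matrix of pairwise preference counts might look like the following. The win counts here are invented for illustration, not taken from the trial, and confidence-interval computation is omitted:

```python
import math

def bradley_terry(wins, n_items, iters=200):
    """Estimate Bradley-Terry strengths from a pairwise win matrix.

    wins[i][j] = number of times item i was preferred over item j.
    Returns strengths (normalized to sum to 1); sorting by strength
    gives the ranking. Uses the iterative MLE (Zermelo) update:
    p_i <- W_i / sum_j (n_ij / (p_i + p_j)).
    """
    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            w_i = sum(wins[i][j] for j in range(n_items) if j != i)
            denom = sum(
                (wins[i][j] + wins[j][i]) / (p[i] + p[j])
                for j in range(n_items) if j != i
            )
            new_p.append(w_i / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x / s for x in new_p]
    return p

# Hypothetical preference counts for three fusion methods:
# method 0 is preferred most often, method 2 least.
wins = [
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
]
strengths = bradley_terry(wins, 3)
ranking = sorted(range(3), key=lambda i: -strengths[i])
print(ranking)  # method 0 ranked first
```

The fitted strengths, not just the ordering, are what make the BT approach attractive: their asymptotic standard errors yield the per-rank confidence intervals the abstract highlights.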
This paper proposes a system for automatic detection of alphanumeric characters from the license plates of vehicles. The algorithm is designed to extract characters from low-contrast, night vision and normal images of license plates, and is implemented in real time using National Instruments Vision Builder for Automated Inspection. The algorithm in Vision Builder is interfaced with an Arduino: when a vehicle's license plate is recognized, a servo motor connected to the Arduino opens the gate; otherwise the gate remains closed. The system can be applied in traffic monitoring systems, parking areas, border areas, etc.
Making the transition between digital video imagery acquired by a focal plane array and imagery useful to a human operator is not a simple process. The focal plane array "sees" the world in a fundamentally different way than the human eye. Gamma correction has historically been used to help bridge the gap. The gamma correction process is a non-linear mapping of intensity from input to output where the parameter gamma can be adjusted to improve the imagery's visual appeal. In analog video systems, gamma correction is performed with analog circuitry and is adjusted manually. With a digital video stream, gamma correction can be provided using mathematical operations in a digital circuit. In addition to manual control, gamma correction can also be automatically adjusted to compensate for changes in the scene. We are interested in applying automatic gamma correction in systems such as night vision goggles where both low latency and power efficiency are important design parameters. We present our results in developing an automatic gamma correction algorithm to meet these requirements. The algorithm comprises two parts: determination of the desired value for gamma and application of the correction. The gamma value update is computed from statistical metrics of the imagery's intensity. HDL code implementing the measurement of the statistical metrics has been developed and tested in hardware. Both the computation of a gamma update and the application of the gamma correction were simplified to basic arithmetic operations and two specialized functions: logarithm, and exponentiation of a constant base by a variable exponent. We present approximation methods for both specialized functions, simplifying their implementation into basic arithmetic operations. The hardware implementations of the approximations allow the above requirements to be met. We evaluate the accuracy of the approximations against full-resolution double-precision floating-point operations, and present the final imagery for visual judgment of the impact of the approximations.
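As a rough illustration of the kind of automatic gamma update described above (not the authors' algorithm, whose exact metric and approximations are not given here), one can tie gamma to a single statistical metric of the frame, the mean intensity, so that the mean maps to a chosen mid-level target:

```python
import math

def auto_gamma(mean_intensity, target=0.5):
    """Choose gamma so the normalized mean intensity maps near a
    mid-level target: target = mean ** gamma, hence
    gamma = log(target) / log(mean). Using the mean as the statistical
    metric is an assumption for this sketch."""
    mean_intensity = min(max(mean_intensity, 1e-6), 1 - 1e-6)
    return math.log(target) / math.log(mean_intensity)

def apply_gamma(frame, gamma):
    """Apply out = in ** gamma to normalized pixel values in [0, 1]."""
    return [min(max(v, 0.0), 1.0) ** gamma for v in frame]

# A dark frame (mean 0.2) yields gamma < 1, which brightens it.
frame = [0.1, 0.2, 0.3]
g = auto_gamma(sum(frame) / len(frame))
out = apply_gamma(frame, g)
print(round(g, 3), [round(v, 3) for v in out])
```

In a hardware setting, the `log` and the power operation are exactly the two specialized functions the abstract mentions; replacing them with fixed-point approximations is what makes the low-latency, low-power implementation feasible.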
The single-channel dual-spectrum low light level (LLL) instrument uses a striped filter in its sole channel to obtain stripe images that carry information from two LLL wave bands. This paper describes the simulation of the LLL stripe image and the method of separating and compensating the stripe image into a complete LLL short-wave image and a complete LLL full-wave image. The LLL stripe image is simulated in software from dual-channel acquisitions of the same scene. The simulated LLL stripe image is separated and compensated by inter-frame and in-frame methods, and the complete LLL images are rebuilt. The complete LLL short-wave image and LLL full-wave image are then fused to simulate the output of the single-channel dual-spectrum LLL instrument. The simulation results indicate that the single-channel dual-spectrum color low light level instrument is effective, achieving the goal of dual-band low light level image fusion and object enhancement.
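To make the separate-and-compensate step concrete, here is a small sketch under an assumed layout (even rows sample one band, odd rows the other; the paper does not specify the stripe geometry). Each band's missing rows are filled by averaging its neighboring rows, a simple form of in-frame compensation:

```python
def separate_stripes(stripe_rows):
    """Split a row-striped dual-band image (even rows = band A,
    odd rows = band B; an assumed layout) into two full-height
    images, filling each band's missing rows by averaging its
    nearest available rows (in-frame compensation)."""
    h = len(stripe_rows)
    band_a = [None] * h
    band_b = [None] * h
    for y, row in enumerate(stripe_rows):
        (band_a if y % 2 == 0 else band_b)[y] = row
    for band in (band_a, band_b):
        for y in range(h):
            if band[y] is None:
                above = band[y - 1] if y > 0 else band[y + 1]
                below = band[y + 1] if y + 1 < h else band[y - 1]
                band[y] = [(a + b) / 2 for a, b in zip(above, below)]
    return band_a, band_b

# 4x2 stripe image: even rows from the short-wave band, odd rows full-wave.
stripe = [[10, 10], [40, 40], [12, 12], [44, 44]]
short_wave, full_wave = separate_stripes(stripe)
print(short_wave[1], full_wave[0])
```

The paper's inter-frame compensation would additionally borrow rows from adjacent video frames rather than interpolating within one frame; that is omitted here.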
The PM N-V/RSTA is procuring the Driver's Vision Enhancer (DVE) thermal imaging system for use in combat and tactical wheeled vehicles. The DVE uses uncooled forward-looking infrared technology, in contrast to the I² (image intensifier) technology currently in the field. During the development of the DVE several issues were raised regarding how specific aspects of system design were related to driver performance. As a result, DCS Corporation developed a data collection effort to provide the DVE project leader with needed performance data that could act as a foundation for making program decisions.
We propose an enhancement algorithm for a thermal vision system that provides the driver with thermal images of the forward scene under night and adverse daytime conditions. The thermal vision system has been developed to reduce vehicle-pedestrian and hazard accidents. To render thermal images of the road clearly, we propose enhancement algorithms based on quantification of driver visual behavior. Experimental results demonstrate the effectiveness of our algorithm compared with a conventional contrast enhancement method.
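The "conventional contrast enhancement method" such work is typically compared against is global histogram equalization. A generic sketch (the paper's driver-behavior weighting is not modeled here) shows why it is a natural baseline for low-contrast thermal imagery:

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization for integer pixel values in
    [0, levels). Stretches the cumulative distribution so the
    occupied intensity range spans the full output range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for c in hist:
        total += c
        cdf.append(total)
    n = len(pixels)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

# Low-contrast thermal-style frame clustered in a narrow band:
frame = [100, 101, 101, 102, 102, 102, 103, 104]
out = equalize_histogram(frame)
print(min(out), max(out))  # full 0..255 range after equalization
```

Its weakness, and the motivation for scene- or behavior-aware alternatives, is that it treats all pixels alike, so a hot road surface can dominate the mapping at the expense of pedestrians.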
In this paper we demonstrate an intersubband quantum dot detector coupled to an avalanche photodiode to improve the signal-to-noise ratio and increase the operating temperature of the device. Mid-infrared photodetectors, operating in the 50-400 meV (3-25 μm) regime, have a variety of potential applications in medical diagnostics, thermal imaging, night vision cameras for battlefield recognition systems, and chip-based detection of chemical warfare agents. Intersubband quantum dot detectors have been proposed as a promising technology due to their normal-incidence excitation and lower dark currents. However, their low quantum efficiency leads to lower detectivity and responsivity and limits their operating temperature to about 70-80 K.
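The equivalence of the two ranges quoted above follows from the photon energy-wavelength relation λ = hc/E, with hc ≈ 1239.84 eV·nm (equivalently 1239.84 meV·μm):

```python
def mev_to_um(e_mev):
    """Photon wavelength in micrometers for an energy in meV,
    via lambda = hc / E with hc = 1239.84 meV·μm (CODATA value
    of hc in eV·nm, rescaled)."""
    return 1239.84 / e_mev

# The abstract's 50-400 meV regime maps to roughly 3-25 μm:
lo, hi = mev_to_um(400), mev_to_um(50)
print(round(lo, 1), round(hi, 1))  # 3.1 and 24.8 μm
```

Note the inversion: the high-energy end (400 meV) corresponds to the short-wavelength end (about 3 μm), and vice versa.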
The authors describe an airborne laser system that provides a pilot with a display of obstacles such as power cables ahead of his aircraft. This system, LOCUS (Laser Obstacle and Cable Unmasking System), was flight-tested by the Naval Air Test Center at Patuxent River, where cable detection ranges of over 2 km were measured. It is found that real-time processing of the laser returns can achieve a real-time display of obstacles in the flight path sufficiently early to enable safe evasive action. In addition, real-time processing of the laser returns can provide unmapped obstacle data to enhance a digital terrain system in low-level covert terrain-following mode.