1,646 resources related to Object segmentation
- Topics related to Object segmentation
- IEEE Organizations related to Object segmentation
- Conferences related to Object segmentation
- Periodicals related to Object segmentation
- Most published Xplore authors for Object segmentation
The conference program will consist of plenary lectures, symposia, workshops and invited sessions of the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
The International Conference on Information Fusion is the premier forum for interchange of the latest research in data and information fusion, and its impacts on our society. The conference brings together researchers and practitioners from academia and industry to report on the latest scientific and technical advances.
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
The International Conference on Consumer Electronics (ICCE) is soliciting technical papers for oral and poster presentation at ICCE 2018. ICCE has a strong conference history coupled with a tradition of attracting leading authors and delegates from around the world. Papers reporting new developments in all areas of consumer electronics are invited. Topics around the major theme will be the content of special sessions and tutorials.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The IEEE Transactions on Automation Sciences and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. We welcome results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, ...
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
The design and manufacture of consumer electronics products, components, and related activities, particularly those used for entertainment, leisure, and educational purposes
Both general and technical articles on current technologies and methods used in biomedical and clinical engineering; societal implications of medical technologies; current news items; book reviews; patent descriptions; and correspondence. Special-interest departments cover students, law, clinical engineering, ethics, new products, society news, historical features, and government.
GRS Letters is expected to appeal to a wide range of remote sensing activities, publishing shorter, high-impact papers. Topics covered will remain within the IEEE Geoscience and Remote Sensing Society's field of interest: the theory, concepts, and techniques of science and engineering as they apply to the sensing of the earth, oceans, atmosphere, and space; and ...
International Multi Topic Conference, 2002. Abstracts. INMIC 2002., 2002
2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2018
Proceedings of the International Joint Conference on Neural Networks, 2003., 2003
Proceedings 1992 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1992
Proceedings of International Conference on Image Processing, 1997
Robotics History: Narratives and Networks Oral Histories: Gary Bradsky
Recording and Using 3D Object Models with RoboEarth
A 12-b, 1-GS/s 6.1mW Current-Steering DAC in 14nm FinFET with 80dB SFDR for 2G/3G/4G Cellular Application: RFIC Industry Showcase 2017
Virtual Reality Support for Teleoperation Using Online Grasp Planning
Computing Conversations: Bertrand Meyer: Eiffel Programming Language
Critical use cases for video capturing systems in autonomous driving applications
CB: Exploring Neuroscience with a Humanoid Research Platform
Michele Nitti: Searching the Social Internet of Things by Exploiting Object Similarity - Special Session on SIoT: WF-IoT 2016
Finger Mechanism Equipped with Omnidirectional Driving Roller (Omni-Finger)
Control of a Fully-Actuated Airship for Satellite Emulation
IROS TV 2019 - Pohang University of Science and Technology - Haptics and Virtual Reality Laboratory
Anticipating Human Activities for Reactive Robotic Response
Robotics History: Narratives and Networks Oral Histories: Alicia Casals
Geoffrey Hinton receives the IEEE/RSE James Clerk Maxwell Medal - Honors Ceremony 2016
EDOC 2010 - Prof. Dr. David Harel Presentation
Low Power Image Recognition: The Challenge Continues
EDOC 2010 - Dr. Benjamin Grosof Keynote
Experience ICRA 2015: Robot Challenges
Welcome to ICRA 2015: Robot Challenges
In this paper, we propose a method for 3D environment measurement and reconstruction based on LiDAR. Human life is surrounded by a variety of sensors; by combining different sensors, a designer can carry out environmental sensing and thereby approximate the human perceptual system. Velodyne LiDAR operates on the time-of-flight (ToF) principle using infrared laser light: since the speed of light is known, the relative distance between an object and the LiDAR can be calculated. We therefore propose a system for segmenting static objects from 3D LiDAR data. First, we decode the LiDAR output, which is transmitted as UDP data packets containing distance, angle, reflectivity, and timestamp information. From these values, a point cloud is constructed in the 3D environment's coordinate system. We then define a Euclidean clustering threshold to perform object segmentation. Finally, we design a digital chip with an additional lookup-table-based CORDIC circuit to convert angles to their sine and cosine values.
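The Euclidean clustering step described in the abstract can be sketched as follows. This is a minimal illustration assuming a decoded (N, 3) array of XYZ points; the function name and the naive BFS approach are assumptions, not the authors' implementation.

```python
import numpy as np

def euclidean_cluster(points, threshold):
    """Group points into clusters via BFS: a point joins a cluster if it
    lies within `threshold` of any point already in that cluster.

    points: (N, 3) array of XYZ coordinates (e.g. from decoded LiDAR packets).
    Returns a list of index lists, one per cluster.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            i = queue.pop()
            idx = np.array(sorted(unvisited), dtype=int)
            if idx.size == 0:
                break
            # Distances from point i to all still-unvisited points.
            d = np.linalg.norm(points[idx] - points[i], axis=1)
            for j in idx[d <= threshold]:
                unvisited.discard(int(j))
                queue.append(int(j))
                cluster.append(int(j))
        clusters.append(sorted(cluster))
    return clusters

# Two well-separated blobs should form two clusters.
pts = np.array([[0, 0, 0], [0.1, 0, 0], [0.2, 0.1, 0],
                [5, 5, 0], [5.1, 5, 0]])
print(len(euclidean_cluster(pts, threshold=0.5)))  # → 2
```

Production pipelines typically use a k-d tree for the neighbor search (as in PCL's Euclidean cluster extraction) rather than the brute-force distances shown here.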
Summary form only given. Vision and manipulation are inextricably intertwined in the primate brain. Tantalizing results from neuroscience are shedding light on the mixed motor and sensory representations used by the brain during reaching, grasping, and object recognition. We now know a great deal about what happens in the brain during these activities, but not necessarily why. Is the integration we see functionally important, or just a reflection of evolution's lack of enthusiasm for sharp modularity? We wish to instantiate these results in robotic form to probe their technical advantages and to find any lacunae in existing models. We believe it would be missing the point to investigate this on a platform where dextrous manipulation and sophisticated machine vision are already implemented in their mature form, and instead follow a developmental approach from simpler primitives. We begin with a precursor to manipulation, simple poking and prodding, and show how it facilitates object segmentation, a long-standing problem in machine vision. The robot can familiarize itself with the objects in its environment by acting upon them. It can then recognize other actors (such as humans) in the environment through their effect on the objects it has learned about. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.
The problems of object segmentation and binding are addressed within a biologically based network model capable of determining depth from occlusion. In particular, the authors discuss two subprocesses most relevant to segmentation and binding: contour binding and figure direction. They propose that these two subprocesses have intrinsic constraints that allow several underdetermined problems in occlusion processing and object segmentation to be uniquely solved. Simulations that demonstrate the role these subprocesses play in discriminating objects and stratifying them in depth are reported. The network is tested on illusory stimuli, with the network's response indicating the existence of robust psychological properties in the system.
Automatic recognition of various road distresses is of considerable interest since it facilitates preventive road maintenance before cracks and potholes become too severe, leading to economic benefits. The current approach of using human operators to categorize road distresses is both labor-intensive and time-consuming. We describe a two-step algorithm that automates road-distress identification with high accuracy. After constant-false-alarm-rate (CFAR) detection at the pixel level, subimage processing classifies each subimage of 64×64 pixels (each pixel is 1 mm by 1 mm) into crack, patch/pothole (P2), sealed crack, and false alarm. Object processing performs spatial clustering and object segmentation prior to final distress identification. The major challenge is integrating a number of signal and image processing algorithms to effectively deal with false alarms, film artifacts, and nonstationary distress characteristics and background. We explore how various signal and image processing concepts in signal projection, nonlinear filtering, feature optimization, image coding, and pattern recognition can be judiciously combined for computationally efficient and robust identification of road distresses. Our data analysis of 112 image frames (each frame contains 6144×4095 pixels) shows that the overall system performance at the object level is as follows: a P_D of 0.90 (average of 74 subimages per detected object), probability of correct distress identification of 0.96, and a P_FA of 0.79 false objects (average of 11 subimages per false object) per image frame.
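The pixel-level CFAR detection stage mentioned in the abstract can be illustrated with a minimal one-dimensional cell-averaging CFAR: each cell is compared against a scaled estimate of the local background taken from surrounding training cells. The window sizes, the `scale` factor, and the function name are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def ca_cfar(x, train=8, guard=2, scale=4.0):
    """1-D cell-averaging CFAR: flag cells that exceed `scale` times the
    mean of the surrounding training cells (guard cells excluded)."""
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # Training window: cells on both sides, skipping the guard band.
        window = np.r_[x[lo:max(0, i - guard)], x[min(n, i + guard + 1):hi]]
        if window.size and x[i] > scale * window.mean():
            hits[i] = True
    return hits

# A lone spike over a flat background is detected; the background is not.
signal = np.ones(50)
signal[25] = 20.0
print(np.flatnonzero(ca_cfar(signal)))  # → [25]
```

Because the threshold adapts to the local mean, the false-alarm rate stays roughly constant even when the background level drifts, which is the point of CFAR over a fixed global threshold.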
Human action recognition using a 3D camera for surveillance applications is a promising alternative to conventional 2D-camera-based surveillance. We propose a depth-image-based object segmentation scheme for improving human action recognition. Experimental results show that the average accuracy of dangerous-event detection improves by about 15% when the proposed object segmentation scheme is used.
As is well known, an image is an artifact that depicts visual perception. To extract information from or modify such images, we must perform operations on them. In this paper we present a methodology to segment hand images using modified k-means clustering with a threshold value and histogram analysis. Experimental results show 97% accuracy, indicating that the proposed methodology outperforms previous approaches.
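A minimal sketch of the idea of k-means-based segmentation with a derived threshold: two-cluster k-means on pixel intensities, with the midpoint of the final centroids used as the binarization threshold. This is a generic stand-in for the paper's modified k-means; the function name and all details are assumptions.

```python
import numpy as np

def kmeans_binary_segment(gray, iters=10):
    """Segment a grayscale image into two classes (e.g. hand vs background)
    with k=2 k-means on pixel intensities; the midpoint of the two centroids
    serves as the threshold."""
    pixels = gray.astype(float).ravel()
    c0, c1 = pixels.min(), pixels.max()          # initial centroids
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then update centroids.
        assign = np.abs(pixels - c0) > np.abs(pixels - c1)  # True → cluster 1
        if pixels[~assign].size:
            c0 = pixels[~assign].mean()
        if pixels[assign].size:
            c1 = pixels[assign].mean()
    threshold = (c0 + c1) / 2.0
    return (gray > threshold).astype(np.uint8)   # 1 = brighter cluster

# Bright "hand" patch on a dark background: 16 foreground pixels.
img = np.zeros((8, 8))
img[2:6, 2:6] = 200
print(kmeans_binary_segment(img).sum())  # → 16
```

On intensity alone, this converges to the same kind of bimodal split that Otsu's method finds; the histogram analysis the paper mentions would refine the initial centroids or the final threshold.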
This paper presents a real-time object segmentation approach for visual object detection in dynamic scenes. The approach is based on a novel general object feature, defined by subtly combining multiple low-level features with the uniqueness of the target object. The object segmentation approach is then applied to detect vehicles and lane markings in dynamic scenes. Experimental results on a test dataset extracted from real traffic scenes on highways and urban roads show that the proposed approach achieves a high detection rate at an extremely low time cost.
In many situations, we must read characters on the object being inspected. Typically the text is horizontal, in which case recognition is straightforward and the methods are mature. Sometimes, however, the characters are distributed along a circle or ellipse and are distorted, which makes them much harder to recognize. This paper addresses how to read distorted characters arranged on an ellipse, covering polar transformation, rotation transformation, feature extraction, character segmentation, character recognition, and the related MLP algorithm. Results and analysis are given at the end of the paper.
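The polar transformation step can be sketched as a nearest-neighbor unwrap of the annulus that holds the circular text, turning it into a horizontal strip (rows = radius, columns = angle) on which ordinary horizontal character segmentation applies. The center, radii, and function name below are illustrative assumptions.

```python
import numpy as np

def unwrap_annulus(img, center, r_in, r_out, n_theta=360):
    """Sample the annulus between r_in and r_out at n_theta angles,
    returning a (r_out - r_in, n_theta) strip via nearest-neighbor lookup."""
    cy, cx = center
    radii = np.arange(r_in, r_out)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # Map each (radius, angle) sample back to image coordinates.
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]

# A bright ring maps to a bright horizontal band in the unwrapped strip.
img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[np.abs(np.hypot(yy - 32, xx - 32) - 20) < 2] = 255
strip = unwrap_annulus(img, center=(32, 32), r_in=15, r_out=25)
print(strip.shape)  # → (10, 360)
```

For an ellipse rather than a circle, the same mapping applies with the radius scaled per axis before the sine/cosine projection; OpenCV's `cv::warpPolar` offers an interpolated version of the circular case.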
Typical segmentation algorithms are challenged by background noise and by variation in object sizes and positions across video frames. In this paper, we propose a new object segmentation method based on both motion and distance information to increase segmentation reliability and to suppress background noise. Two new concepts are described. The first is a distance-based background detection algorithm that removes the impact of noisy backgrounds without using reference frames. The second is a depth/motion-based segmentation that accurately captures objects of different sizes. The proposed algorithm successfully increases the accuracy and reliability of object segmentation and motion detection.
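The combination of motion and distance cues described above can be sketched at its simplest as an AND of two per-pixel tests on consecutive depth frames. The thresholds and function name are hypothetical; the paper's actual algorithm is more elaborate.

```python
import numpy as np

def segment_moving_near(depth_prev, depth_curr, motion_thresh=0.1, max_dist=3.0):
    """A pixel is foreground if its depth changed between frames (motion cue)
    AND it lies closer than `max_dist` (distance-based background rejection,
    requiring no stored reference frame)."""
    motion = np.abs(depth_curr - depth_prev) > motion_thresh
    near = depth_curr < max_dist
    return motion & near

prev = np.full((4, 4), 10.0)      # static scene, far background (10 m)
curr = prev.copy()
curr[1:3, 1:3] = 1.5              # an object enters, 1.5 m away
print(segment_moving_near(prev, curr).sum())  # → 4
```

The distance test is what lets the method drop noisy far-field background without a reference frame: distant pixels are rejected outright, however much their depth readings flicker.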
No standards are currently tagged "Object segmentation"