127 resources related to Cognitive Object
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
FUZZ-IEEE 2021 will represent a unique meeting point for scientists and engineers, both from academia and industry, to interact and discuss the latest enhancements and innovations in the field. The topics of the conference will cover all the aspects of theory and applications of fuzzy sets, fuzzy logic and associated approaches (e.g. aggregation operators such as the Fuzzy Integral), as well as their hybridizations with other artificial and computational intelligence techniques.
2020 IEEE 18th International Conference on Industrial Informatics (INDIN)
INDIN focuses on recent developments, deployments, technology trends, and research results in Industrial Informatics-related fields from both industry and academia.
The International Conference on Information Fusion is the premier forum for interchange of the latest research in data and information fusion, and its impacts on our society. The conference brings together researchers and practitioners from academia and industry to report on the latest scientific and technical advances.
The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.
Broad coverage of concepts and methods of the physical and engineering sciences applied in biology and medicine, ranging from formalized mathematical theory through experimental science and technological development to practical clinical applications.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
Educational methods, technology, and programs; history of technology; impact of evolving research on education.
Papers on application, design, and theory of evolutionary computation, with emphasis given to engineering systems and scientific applications. Evolutionary optimization, machine learning, intelligent systems design, image processing and machine vision, pattern recognition, evolutionary neurocomputing, evolutionary fuzzy systems, applications in biomedicine and biochemistry, robotics and control, mathematical modelling, civil, chemical, aeronautical, and industrial engineering applications.
 Proceedings. 11th IAPR International Conference on Pattern Recognition, 1992
19th International Workshop on Robotics in Alpe-Adria-Danube Region (RAAD 2010), 2010
2011 15th International Conference on Advanced Robotics (ICAR), 2011
2015 6th International Conference on Automation, Robotics and Applications (ICARA), 2015
2016 49th Hawaii International Conference on System Sciences (HICSS), 2016
The Largest Cognitive Systems will be Optoelectronic: an ICRC 2018 Talk
Handling of a Single Object by Multiple Mobile Robots based on Caster-Like Dynamics
Recording and Using 3D Object Models with RoboEarth
Robotics History: Narratives and Networks Oral Histories: Raja Chatila
IEEE 125th Anniversary Media Event: Cognitive Computing
Virtual Reality Support for Teleoperation Using Online Grasp Planning
Computing Conversations: Bertrand Meyer: Eiffel Programming Language
Critical use cases for video capturing systems in autonomous driving applications
Robotics History: Narratives and Networks Oral Histories: Barbara Hayes-Roth
Self-Organization with Information Theoretic Learning
Active Space-Body Perception and Body Enhancement using Dynamical Neural Systems
TryEngineering Careers with Impact: Mataric
CB: Exploring Neuroscience with a Humanoid Research Platform
Neuromorphic Mixed-Signal Circuitry for Asynchronous Pulse Processing - Peter Petre: 2016 International Conference on Rebooting Computing
Neural Cognitive Robot: Learning, Memory and Intelligence
Michele Nitti: Searching the Social Internet of Things by Exploiting Object Similarity - Special Session on SIoT: WF-IoT 2016
Computing Paradigms: The Largest Cognitive Systems Will Be Optoelectronic - Jeff Shainline - ICRC 2018
Control of a Fully-Actuated Airship for Satellite Emulation
Finger Mechanism Equipped with Omnidirectional Driving Roller (Omni-Finger)
Proposes an active vision system for saccadic camera gaze shifts and explorative scene analysis as a new integral approach to image understanding. The model consists of two sensory subsystems: preattentive peripheral feature detection and high resolution foveal image identification based on a hypercolumnar representation. Visual objects are non-explicitly stored in two sparsely coded associative memories separating fixation locations for identities of foveal views. An egocentric interest map integrates bottom-up and top-down information sources and decides when to generate a camera movement. A selective masking of preattentive processes supports a cooperation with cognitive object recognition. The system is easily extendible, copes with occlusions and distortions and can be driven in different modes for exploration tasks. This model is able to perform visual search and reproduce findings in the human visual system.
Building knowledge for robots can be tedious, especially when focused on object class recognition in home environments, where hundreds of everyday objects, some with huge intra-class variability, can be found. Object recognition, and especially object class recognition, is a key capability in home robotics. Deployable results from state-of-the-art algorithms are not yet attainable when the number of classes increases and near real-time performance is the goal. Hence, we propose to exploit contextual knowledge by using sensor and hardware constraints from the robotics and home domains, and show how to use the internet as a source for obtaining the data required to build a fast, vision-based object categorization system for robotics. In this paper, we give an overview of the available constraints and the advantages of using a robot to set priors for object classification, and propose a system that covers automated model acquisition from the web, domain simulation, descriptor generation, 3D data processing from dense stereo, and classification for a not-too-distant robot scenario in an internet-connected home environment. We show that this system can be used in home robotics in a fast and robust way to recognize object classes commonly found in such environments, including but not limited to chairs and mugs. We also discuss challenges and missing pieces in the framework, as well as useful extensions.
Recognition by Components (RBC) has been one of the most conceptually significant frameworks for modeling human visual object recognition. Extension of the model to practical robotic applications has traditionally been limited by the poor response of conventional inexpensive stereo cameras in textureless areas, and by the need for expensive laser-based sensor systems to compensate for this deficiency. The recent availability of RGB-D sensors such as the PrimeSense sensor has opened new avenues for their practical use in robotic applications such as grasping. In this paper, we present novel algorithms for segmenting objects and parts from range images, with extensions based on semantic cues that yield robust part detection. The detected parts are then parameterized using a superquadric-based fitting framework and classified into one of several generic shapes. The categorization of the parts enables rules for grasping the object. This Grasping by Components (GBC) scheme is a natural extension of the RBC framework and provides a scalable framework for grasping objects. The scheme also permits grasping novel objects in the scene, provided at least one grasp affordance is known.
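The superquadric fitting mentioned in this abstract is typically built on the standard superquadric inside-outside function. The sketch below evaluates that function; the parameter names (size parameters a1..a3, shape exponents e1, e2) follow common superquadric notation and are an assumption, not taken from the paper itself.

```python
def superquadric_F(x, y, z, a=(1.0, 1.0, 1.0), e=(1.0, 1.0)):
    """Inside-outside function of a superquadric.

    F < 1: point lies inside the shape; F == 1: on the surface;
    F > 1: outside. With a = (1,1,1) and e = (1,1) this reduces
    to the unit sphere x^2 + y^2 + z^2.
    """
    a1, a2, a3 = a   # size along x, y, z
    e1, e2 = e       # shape exponents (north-south, east-west)
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)
```

A fitting framework would minimize the deviation of F from 1 over the range points of a segmented part to recover the part's size and shape parameters.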
Affordance features are increasingly used in a number of robotic applications. An open affordance framework called AfNet defines over 250 objects in terms of 35 affordance features that are grounded in visual perception algorithms. While AfNet is intended for use with cognitive visual recognition systems, an extension of the framework, called AfRob, delivers an affordance-based ontology targeted at robotic applications. Applications in which AfRob has been used include (a) top-down, task-driven saliency detection, (b) cognitive object recognition, and (c) task-based object grasping and manipulation. In this paper, we use AfRob as a base for building topological maps intended for robotic navigation. Traditional approaches to robotic navigation use metric maps, topological maps, or hybrid systems that combine the two at different levels of resolution or granularity. While metric and grid-based maps provide highly accurate results for optimal path-planning schemes, their large space-time requirements for computation and storage reduce real-time applicability. Topological maps, on the other hand, being graph-based abstract structures, are extremely light and convenient for goal-driven navigation, but suffer from a lack of resolution, poor self-localization, and weak loop closing. Both approaches show severe restrictions in dynamic environments, in which objects that serve as features for the map-building procedure are moved or removed from the scene over the period in which the robot is used. This paper presents a novel approach to topological map building that takes into account affordance features, which help build lightweight, high-resolution, holistic and cognitive maps by predicting positional and functional characteristics of unseen objects. In addition, these features enable a cognitive approach to handling dynamic scene content, providing enhanced loop closing and self-localization over traditional topological map building. These features also offer cues for place learning and functional room-unit classification, thereby providing superior task-based path planning. Since these features are easy to detect, fast map building is possible. Results on synthetic and real scenes demonstrate the benefits of the proposed approach.
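As a rough illustration of the topological-map idea in this abstract, a graph of places annotated with affordance labels can be stored as a plain adjacency structure and queried with breadth-first search. All node names and affordance labels below are invented for illustration and are not drawn from AfRob.

```python
from collections import deque

# Hypothetical affordance-annotated topological map: nodes are places,
# edges are traversable connections, and each node carries the affordances
# observed there.
graph = {
    "kitchen":     {"neighbors": ["hallway"],                "affordances": {"cut", "contain", "heat"}},
    "hallway":     {"neighbors": ["kitchen", "living_room"], "affordances": set()},
    "living_room": {"neighbors": ["hallway"],                "affordances": {"sit", "support"}},
}

def rooms_with_affordance(graph, affordance):
    """Return nodes whose annotation includes the given affordance."""
    return [n for n, d in graph.items() if affordance in d["affordances"]]

def shortest_route(graph, start, goal):
    """Breadth-first search over the topological graph (unweighted edges)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]["neighbors"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable
```

Because the graph is abstract and edge-light, queries such as "route to a place where one can sit" stay cheap, which is the space-time advantage over metric maps that the abstract describes.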
In this paper we advance the "Personal Equation of Interaction", examining individual differences between users that can be used to predict the accuracy of analysts' reasoning during visual analysis. We report two studies which expand the use of the Personal Equation of Interaction (PEI) beyond its current scope of predicting the accuracy of interface learning tasks to predicting the outcome of visuo-cognitive object categorization. These studies extend the research of Yamauchi & Markman using the dual learning theory of Ashby & Maddox. Because visual reasoning is ubiquitous in the human experience and integral to big-data visualization analysis, this research bridges the divide between psychological categorization theory and visual data analysis using composite glyphs. We define a psychometric measure that predicts the accuracy of composite glyph categorization and discuss its impact going forward.
Cognitive sharing of objects is fundamental in a heterogeneous robot system composed of an Unmanned Aerial Vehicle (UAV) and a ground robot. Since the viewpoint of a UAV differs greatly from that of a ground robot, the two may perceive the same objects differently, which makes cognitive sharing difficult to realize. In this paper, we propose a cognitive sharing method between a UAV and a ground robot based on sharing Geometric Relation-based Triangle Representations (GRTR). The paper describes a robust method for a UAV and a ground robot to identify the same object among similar objects without sharing appearance information. To cope with the increasing computational cost of recognizing objects in the ROI, entropy evaluation is employed to evaluate and select unique representations. Finally, we demonstrate the proposed method with robots in the real world.
Visual recognition in multi-robot systems faces a peculiar problem: observations made from different viewpoints yield different perspectives. Because a discriminable representation of a target and its surroundings is lacking across viewpoints, realizing cognitive sharing of an object among robots in an unstructured environment is a challenging issue. In this paper, we propose novel description algorithms for the target representation based on ambiguity minimization of the peripheral context. The target is represented by a labeled graph whose structure is determined by minimizing a metric of representational ambiguity, using an entropy evaluation with metaheuristics. Experimental results show the significant improvement in cognitive sharing achieved by the proposed method.
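The entropy evaluation mentioned in the two abstracts above can be illustrated generically: a candidate representation is less ambiguous when the distribution of things it matches is more peaked, which Shannon entropy captures. This is only a sketch of that idea, not the papers' actual metric.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a discrete label distribution.

    Lower entropy = more concentrated matches = a less ambiguous
    candidate representation under this illustrative criterion.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A selection loop would compute this score for each candidate representation's match distribution and keep the candidates with the lowest entropy.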
Intelligent space is a space of distributed sensory intelligence and actuators. Its basic component is the distributed intelligent network device (DIND), responsible for intelligent sensing and estimation. It is important for an intelligent space to be aware of the state of its internal environment, such as the class and position of the objects it contains; the ability to recognize objects is therefore necessary. Existing cognitive computational systems such as the mammalian cortex perform extremely well at visual object recognition. Psychological experiments have shown that image features such as edges, and especially corners, are of great importance in cognitive object recognition. Inspired by cognitive recognition systems, the proposed cognitive informatics model addresses the problem of vertex and corner detection. The model integrates the previously developed visual feature array (VFA), a cognitive model of oriented edge detection. The ultimate goal of the presented model is to provide suitable input for a cognitive object recognition system.
Randomness is an objective attribute, but fuzziness and roughness are related to human cognitive activities and are collectively known as imprecision. Imprecision is the cost and defect that humans must bear when cognizing uncertain objects: fuzziness is the loss of a clear understanding of a cognitive object, accepted in order to obtain the possibility and efficiency of cognition; roughness is the lack of an exact description of a cognitive object, which must instead be approximated by two subsets of the existing knowledge set. Fuzziness derives from humans' classification of cognitive objects, and roughness from the imperfection of existing knowledge. Imprecision has three primary properties: it is subjective, classification-based, and reliant on existing knowledge. Decreasing imprecision should therefore start with the existing knowledge, increasing its quantity and improving its orderliness.
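The "two subsets of the existing knowledge set" used to approximate a rough concept correspond, in Pawlak's rough-set terminology, to the lower and upper approximations. The sketch below computes both from a partition of the universe into equivalence classes; the partition and target set in the test are made up for illustration.

```python
def rough_approximations(universe_partition, target):
    """Pawlak rough-set approximations of a target concept.

    The lower approximation collects equivalence classes entirely
    contained in the target (elements certainly in the concept); the
    upper approximation collects classes that merely intersect it
    (elements possibly in the concept). The gap between the two is
    the 'roughness' of the concept under the available knowledge.
    """
    target = set(target)
    lower, upper = set(), set()
    for block in universe_partition:
        b = set(block)
        if b <= target:   # block fully inside the concept
            lower |= b
        if b & target:    # block overlaps the concept
            upper |= b
    return lower, upper
```

When the partition (the existing knowledge) is refined, the lower and upper approximations converge, which matches the text's point that imprecision is decreased by improving the existing knowledge.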
Pointing gestures are used to share attention on an object, and users want to point at objects intuitively. We propose a spotlighting system, named Spotlighting, for displaying a focus area. It consists of a gesture-based pointing system and an operation interface for the focus area. A user can move the focus area by moving one hand while keeping it closed, and can fix the focus area on the pointed position by opening the hand. With the proposed system, a user can point at all or part of an object, so users can easily share a cognitive object with other people. We conducted an experiment at an actual museum to confirm the effect of the spotlighting, and administered a questionnaire comparing it with other methods. All visitors answered "easy to understand" or "relatively easy to understand" when using spotlighting.
No standards are currently tagged "Cognitive Object"