899 resources related to Virtual Object
- Topics related to Virtual Object
- IEEE Organizations related to Virtual Object
- Conferences related to Virtual Object
- Periodicals related to Virtual Object
- Most published Xplore authors for Virtual Object
2020 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
CVPR is the premier annual computer vision event comprising the main conference and several co-located workshops and short courses. With its high quality and low cost, it provides an exceptional value for students, academics and industry researchers.
The International Conference on Image Processing (ICIP), sponsored by the IEEE Signal Processing Society, is the premier forum for the presentation of technological advances and research results in the fields of theoretical, experimental, and applied image and video processing. ICIP 2020, the 27th in the series that has been held annually since 1994, brings together leading engineers and scientists in image and video processing from around the world.
The International Conference on Robotics and Automation (ICRA) is the IEEE Robotics and Automation Society’s biggest conference and one of the leading international forums for robotics researchers to present their work.
The Conference focuses on all aspects of instrumentation and measurement science and technology research, development and applications. The list of program topics includes but is not limited to: Measurement Science & Education, Measurement Systems, Measurement Data Acquisition, Measurements of Physical Quantities, and Measurement Applications.
The Annual IEEE PES General Meeting will bring together over 2900 attendees for technical sessions, administrative sessions, super sessions, poster sessions, student programs, awards ceremonies, committee meetings, tutorials and more.
The theory, design and application of Control Systems. It shall encompass components, and the integration of these components, as are necessary for the construction of such systems. The word 'systems' as used herein shall be interpreted to include physical, biological, organizational and other entities and combinations thereof, which can be represented through a mathematical symbolism. The Field of Interest shall ...
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
Telephone, telegraphy, facsimile, and point-to-point television, by electromagnetic propagation, including radio; wire; aerial, underground, coaxial, and submarine cables; waveguides, communication satellites, and lasers; in marine, aeronautical, space and fixed station services; repeaters, radio relaying, signal storage, and regeneration; telecommunication error detection and correction; multiplexing and carrier techniques; communication switching systems; data communications; and communication theory. In addition to the above, ...
Computer, the flagship publication of the IEEE Computer Society, publishes peer-reviewed technical content that covers all aspects of computer science, computer engineering, technology, and applications. Computer is a resource that practitioners, researchers, and managers can rely on to provide timely information about current research developments, trends, best practices, and changes in the profession.
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
2016 International Conference on Information Networking (ICOIN), 2016
2010 International Conference on High Performance Computing & Simulation, 2010
2008 International Conference on Computer Science and Software Engineering, 2008
2013 International Conference on Cyberworlds, 2013
2009 ICCAS-SICE, 2009
IROS TV 2019 - Pohang University of Science and Technology - Haptics and Virtual Reality Laboratory
Virtual World Symposium 2011 - DLP at Virtual World
Virtual World Symposium - Virtual World at Intel
Virtual World Symposium - Educational Work
Handling of a Single Object by Multiple Mobile Robots based on Caster-Like Dynamics
How to Become a Virtual Speaker
Virtual World Symposium - Welcome Address
Virtual World Symposium - IEEE Islands
Virtual World Symposium - Moon World
Recording and Using 3D Object Models with RoboEarth
Virtual World Symposium 2011 - Collaborative Work
Virtual World Symposium - Second Life Tech Tour
Virtual World Symposium 2011 - CoLab and Mars
Virtual World Symposium - Project Direct
Computing Conversations: Bertrand Meyer: Eiffel Programming Language
Critical use cases for video capturing systems in autonomous driving applications
IMS 2012 Microapps - Virtual Flight Testing of Radar System Performance - Daren McClearnon, Agilent EEsof
Reconstructed Brain Models for Virtual Bodies and Robots
EMBC 2011 - Course - Virtual Reality and Robotics in Neurorehabilitation - William Zev Rymer
Intelligent service provisioning in the Internet of Things infrastructure can be achieved by discovering and composing the most relevant virtual objects and by autonomic modification of context information, which requires learning the system behavior through a steady association between each virtual object and its real-world object. To meet these challenges, Web-of-Objects enables objects to be deployed, maintained and operated in the Internet of Things by virtualizing real-world objects with the use of a semantic ontology. The learning model can be represented in terms of a composite virtual object, formed by combining the functionalities of multiple virtual objects. The different functionalities at the virtual object level, and their interaction, enable cognition in the virtual object. The creation and functional architecture of the virtual object in Web-of-Objects are discussed. For the efficient discovery and composition of virtual objects, a Web-of-Objects based learning model is proposed in this paper. Finally, to realize service composition, a use-case scenario is studied.
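The idea of a composite virtual object, i.e. one object that combines the functionalities of several virtual objects, can be sketched as follows. This is a minimal illustration, not the paper's architecture; the class and functionality names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Semantic stand-in for one real-world object (names illustrative)."""
    name: str
    functionalities: set = field(default_factory=set)

@dataclass
class CompositeVirtualObject:
    """A composite VO exposes the union of its members' functionalities."""
    members: list

    @property
    def functionalities(self):
        out = set()
        for vo in self.members:
            out |= vo.functionalities
        return out

def compose(registry, required):
    """Greedily pick VOs until every required functionality is covered."""
    chosen, missing = [], set(required)
    for vo in registry:
        overlap = vo.functionalities & missing
        if overlap:
            chosen.append(vo)
            missing -= overlap
        if not missing:
            return CompositeVirtualObject(chosen)
    return None  # no composition covers the request

lamp = VirtualObject("lamp", {"light"})
sensor = VirtualObject("sensor", {"presence", "temperature"})
cvo = compose([lamp, sensor], {"light", "presence"})
print(sorted(cvo.functionalities))  # ['light', 'presence', 'temperature']
```

A real Web-of-Objects deployment would drive this composition from an ontology rather than a flat functionality set, but the covering logic is the same.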
This paper presents an enhanced virtual object management scheme for a personalized ubiquitous computing service, Virtual Personal World (VPW), built on a peer-to-peer network. In the proposed scheme, more detailed descriptions of virtual objects are added to the existing definitions of their communications and data, and a two-phase virtual object discovery scheme is then applied. The management scheme can therefore be individualized more effectively according to users' preferences and behaviors, and it provides a fast way of finding virtual objects. For performance verification, we implemented a prototype of the proposed scheme as part of VPW and compared the speed of finding virtual objects against other DHT P2P networks. Simulation results show that the proposed scheme achieves performance improvements over other P2P networks.
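A two-phase discovery over a DHT might look like the sketch below: phase one checks a personalized local cache, phase two falls back to the overlay. The consistent-hash ring is a generic stand-in for whatever DHT the VPW prototype uses; peer names and key formats are invented.

```python
import hashlib
from bisect import bisect_right

def _h(key):
    # Map a string key onto the identifier space via SHA-1.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Minimal consistent-hash ring standing in for the DHT overlay."""
    def __init__(self, peers):
        self.ring = sorted((_h(p), p) for p in peers)

    def lookup(self, key):
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, _h(key)) % len(self.ring)
        return self.ring[idx][1]

def discover(obj_id, local_cache, ring):
    """Phase 1: the user's personalized cache; phase 2: the DHT overlay."""
    if obj_id in local_cache:
        return ("local", local_cache[obj_id])
    return ("dht", ring.lookup(obj_id))

ring = Ring(["peerA", "peerB", "peerC"])
cache = {"vo:desk-lamp": "peerA"}
print(discover("vo:desk-lamp", cache, ring))  # ('local', 'peerA')
```

Populating the cache from observed user behavior is what makes the first phase "personalized"; a hit avoids the O(log n) hops a real DHT lookup would cost.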
A new virtual object generation method for AR applications based on image vector recognition is introduced in this paper. Data acquisition and virtual object generation are completed automatically throughout the process, including image smoothing, image edge detection, target silhouette tracking, vector curve compression and real-time rendering. This method improves the degree of automation of virtual object generation in AR applications, compared with using models pre-built in digital content creation software. The key steps of the approach are described in detail and a practical application is also given.
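The abstract does not name its vector curve compression algorithm; a standard choice for reducing a traced silhouette to a compact polyline is Ramer-Douglas-Peucker, sketched here for illustration.

```python
def _point_line_dist(p, a, b):
    # Perpendicular distance from p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def douglas_peucker(points, eps):
    """Compress a polyline, keeping points farther than eps from the chord."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    i_max, d_max = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], a, b)
        if d > d_max:
            i_max, d_max = i, d
    if d_max <= eps:
        return [a, b]  # all intermediate points are within tolerance
    left = douglas_peucker(points[: i_max + 1], eps)
    right = douglas_peucker(points[i_max:], eps)
    return left[:-1] + right  # drop the duplicated split point

# A nearly straight traced silhouette collapses to its endpoints.
silhouette = [(0, 0), (1, 0.05), (2, -0.04), (3, 0.03), (4, 0)]
print(douglas_peucker(silhouette, 0.1))  # [(0, 0), (4, 0)]
```

The tolerance eps trades model fidelity for vertex count, which matters for the real-time rendering step the abstract mentions.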
This paper discusses virtual object manipulation using a human hand in an AR environment. For this purpose, we need to measure the configuration of the human hand, which includes its position, pose and posture. However, if we employ a data glove for the measurement, the glove appears in the video images synthesized by AR. If we instead employ a vision-based approach, the obtained configuration often includes larger errors due to occlusions among the fingers. Moreover, the measured configuration of the human hand often includes cognitive errors in manipulating the virtual object, which is visible only in the synthesized videos. To cope with these problems, we propose to replace the human hand wearing a data glove with a virtual hand in the video, and to adjust the virtual hand's configuration so that it properly grasps the virtual objects without errors.
For robots to manipulate the proper objects, they first need the visual ability to precisely recognize and identify objects. One of the most basic problems in robot vision is that environments can change under various weather conditions (various illuminations). Furthermore, each object category consists of many objects with various poses. To obtain the best performance in terms of accuracy and efficiency, we compared three feature extraction approaches that have been widely used to solve this problem: principal component analysis (PCA), linear discriminant analysis (LDA), and contour matching with a log-polar histogram (LPH). We also introduce an improved algorithm called adaptable k-nearest neighbor (AK-NN), which allows the object recognition system to use an automatically adapted k value to improve classification accuracy. To evaluate the object recognition system, we generated virtual objects under various conditions for realistic testing.
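The abstract does not spell out AK-NN's adaptation rule, so the sketch below uses one plausible scheme: grow k until the leading label's vote margin is large enough. The training data, margin rule and distance metric are all illustrative assumptions, not the paper's method.

```python
from collections import Counter

def ak_nn(train, query, k_min=1, k_max=9, margin=2):
    """Adaptive k-NN sketch: grow k until the top label leads by
    `margin` votes (assumed rule; the paper's is not given here)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(train, key=lambda s: dist(s[0], query))
    best = None
    for k in range(k_min, min(k_max, len(ranked)) + 1):
        votes = Counter(label for _, label in ranked[:k]).most_common(2)
        best = votes[0][0]
        lead = votes[0][1] - (votes[1][1] if len(votes) > 1 else 0)
        if lead >= margin:
            return best, k
    return best, k

# Toy feature vectors for two object categories.
train = [((0, 0), "cup"), ((0.2, 0.1), "cup"), ((1, 1), "can"),
         ((0.1, 0.2), "cup"), ((1.1, 0.9), "can")]
print(ak_nn(train, (0.05, 0.05)))  # ('cup', 2)
```

In practice the feature vectors would come from PCA, LDA or LPH descriptors as compared in the paper; the adaptive stopping keeps k small when the neighborhood is already unambiguous.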
In this paper, we propose a duplication-based, distance-free, freehand virtual object manipulation system for AR. Freehand interaction with virtual objects is attracting more attention as research on AR environments extends its technological basis and AR applications grow rapidly. However, a freehand virtual object manipulation system with convincing performance has not yet been established. To achieve both accurate selection and intuitive manipulation, we implement a simple idea: the user can always manipulate an object by simply grabbing it, whether or not the object is within reach. Our system uses ray-casting based selection of a remote object, followed by duplication of the selected object in front of the user. The duplicate can then be manipulated directly with the user's own hand. Our method supports delicate manipulation of remote objects regardless of their distance from the user, as well as consistent and flexible operation of both the selection and manipulation modes.
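The two stages, ray-casting selection and placing a duplicate within reach, reduce to simple geometry. The sketch below assumes spherical object bounds and a -z "forward" axis; both are conventions chosen for the example, not details from the paper.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Ray-sphere selection test: solve |o + t*d - c|^2 = r^2 for t >= 0."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    fx, fy, fz = ox - center[0], oy - center[1], oz - center[2]
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (fx * dx + fy * dy + fz * dz)
    c = fx * fx + fy * fy + fz * fz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the object
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

def place_duplicate(user_pos, reach=0.4, forward=(0, 0, -1)):
    """Place the proxy copy of the selected remote object just in front
    of the user (forward axis and reach distance are assumptions)."""
    return tuple(u + reach * f for u, f in zip(user_pos, forward))

# Select a sphere 5 units down the view ray, then duplicate it in reach.
t = ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0)
proxy = place_duplicate((0, 0, 0))
print(t, proxy)  # 4.0 (0.0, 0.0, -0.4)
```

Manipulating the near proxy and mirroring its transform back onto the remote original is what makes the interaction distance-free.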
Rendering virtual objects into real scenes under real illumination can greatly increase the realism of the virtual objects and the consistency between the virtual and the real. The main challenge lies in estimating the illumination from a single image. This article proposes a novel method of single-image illumination estimation for lighting a virtual object in a real scene. Only a single image is needed, without any knowledge of the 3D geometry or reflectance, which greatly increases the applicability of the method. We first estimate coarse scene geometry and intrinsic components, including a shading image and a reflectance image. The sparse radiance map of the scene is then inferred from the scene geometry and intrinsic components. Finally, the virtual objects are illuminated by the estimated sparse radiance map. Experimental results show that the method can convincingly light virtual objects in a single real image, without any pre-recorded 3D geometry, reflectance, illumination acquisition equipment or imaging information.
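Once a sparse radiance map is available, lighting a diffuse virtual object is straightforward. The sketch below shades one surface point under a Lambertian model, treating the sparse radiance map as a small set of directional lights; the light values are invented, not estimated from an image.

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def shade_lambertian(normal, albedo, radiance_map):
    """Shade a diffuse surface point with a sparse set of directional
    lights (direction, intensity) approximating the scene radiance."""
    n = normalize(normal)
    total = 0.0
    for direction, intensity in radiance_map:
        d = normalize(direction)
        # Lambert's cosine law; back-facing lights contribute nothing.
        total += intensity * max(0.0, sum(a * b for a, b in zip(n, d)))
    return albedo * total

# Two hypothetical lights standing in for an estimated radiance map.
lights = [((0, 1, 0), 1.0), ((1, 0, 0), 0.5)]
print(shade_lambertian((0, 1, 0), 0.8, lights))  # 0.8
```

The method's contribution is estimating those few (direction, intensity) pairs from one photograph; the shading step itself is standard.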
A simulation procedure founded on the SGI Onyx4 visualization system and the SGI OpenGL Performer™ visualization development toolkit is discussed. A scene graph, its node-configuration motion, and a man-machine interaction simulation were designed; based on OpenGL Performer, a transforming algorithm for scene-graph object motion simulation was studied; and the interaction rules between scene objects, as well as the event triggering and feedback method of the simulation procedure, were designed. Finally, taking a virtual-human interaction module, a numerical lathing-milling machining center and a mud-water balanced tunnel boring machine as examples, three operation instances of motion simulation and interaction were programmed and realized.
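The core of any scene-graph transform algorithm like the one described is composing each node's local transform with its ancestors'. A language-neutral sketch of that propagation (not Performer's actual API, which works in C/C++):

```python
def matmul(a, b):
    # 4x4 matrix product for composing homogeneous transforms.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

class SceneNode:
    """Scene-graph node: world transform = parent's world * local."""
    def __init__(self, name, local, parent=None):
        self.name, self.local, self.children = name, local, []
        self.parent = parent
        if parent:
            parent.children.append(self)

    def world(self):
        if self.parent is None:
            return self.local
        return matmul(self.parent.world(), self.local)

# Moving the machine moves the arm with it; the arm keeps its offset.
root = SceneNode("machine", translation(10, 0, 0))
arm = SceneNode("arm", translation(0, 5, 0), parent=root)
print([row[3] for row in arm.world()[:3]])  # [10, 5, 0]
```

Animating a machining center or boring machine then amounts to updating one node's local matrix per frame and letting the hierarchy propagate the motion.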
This paper proposes a distributed haptic control architecture whose coordination gain at each user site is independent of the number of cooperating peers. In the proposed architecture, users interact by jointly manipulating a shared virtual object (SVO). The distributed copies of the SVO are controlled through virtual couplers. At each peer, the gain of the force feedback loop is kept constant regardless of the number of interacting users by coordinating the local SVO copy to the averaged motion of the other SVO copies. A preliminary investigation contrasts the proposed controller with traditional distributed virtual coupling control. The comparison is performed via MATLAB simulations of an exemplary cooperative manipulation performed by three users. The results illustrate that the proposed controller: (1) can render a lighter SVO with decreased position coherence among the distributed SVO copies for the same stiffness of coordination; (2) achieves similar position coherence among the distributed SVO copies for the same SVO mass.
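The peer-count independence can be seen in one line of arithmetic: the coupler pulls toward the *average* of the other copies, so the effective spring gain does not grow with the number of peers. A minimal 1-D sketch (the gain value and spring-only coupler are simplifying assumptions; the paper's couplers also include damping):

```python
def coupler_force(local_pos, peer_positions, k=200.0):
    """Spring-like virtual coupler toward the average of the other
    peers' SVO copies; the gain k is independent of peer count."""
    avg = sum(peer_positions) / len(peer_positions)
    return k * (avg - local_pos)

# Same average offset with 2 or 4 peers -> same force, same loop gain.
f3 = coupler_force(0.00, [0.02, 0.04])
f5 = coupler_force(0.00, [0.02, 0.04, 0.03, 0.03])
print(round(f3, 3), round(f5, 3))  # 6.0 6.0
```

Summing per-peer coupler forces instead of averaging would multiply the loop gain by the number of peers, which is exactly the scaling problem the architecture avoids.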
In order to treat a virtual object in the same way as a real object, the authors propose a new technique for predicting its visually perceived location. In mixed/augmented reality, real and virtual objects coexist. Because a real object is handled directly with the observer's body, the observer is confused if the virtual object cannot be handled the same way. Although the best approach to direct handling of the virtual object would be for the mixed/augmented reality system to detect the visually perceived location itself, this is impossible. In the authors' proposal, the visually perceived location is instead predicted from the observer's action. The proposal is evaluated by predicting the visually perceived depth from the action of the observer reaching out for the virtual object. The results suggest that the visually perceived depth can be predicted by fitting the depth of the hand, as a function of action time, to a logistic function.
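Fitting hand depth over time to a logistic d(t) = D / (1 + exp(-a(t - t0))) can be done with a simple logit linearization when the asymptote D (the final reach depth) is treated as known; that assumption, and the synthetic data, are choices made for this sketch, not the authors' exact procedure.

```python
import math

def fit_logistic(times, depths, D):
    """Fit d(t) = D / (1 + exp(-a (t - t0))) by least squares on the
    logit transform ln(d / (D - d)) = a (t - t0); D is assumed known."""
    xs, ys = [], []
    for t, d in zip(times, depths):
        if 0 < d < D:  # logit is defined only on the open interval
            xs.append(t)
            ys.append(math.log(d / (D - d)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    t0 = mx - my / a
    return a, t0

# Synthetic reaching motion generated from a known logistic.
D, a_true, t0_true = 0.6, 4.0, 0.5
ts = [i * 0.1 for i in range(1, 10)]
ds = [D / (1 + math.exp(-a_true * (t - t0_true))) for t in ts]
a_est, t0_est = fit_logistic(ts, ds, D)
print(round(a_est, 3), round(t0_est, 3))  # 4.0 0.5
```

With the fit in hand, the perceived depth is read off as the curve's value late in the reach, which is what lets the system predict where the observer "sees" the virtual object.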
The scope of this trial-use standard is the definition of an exchange format, utilizing XML, for exchanging the static description of an Instrument. Instances of InstrumentDescription will be utilized in conjunction with instances of other InstrumentDescription in support of the execution of test programs in an automatic test environment.
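Since the standard defines an XML exchange format, a consumer would parse instances with any standard XML tooling. The element and attribute names below are hypothetical placeholders, not the schema the standard actually defines.

```python
import xml.etree.ElementTree as ET

# Hypothetical instance document; the real standard defines its own schema.
doc = """\
<InstrumentDescription name="DMM-01">
  <Interface type="IEEE488"/>
  <Capability measure="voltage" range="0-300V"/>
</InstrumentDescription>"""

desc = ET.fromstring(doc)
print(desc.get("name"), desc.find("Capability").get("measure"))
# DMM-01 voltage
```

A test executive would load many such descriptions and match test-program resource requirements against the declared capabilities.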