1,235 resources related to Mixed Reality
- Topics related to Mixed Reality
- IEEE Organizations related to Mixed Reality
- Conferences related to Mixed Reality
- Periodicals related to Mixed Reality
- Most published Xplore authors for Mixed Reality
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted full papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings and will be indexed in PubMed/MEDLINE.
Multimedia technologies, systems and applications for both research and development of communications, circuits and systems, computer, and signal processing communities.
The 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020) will be held at the Metro Toronto Convention Centre (MTCC), Toronto, Ontario, Canada. SMC 2020 is the flagship conference of the IEEE Systems, Man, and Cybernetics Society. It provides an international forum for researchers and practitioners to report the most recent innovations and developments, summarize the state of the art, and exchange ideas and advances in all aspects of systems science and engineering, human-machine systems, and cybernetics. Advances in these fields are increasingly important to the creation of intelligent environments in which technologies interact with humans to provide an enriching experience and thereby improve quality of life. Papers related to the conference theme are solicited, including theories, methodologies, and emerging applications. Contributions to theory and practice, including but not limited to the following technical areas, are invited.
All fields of satellite, airborne and ground remote sensing.
Industrial information technologies
Broad coverage of concepts and methods of the physical and engineering sciences applied in biology and medicine, ranging from formalized mathematical theory through experimental science and technological development to practical clinical applications.
Video A/D and D/A, display technology, image analysis and processing, video signal characterization and representation, video compression techniques and signal processing, multidimensional filters and transforms, analog video signal processing, neural networks for video applications, nonlinear video signal processing, video storage and retrieval, computer vision, packet video, high-speed real-time circuits, VLSI architecture and implementation for video technology, multiprocessor systems--hardware and software-- ...
IEEE Computer Graphics and Applications (CG&A) bridges the theory and practice of computer graphics. From specific algorithms to full system implementations, CG&A offers a strong combination of peer-reviewed feature articles and refereed departments, including news and product announcements. Special Applications sidebars relate research stories to commercial development. Cover stories focus on creative applications of the technology by an artist or ...
The theoretical, experimental and operational aspects of electrical and electronics engineering and information technologies as applied to Intelligent Transportation Systems (ITS). Intelligent Transportation Systems are defined as those systems utilizing synergistic technologies and systems engineering concepts to develop and improve transportation systems of all kinds. The scope of this interdisciplinary activity includes the promotion, consolidation and coordination of ITS technical ...
Imaging methods applied to living organisms with emphasis on innovative approaches that use emerging technologies supported by rigorous physical and mathematical analysis and quantitative evaluation of performance.
2016 IEEE International Workshop on Mixed Reality Art (MRA), 2016
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018
2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2018
2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), 2018
2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), 2018
Part 1: Mixed Reality Panel - TTM 2018
Part 2: Mixed Reality Panel - TTM 2018
Mixed Reality - The Future of Our World
IEEE @ SXSW 2015 - Mixed Reality Habitats: The New Wired Frontier
Q&A with Raj Tiwari: IEEE Digital Reality Podcast, Episode 3
Sean White: Distinguished Experts Panel - TTM 2018
New Immersive Mediums - The Search For Egg Yolk: IEEE VICS 2018
Q&A: Mixed Reality Panel - TTM 2018
IEEE @ SXSW 2015 - Future of Identity Series Overview
IEEE Entrepreneurship @ SXSW 2017: LOOX VR
IMS 2011 Microapps - Mixed-Signal Active Load Pull - The Fast Track to 3G/4G Amplifier Design
Neuromorphic Mixed-Signal Circuitry for Asynchronous Pulse Processing - Peter Petre: 2016 International Conference on Rebooting Computing
An Energy-Efficient Mixed-Signal Neuron for Inherently Error Resilient Neuromorphic Systems - IEEE Rebooting Computing 2017
Augmented Reality in Operating Rooms
Q&A with Jeewika Ranaweera: IEEE Digital Reality Podcast, Episode 8
Q&A with Connor Russomanno: IEEE Digital Reality Podcast, Episode 1
Pt. 2: Limits & Prospects of Mixed-Signal Electronics - Tomislav Drenski - Industry Panel 2, IEEE Globecom, 2019
Q&A with Kathy Grise: IEEE Digital Reality Podcast, Episode 4
Augmented Reality at the Natural History Museum, London
Node Kara is an audiovisual mixed reality installation by the artist Anil Camci. It offers body-based interaction using 3D imaging of the exhibition space. With a blurred audiovisual scene in its steady state, it invites visitors to interact with the multiplicity of layers it holds within. By moving through the exhibition space, visitors can deblur both the sound and the visuals as they get immersed in the piece. Rather than detecting discrete events or gestures, Node Kara relies on fluid interactions with continuous visual layers and stochastic audio. This way, the piece reacts to each visitor in a unique way. A video preview of Node Kara can be seen at https://vimeo.com/anilcamci/nodekara (password: nodekara).
Hand-based interaction is one of the most widely used interaction modes in applications based on optical see-through head-mounted displays (OST-HMDs). In this paper, two interaction modes, gesture-based interaction (GBI) and physics-based interaction (PBI), are developed within a mixed reality system to evaluate the advantages and disadvantages of each mode for near-field mixed reality. The experimental results show that PBI leads to better user performance in terms of work efficiency on the proposed tasks. A t-test confirms that the difference in efficiency between the interaction modes is statistically significant.
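The kind of between-mode comparison this abstract describes can be illustrated with a minimal two-sample (Welch's) t-test sketch. The task-completion times below are hypothetical stand-ins, not the paper's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)
    se2 = va / na + vb / nb                      # squared standard error
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical task-completion times in seconds for each interaction mode.
gbi_times = [12.1, 11.4, 13.0, 12.7, 11.9, 12.5, 13.2, 12.0]
pbi_times = [10.2, 9.8, 10.9, 10.4, 9.5, 10.1, 10.7, 9.9]

t, df = welch_t(gbi_times, pbi_times)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large |t| indicates a real difference
```

With data like this, the t statistic would then be compared against the t distribution at the chosen significance level to decide whether the efficiency gap is significant.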
Calibration accuracy is one of the most important factors affecting the user experience in mixed reality applications. For a typical mixed reality system built with an optical see-through head-mounted display (OST-HMD), a key problem is how to guarantee the accuracy of hand-eye coordination by decreasing the instability of the eye and the HMD in long-term use. In this paper, we propose a real-time latent active correction (LAC) algorithm to decrease hand-eye calibration errors accumulated over time. Experimental results show that the LAC algorithm can be successfully applied to physics-inspired virtual input methods.
A mixed reality platform mixes the real and virtual worlds so that objects from both co-exist and can interact with each other. The goal is to design a Linux-based mixed reality platform capable of placing X Windows and three-dimensional virtual objects in the virtual/augmented environment. This means that, unlike some of the mixed reality platforms in the current generation of mixed reality devices, any application program that draws its GUI on the X Window System can be viewed and interacted with in mixed reality. This can be achieved without any changes to the source code by the third-party developer. The platform will also allow special "3D applications" (applications with an actual three-dimensional representation) to be built and run using a new application programming interface. The platform will mainly consist of a window manager and a mixed reality compositing manager, alongside other programs, forming a desktop environment that universally enables Linux GUI applications to work in mixed reality. A major contribution of this concept is extending a user's computing environment so that the user can interact with applications beyond the bounds of a physical screen. The adaptive Linux-based platform is designed to run on a variety of hardware form factors such as phones, personal computers and mixed reality headsets.
Most existing AR applications utilize the built-in camera on a mobile device for pose estimation to display 3D objects in a virtual space. In this paper, 3D information about the environment is scanned by a depth camera with an object-positioning capability for pose estimation, providing reliable alignment of displayed 3D objects between the virtual space and the real space. A prototype mixed reality (MR) basketball game called "O-Shooting" is proposed to demonstrate the display effect in a mixed reality scenario on a mobile device.
This research implements mixed reality to develop an innovative virtual room arrangement application for planning, designing, and arranging furniture at home. Mixed reality is a technology that combines Virtual Reality (VR) and Augmented Reality (AR): VR is applied in the real environment, while AR is delivered through an additional device. This research explores mixed reality technology in a virtual room arrangement application using Google Cardboard. The results show that this research can be a new innovation in human-computer interaction using mixed reality technology. The fingertip is used as an input pointer to interact with the virtual environment, combining the VR and AR technologies. The methodology used to implement this fingertip input in the mixed reality environment is hand segmentation. The application is developed for the Android platform, with Unity 3D as the game engine.
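The abstract does not detail its hand-segmentation method; one deliberately minimal, hypothetical sketch of the general idea (segment the hand by thresholding, then take the topmost foreground pixel as the fingertip, since a pointing finger usually extends upward in the frame) might look like:

```python
def find_fingertip(frame, threshold=128):
    """Return (row, col) of the topmost pixel brighter than `threshold`
    in a 2D grayscale frame (list of lists), or None if nothing is found.
    This stands in for a real skin-color/hand segmentation step."""
    for r, row in enumerate(frame):
        for c, value in enumerate(row):
            if value > threshold:
                return (r, c)
    return None

# Tiny synthetic "segmented hand" frame: bright pixels mark the hand region.
frame = [
    [0,   0,   0,   0],
    [0,   0, 200,   0],   # topmost bright pixel -> treated as the fingertip
    [0, 210, 220,   0],
    [0, 200, 200,   0],
]
tip = find_fingertip(frame)
print(tip)  # (1, 2)
```

A production system would segment in a color space suited to skin detection and track the tip across frames, but the reduction from "hand region" to "fingertip point" follows this shape.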
This paper presents a novel approach for the design of creative location-based mixed reality applications. We introduce a framework called DigitalQuest that simplifies adding geolocated virtual content on top of real-world camera input. Unlike previous work, which relies solely on markers or image pattern recognition, we define a "mirror world" that facilitates interactive mixed reality. DigitalQuest consists of an editor that allows users to easily add their own content as desired and a mobile application that loads content from a server based on the location of the device. Each piece of virtual content can be organized through the editor so that it appears only in certain circumstances, allowing a designer to determine when and where a virtual object is attached to a real-world location. We have used our editor to create a series of futuristic scavenger hunts in which participating teams must solve puzzles in order to access new virtual content appearing in a mixed reality environment via a mobile phone application. In this paper, we introduce our editor and present an example scavenger hunt game, Morimondo, that was built using it. Specifically, we describe our technique of utilizing the camera and motion sensors on the mobile phone to enable an appropriate level of user engagement within this game. We are able to obtain realistic augmentations with accurate positioning by leveraging sensor fusion and filters that compensate for sensor noise, using image processing only for error correction or in special situations. The initial success of this project leads us to believe that DigitalQuest could be used to design a wide range of creative multi-user mixed reality applications.
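The abstract mentions sensor fusion with noise-compensating filters but does not specify the filter used; a common simple choice for fusing phone motion sensors is a complementary filter, sketched here with entirely synthetic data:

```python
def complementary_filter(gyro_rates, accel_angles, dt=0.02, alpha=0.98):
    """Fuse gyroscope rates (fast but drifting) with accelerometer-derived
    angles (noisy but drift-free) into one orientation estimate.
    gyro_rates: angular rates in deg/s; accel_angles: angles in deg."""
    angle = accel_angles[0]
    fused = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro in the short term and the
        # accelerometer in the long term.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        fused.append(angle)
    return fused

# Synthetic scenario: device held at 10 deg, gyro has a constant bias.
gyro = [0.5] * 200     # deg/s, pure bias (hypothetical)
accel = [10.0] * 200   # deg, accelerometer reading (hypothetical)
result = complementary_filter(gyro, accel)
print(f"final fused angle: {result[-1]:.2f} deg")  # stays close to 10 deg
```

Despite the gyro bias, the accelerometer term keeps the fused angle bounded near the true orientation, which is the property that makes such filters useful for stable augmentations.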
We propose a compositional modeling framework for mixed reality (MR) software architectures in order to express, simulate and formally validate the time-dependent properties of such systems. Our approach is first based on a functional decomposition of such systems into generic components. The resulting elements, as well as their typical interactions, give rise to generic representations in terms of timed automata. A whole application is then obtained as a composition of these components. To ease writing specifications, we propose a textual language (named MIRELA: mixed reality language) along with the corresponding compilation tools. The generated output contains timed automata in UPPAAL format for simulation and verification of time constraints; it may also be used to generate source-code skeletons for an implementation on an MR platform.
The traditional layout of a psychotherapy room is typically drawn from the earlier part of the last century. With the growing availability of immersive multimedia technologies, and the need for applied forms of learning, the design of a mixed reality technology-assisted practice is considered, and ergonomic and workflow factors are explored. Three suggested layout templates are contrasted, and it is proposed that an optimized mixed reality therapy clinic should retain the familiarity of the classic model while at the same time be configured in a way that is amenable to the many benefits of immersive technology.
The Galileo Project is an interdisciplinary, experimental performance piece derived via crowd-sourced collaboration. The resulting work is representative of mixed reality environments that utilize emerging digital technologies, 3D spatial audio manipulation, interactive computer programming of sound and lighting, and an organic actor. Through these interdisciplinary efforts, the project explores connections between audience, artist, reality and synthesis.
No standards are currently tagged "Mixed Reality"