10,344 resources related to Motion analysis
ICC 2021 - IEEE International Conference on Communications
IEEE ICC is one of the two flagship IEEE conferences in the field of communications; the 2021 edition will be hosted in Montreal. Each annual IEEE ICC typically attracts 1,500-2,000 attendees and presents over 1,000 research papers over its duration. As well as being an opportunity to share pioneering research ideas and developments, the conference is an excellent networking and publicity event, giving businesses and clients the opportunity to connect and offering companies the scope to publicize themselves and their products among leaders of the communications industry from all over the world.
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE.
The CDC is the premier conference dedicated to the advancement of the theory and practice of systems and control. The CDC annually brings together an international community of researchers and practitioners in the field of automatic control to discuss new research results, perspectives on future developments, and innovative applications relevant to decision making, automatic control, and related areas.
AMC2020 is the 16th in a series of biennial international workshops on Advanced Motion Control which aims to bring together researchers from both academia and industry and to promote omnipresent motion control technologies and applications.
2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI 2020)
The IEEE International Symposium on Biomedical Imaging (ISBI) is the premier forum for the presentation of technological advances in theoretical and applied biomedical imaging. ISBI 2020 will be the 17th meeting in this series. The previous meetings have played a leading role in facilitating interaction between researchers in medical and biological imaging. The 2020 meeting will continue this tradition of fostering cross-fertilization among different imaging communities and contributing to an integrative approach to biomedical imaging across all scales of observation.
Contains articles on the applications and other relevant technology. Electronic applications include analog and digital circuits employing thin films and active devices such as Josephson junctions. Power applications include magnet design as well as motors, generators, and power transmission.
The theory, design and application of Control Systems. It shall encompass components, and the integration of these components, as are necessary for the construction of such systems. The word 'systems' as used herein shall be interpreted to include physical, biological, organizational and other entities and combinations thereof, which can be represented through a mathematical symbolism. The Field of Interest shall ...
The IEEE Reviews in Biomedical Engineering will review the state-of-the-art and trends in the emerging field of biomedical engineering. This includes scholarly works, ranging from historic and modern development in biomedical engineering to the life sciences and medicine enabled by technologies covered by the various IEEE societies.
Broad coverage of concepts and methods of the physical and engineering sciences applied in biology and medicine, ranging from formalized mathematical theory through experimental science and technological development to practical clinical applications.
Broadcast technology, including devices, equipment, techniques, and systems, covering the production, distribution, transmission, and propagation aspects.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
A method for computing the 3D camera motion (the ego-motion) in a static scene is described, where initially a detected 2D motion between two frames is used to align corresponding image regions. We prove that such a 2D registration removes all effects of camera rotation, even for those image regions that remain misaligned. The resulting residual parallax displacement field between the two region-aligned images is an epipolar field centered at the FOE (Focus-of-Expansion). The 3D camera translation is recovered from the epipolar field. The 3D camera rotation is recovered from the computed 3D translation and the detected 2D motion. The decomposition of image motion into a 2D parametric motion and residual epipolar parallax displacements avoids many of the inherent ambiguities and instabilities associated with decomposing the image motion into its rotational and translational components, and hence makes the computation of ego-motion or 3D structure estimation more robust.
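The geometric fact underlying this approach — that the residual parallax field radiates from the FOE — admits a simple linear estimate: each residual displacement d at pixel p should satisfy (p − FOE) × d = 0. A minimal NumPy sketch of that estimation step (an illustration only, not the authors' implementation; all names here are hypothetical):

```python
import numpy as np

def estimate_foe(points, parallax):
    """Least-squares estimate of the Focus of Expansion from a residual
    parallax field: each displacement should lie on the line joining its
    pixel to the FOE, so the 2D cross product (p - foe) x d is zero."""
    p = np.asarray(points, dtype=float)
    d = np.asarray(parallax, dtype=float)
    A = np.stack([d[:, 1], -d[:, 0]], axis=1)   # rows [dy, -dx]
    b = p[:, 0] * d[:, 1] - p[:, 1] * d[:, 0]   # p x d
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# synthetic field radiating from a known FOE
true_foe = np.array([40.0, 25.0])
pts = np.random.default_rng(0).uniform(0, 100, size=(50, 2))
disp = 0.05 * (pts - true_foe)                  # pure expansion
print(estimate_foe(pts, disp))                  # close to [40, 25]
```

With a noise-free radial field the least-squares solution recovers the FOE essentially exactly; real parallax fields would need robust weighting of the constraints.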
[1988 Proceedings] Second International Conference on Computer Vision, 1988
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
We present new deterministic methods that, given two eigenspace models-each representing a set of n-dimensional observations-will: 1) merge the models to yield a representation of the union of the sets and 2) split one model from another to represent the difference between the sets. As this is done, we accurately keep track of the mean. Here, we give a theoretical derivation of the methods, empirical results relating to the efficiency and accuracy of the techniques, and three general applications, including the construction of Gaussian mixture models that are dynamically updateable.
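The mean-tracking step of such a merge can be illustrated with plain sample statistics: combining two models' counts, means, and scatter matrices exactly, including the cross term induced by the shift between the two means. A hedged sketch (the merged eigenspace would then come from an eigendecomposition of S; this is not the authors' full method):

```python
import numpy as np

def merge_models(n1, m1, S1, n2, m2, S2):
    """Merge two sample models (count, mean, scatter matrix) into one,
    tracking the mean exactly.  S is the sum of outer products of
    deviations from the mean; the cross term accounts for the shift
    between the two means."""
    n = n1 + n2
    m = (n1 * m1 + n2 * m2) / n
    d = (m1 - m2).reshape(-1, 1)
    S = S1 + S2 + (n1 * n2 / n) * (d @ d.T)
    return n, m, S

def model(Z):
    """Count, mean, and scatter matrix of a data set."""
    m = Z.mean(axis=0)
    return len(Z), m, (Z - m).T @ (Z - m)

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(30, 3)), rng.normal(size=(20, 3))
n, m, S = merge_models(*model(X), *model(Y))
# (n, m, S) matches the model built directly from the pooled data
```

The merged mean and scatter agree exactly with those of the pooled data, which is the property that lets the eigenspace update stay consistent as sets are combined.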
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
We present a method to determine 3D motion and structure of multiple objects from two perspective views, using adaptive Hough transform. In our method, segmentation is determined based on a 3D rigidity constraint. Instead of searching candidate solutions over the entire five-dimensional translation and rotation parameter space, we only examine the two-dimensional translation space. We divide the input image into overlapping patches, and, for each sample of the translation space, we compute the rotation parameters of patches using least-squares fit. Every patch votes for a sample in the five-dimensional parameter space. For a patch containing multiple motions, we use a redescending M-estimator to compute rotation parameters of a dominant motion within the patch. To reduce computational and storage burdens of standard multidimensional Hough transform, we use adaptive Hough transform to iteratively refine the relevant parameter space in a "coarse-to-fine" fashion. Our method can robustly recover 3D motion parameters, reject outliers of the flow estimates, and deal with multiple moving objects present in the scene. Applications of the proposed method to both synthetic and real image sequences are demonstrated with promising results.
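A redescending M-estimator of the kind mentioned — here Tukey's biweight, solved by iteratively reweighted least squares on a toy linear fit rather than the paper's rotation parameters — can be sketched as follows (an assumption-laden illustration, not the authors' code):

```python
import numpy as np

def tukey_irls(A, b, c=4.685, iters=20):
    """Iteratively reweighted least squares with Tukey's biweight, a
    redescending M-estimator: residuals beyond c * scale get exactly
    zero weight, so gross outliers stop influencing the fit."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for _ in range(iters):
        r = b - A @ x
        s = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = r / (c * s)
        w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return x

rng = np.random.default_rng(2)
t = rng.uniform(0, 10, 100)
A = np.stack([t, np.ones_like(t)], axis=1)
b = 2.0 * t + 1.0 + 0.01 * rng.normal(size=100)
b[:15] += 30.0                                      # 15% gross outliers
print(tukey_irls(A, b))                             # close to [2.0, 1.0]
```

Unlike a Huber-type estimator, whose influence merely levels off, the biweight's influence redescends to zero — which is what lets a dominant motion be fitted inside a patch while points from other motions are ignored.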
Proceedings. Fourteenth International Conference on Pattern Recognition (Cat. No.98EX170), 1998
A new algorithm is presented for feature point based motion tracking in long image sequences. Dynamic scenes with multiple, independently moving objects are considered in which feature points may temporarily disappear, enter, and leave the view field. The existing approaches to feature point tracking have limited capabilities in handling incomplete trajectories, especially when the number of points and their speeds are large, and trajectory ambiguities are frequent. The proposed algorithm was designed to efficiently resolve these ambiguities.
Capture, Recognition, and Imitation of Anthropomorphic Motion
IEEE Life Sciences - Paolo Bonato Interview
Robot Motion Optimization
Optimization for Robust Motion Planning and Control
2nd Workshop on Long-Term Human Motion Prediction - ICRA 2020
Tracked Vehicle with Circular Cross-Section to Realize Sideways Motion
Workshop on Human-Swarm Interaction-ICRA 2020
Mapping Human to Robot Motion with Functional Anthropomorphism for Teleoperation and Telemanipulation with Robot Arm Hand Systems
History of Robotics and Automation: Anthropomorphic Motion with Jean Paul Laumond
IMS 2011 Microapps - Yield Analysis During EM Simulation
Large Motion Range Magnet Levitation Using a Planar Array of Coils
IMS 2011 Microapps - A Practical Approach to Verifying RFICs with Fast Mismatch Analysis
IMS MicroApps: Multi-Rate Harmonic Balance Analysis
New Approach of Vehicle Electrification: Analysis of Performance and Implementation Issue
A Flexible Testbed for 5G Waveform Generation and Analysis: MicroApps 2015 - Keysight Technologies
Quick Slip-Turn of HRP-4C on Its Toes
IMS 2012 Microapps - Improve Microwave Circuit Design Flow Through Passive Model Yield and Sensitivity Analysis
Robotics History: Narratives and Networks Oral Histories: Jean-Paul Laumond
Robotics History: Narratives and Networks Oral Histories: John Craig
The apparent pixel motion in an image sequence, called optical flow, is a useful primitive for automatic scene analysis and various other applications of computer vision. In general, however, the optical flow estimation suffers from two significant problems: the problem of illumination that varies with time and the problem of motion discontinuities induced by objects moving with respect to either other objects or with respect to the background. Various integrated approaches for solving these two problems simultaneously have been proposed. Of these, those that are based on the LMedS (least median of squares) appear to be the most robust. The goal of this paper is to carry out an error analysis of two different LMedS-based approaches, one based on the standard LMedS regression and the other using a modification thereof as proposed by us recently. While it is to be expected that the estimation accuracy of any approach would decrease with increasing levels of noise, for LMedS-like methods, it is not always clear as to how much of that decrease in performance can be attributed to the fact that only a small number of randomly selected samples is used for forming temporary solutions. To answer this question, our study here includes a baseline implementation in which all of the image data is used for forming motion estimates. We then compare the estimation errors of the two LMedS-based methods with the baseline implementation. Our error analysis demonstrates that, for the case of Gaussian noise, our modified LMedS approach yields better estimates at moderate levels of noise, but is outperformed by the standard LMedS method as the level of noise increases. For the case of salt-and-pepper noise, the modified LMedS method consistently performs better than the standard LMedS method.
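Standard LMedS regression — the baseline the paper analyzes — can be sketched on a simple line fit: draw minimal two-point samples at random and keep the candidate whose squared residuals have the smallest median (an illustration only, not the paper's optical-flow formulation):

```python
import numpy as np

def lmeds_line(x, y, trials=500, seed=0):
    """Least-median-of-squares line fit: repeatedly fit a minimal
    2-point sample and keep the candidate minimizing the median of
    the squared residuals -- robust to up to ~50% outliers."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    for _ in range(trials):
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue                        # degenerate vertical sample
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 200)
y = 3.0 * x - 2.0 + 0.05 * rng.normal(size=200)
y[:80] = rng.uniform(-20, 20, 80)           # 40% gross outliers
a, b = lmeds_line(x, y)
print(a, b)                                 # close to 3, -2
```

The random minimal sampling visible here is exactly the source of estimation error the paper quantifies against a baseline that uses all of the data.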
A technique is proposed for estimating the parameters of two-dimensional (2-D) uniform motion of multiple moving objects in a scene, based on long-sequence image processing and the application of a multiline fitting algorithm. Plots of the vertical and horizontal projections versus frame number give new images in which uniformly moving objects are represented by skewed band regions, with the angles of the skew from the vertical being a measure of the velocities of the moving objects. For example, vertical bands will correspond to objects with zero velocity. An algorithm called subspace-based line detection (SLIDE) can be used to efficiently determine the skew angles. SLIDE exploits the temporal coherence between the contributions of each of the moving patterns in the frame projections to enhance and distinguish a signal subspace that is defined by the desired motion parameters. A similar procedure can be used to determine the vertical velocities. Some further steps must then be taken to properly associate the horizontal and vertical velocities.
This paper describes a novel cooperative procedure for the segmentation of multiview image sequences exploiting multiple sources of information. Compared to other approaches, no a priori information is needed about the structure and the arrangement of objects in the scene. Three cameras in a particular unsymmetrical set-up are used in the system. The color distribution and the object contours in the constituent 2-D images, the disparity information in stereo image pairs, as well as motion information in subsequent images, are analyzed and evaluated in a cooperative procedure to get reliable segmentation results. The scene is decomposed into a variable number of depth layers, with each layer showing a subset of the segmented regions. The layered representation can be used in a variety of applications. In this paper, the application aims at synthesizing 3-D images for enhanced telepresence allowing the user to "look around" in natural scenes (intermediate views for interactive displays). In another application, 3-D images showing a natural depth-of-focus are synthesized in order to improve viewing comfort with 3-D displays.
Optical flow is the velocity distribution of each pixel in an image. In this paper, the authors extend the 2D optical flow method to 3D optical flow analysis and also improve its performance around boundaries. Dynamic analysis of the heart and knee is under investigation.
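The 2D building block being extended here — the brightness-constancy equation Ix·u + Iy·v + It = 0 solved in least squares over a patch, the classic Lucas-Kanade step — can be sketched as follows (a generic illustration under a small-displacement assumption, not the authors' 3D method):

```python
import numpy as np

def lucas_kanade(I0, I1):
    """Single least-squares flow estimate from brightness constancy
    Ix*u + Iy*v + It = 0, stacked over every pixel of the patch."""
    Iy, Ix = np.gradient(I0.astype(float))   # np.gradient: rows (y) first
    It = I1.astype(float) - I0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return u, v

# subpixel-shifted Gaussian blob: true flow (u, v) = (0.4, -0.2)
X, Y = np.meshgrid(np.arange(64), np.arange(64))
blob = lambda cx, cy: np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / 50.0)
u, v = lucas_kanade(blob(32, 32), blob(32.4, 31.8))
print(round(u, 2), round(v, 2))              # close to 0.4 -0.2
```

Extending this to 3D means adding a third gradient term Iz·w for volumetric data; the least-squares structure stays the same, which is why the 2D machinery carries over.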
This correspondence describes a novel approach to three-dimensional (3-D) motion estimation of planar objects based on eigen-normalization, expansion matching (EXM), and a scaled orthographic projection model. Our approach leads to a comprehensive temporal description of all the degrees of freedom in 3-D (three rotations and three translations). Experiments with video streams show robust estimation of the real 3-D rotations and translations of the objects in motion.
No standards are currently tagged "Motion analysis"