PET/MR Multimodality Medical Imaging
29 resources related to PET/MR Multimodality Medical Imaging
- Topics related to PET/MR Multimodality Medical Imaging
- IEEE Organizations related to PET/MR Multimodality Medical Imaging
- Conferences related to PET/MR Multimodality Medical Imaging
- Periodicals related to PET/MR Multimodality Medical Imaging
- Most published Xplore authors for PET/MR Multimodality Medical Imaging
The conference program will consist of plenary lectures, symposia, workshops and invited sessions on the latest significant findings and developments in all the major fields of biomedical engineering. Submitted papers will be peer reviewed. Accepted high-quality papers will be presented in oral and poster sessions, will appear in the Conference Proceedings, and will be indexed in PubMed/MEDLINE & IEEE Xplore
This conference is the annual premier meeting on the use of instrumentation in the Nuclear and Medical fields. The meeting has a very long history of providing an exciting venue for scientists to present their latest advances, exchange ideas, renew existing collaborations and form new ones. The NSS portion of the conference is an ideal forum for scientists and engineers in the fields of Nuclear Science, radiation instrumentation, software engineering and data acquisition. The MIC is one of the most informative venues on the state-of-the-art use of physics, engineering, and mathematics in Nuclear Medicine and related imaging modalities, such as CT and, increasingly, MRI, through the development of hybrid devices
Imaging methods applied to living organisms with emphasis on innovative approaches that use emerging technologies supported by rigorous physical and mathematical analysis and quantitative evaluation of performance.
IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control was the number-three journal in acoustics in 2002, according to the annual Journal Citation Report (2002 edition) published by the Institute for Scientific Information. This publication focuses on the theory, design, and application of the generation, transmission, and detection of bulk and surface mechanical waves; fundamental studies in physical acoustics; design of sonic ...
Specific topics include, but are not limited to: a) visualization techniques and methodologies; b) visualization systems and software; c) volume visualization; d) flow visualization; e) information visualization; f) multivariate visualization; g) modeling and surfaces; h) rendering techniques and methodologies; i) graphics systems and software; j) animation and simulation; k) user interfaces; l) virtual reality; m) visual programming and program visualization; ...
2013 IEEE 10th International Symposium on Biomedical Imaging, 2013
2013 IEEE Nuclear Science Symposium and Medical Imaging Conference (2013 NSS/MIC), 2013
2014 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2014
2012 IEEE Nuclear Science Symposium and Medical Imaging Conference Record (NSS/MIC), 2012
Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (Cat. No.00CH37143), 2000
Ignite! Session: Bill Moses
2011 IEEE Medal for Innovations in Healthcare Technology - Harrison H. Barrett
Harrison H. Barrett
Developing Point-of-Care Technologies
From THz imaging to millimeter-wave stimulation of neurons: Is there a killer application for high frequency RF in the medical community? (RFIC 2015 Keynote)
Ultrafast Lasers for Multi-photon Microscopy - Plenary Speaker: Jim Kafka - IPC 2018
How Facial Analysis Technology Can Help Children with Genetic Disorders - IEEE Region 4 Technical Presentation
Biomedical Engineering at the Mayo Clinic
Brooklyn 5G Summit: RFIC Technology for Massive MIMO and Beamforming Panel
IEEE Corporate Innovation Award - Pixar Animation Studios - 2018 IEEE Honors Ceremony
Dr. Scott Fish
Oral History: Earl Bakken
Brooklyn 5G Summit: Going the Distance with CMOs: mm-Waves and Beyond
Brooklyn 5G Summit: Spectrum for 5G Panel
Larson Collection interview with Rudolph Peierls
Brooklyn 5G 2016: CTO Panel: New Wireless Business
Surgical Robotics: Medical robotics and computer-integrated interventional medicine
Panel 3: Mobile Broadband in mmW bands - FCC Perspective - Brooklyn 5G - 2015
ICASSP 2010 - Radar Imaging of Building Interiors
The quantitative reliability of pulmonary PET scans is compromised by respiratory motion. The emergence of simultaneous, whole-body PET/MR imaging enables us to correct for motion artifacts in PET using motion information derived from anatomical MR images. We present here a framework for respiratory motion-compensated PET image reconstruction using simultaneous PET/MR. We have developed a radial FLASH pulse sequence for generating gated volumetric MR images at a reasonable speed without significantly sacrificing image quality. A navigator encapsulated within the pulse sequence enables us to retrospectively compute time bins corresponding to each gate. The deformation fields for each gate with respect to a reference gate are computed from the gated MR images by means of non-rigid registration. The gated MR images are also used to generate individual attenuation maps for each gate. Finally, motion-compensated PET reconstruction is performed using a maximum a posteriori (MAP) approach. The complete framework was applied to a clinical study conducted on the Biograph mMR scanner (Siemens Medical Solutions), which allows simultaneous acquisition of whole-body PET/MR data. This study demonstrates the utility of our framework in generating meaningful estimates of deformation fields and correcting for motion artifacts in PET.
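The gate-then-warp idea behind this kind of motion compensation can be illustrated with a toy NumPy sketch. The paper's deformation fields are non-rigid and estimated by registration; here, purely for illustration, each gate's motion is reduced to a known integer translation that is inverted before averaging. All names (`motion_compensated_average`, the phantom, the shifts) are hypothetical:

```python
import numpy as np

def motion_compensated_average(gates, shifts):
    """Warp each respiratory gate back to the reference gate using its
    (known) deformation -- reduced here to an integer translation -- and
    average. A stand-in for the paper's MAP reconstruction step."""
    ref = np.zeros_like(gates[0], dtype=float)
    for g, (dy, dx) in zip(gates, shifts):
        ref += np.roll(np.roll(g, -dy, axis=0), -dx, axis=1)
    return ref / len(gates)

# Toy phantom: a bright square that translates between gates.
phantom = np.zeros((32, 32))
phantom[12:18, 12:18] = 1.0
shifts = [(0, 0), (3, 0), (6, 0)]  # per-gate displacement vs. reference
gates = [np.roll(phantom, dy, axis=0) for dy, _ in shifts]

uncorrected = np.mean(gates, axis=0)              # motion-blurred average
corrected = motion_compensated_average(gates, shifts)
```

Averaging the gates without compensation smears the square along the motion direction, while inverting the per-gate displacement first recovers the sharp phantom.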
The combination of Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) has gained attention over the past five years due to the complementary advantages of these modalities. Whole-Body (WB) PET/MR systems for clinical applications still have room for improvement regarding image-quality optimization. In this paper, the binding process of MR images is addressed, since MR images are processed to obtain a continuous 3D WB image. Issues with automatic binding algorithms are identified, and preliminary post-processing algorithms are proposed.
Multimodality imaging techniques (PET/CT and PET/MR) offer the opportunity to integrate functional and anatomical information to improve clinical diagnostic accuracy. High-resolution PET images can be obtained by exploiting structural information both within and after image reconstruction. Although PET/CT is more utilised than PET/MR in routine clinical practice, the majority of multimodal technologies are validated either on simulations or on clinically acquired PET/MR data. This work describes a PET/CT phantom experiment that provides realistic data for the validation of anatomy-based algorithms in a clinical setting. We performed a PET/CT phantom acquisition combining PET radiotracer concentration and CT Contrast Media (CM) to obtain images with contrast levels similar to clinical [18F]Fluoride bone scans. We performed three acquisitions to cover a wider range of possible clinical situations and to evaluate the performance of image-enhancement algorithms. On the one hand, the CT was used as a prior to regularize the PET image reconstruction; on the other, it was integrated with the functional data into a post-reconstruction resolution-recovery algorithm. Through analysis of the CT acquisition we described the correlation between CM concentration and CT image contrast. We also quantified a 10-20% difference in the recovered PET radioactivity due to incorrect attenuation correction. Furthermore, we were able to quantify the accuracy of the true-activity estimation when integrating anatomical and functional information: the improvement was 9% when CT images were used as a prior within the reconstruction and 12% when they were used for resolution recovery after reconstruction. This experimental procedure aimed to obtain PET/CT contrast similar to that of a patient acquisition.
The results reported can be used to reproduce experiments mimicking a wider range of clinical studies and provide a solid ground truth for the validation of image enhancement algorithms based on the integration of anatomical and functional images.
Different imaging modalities sample different properties of the tissue, so the same tissue may appear different depending on the imaging technique. As a consequence, organs and homogeneous regions often take on different apparent shapes depending on the type of imaging. This presents a problem for ROI-based multi-modality quantitative imaging studies, since it is not clear which modality should be used for data segmentation. An example of such a study is quantitative PET imaging of Parkinson's disease subjects, who often present functional atrophy without anatomical atrophy; a choice must be made between anatomical (MRI) and radioactivity-based (PET) ROIs. In addition, manual ROI placement can be very time consuming and may lack consistency. In this work, we propose a new approach to multi-modality image segmentation. The proposed method generates so-called mixed ROIs that can be computed in a fully automated mode from single-modality-based pure ROIs. The computation of the mixed ROIs is based on the fusion of probability images; the use of fusion principles makes it possible to transition between the pure ROI shapes in a smooth fashion. The mixed ROIs were found to be better aligned with the high-activity regions than the pure MR ROIs, and had higher anatomical fidelity than the pure PET ROIs. Using the method, it is possible to generate a multitude of ROI sets for a particular study starting from one or more previously defined regions.
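The smooth transition between pure ROI shapes can be sketched with a simple convex combination of the two probability images. This is only an illustrative fusion rule (the paper's exact rule may differ), and the `mixed_roi` helper and its parameters are hypothetical:

```python
import numpy as np

def mixed_roi(p_anat, p_func, w=0.5, threshold=0.5):
    """Fuse an anatomical (e.g. MR-based) and a functional (e.g. PET-based)
    probability image into a 'mixed' ROI. w=1 reproduces the pure anatomical
    ROI, w=0 the pure functional one; intermediate w values transition
    smoothly between the two shapes."""
    p_mixed = w * p_anat + (1.0 - w) * p_func
    return p_mixed >= threshold

# Two overlapping, differently shaped Gaussian probability maps.
yy, xx = np.mgrid[0:64, 0:64]
p_anat = np.exp(-((yy - 30.0) ** 2 + (xx - 30.0) ** 2) / (2 * 8.0 ** 2))
p_func = np.exp(-((yy - 34.0) ** 2 + (xx - 34.0) ** 2) / (2 * 5.0 ** 2))

roi_anat = mixed_roi(p_anat, p_func, w=1.0)   # pure anatomical ROI
roi_func = mixed_roi(p_anat, p_func, w=0.0)   # pure functional ROI
roi_mix = mixed_roi(p_anat, p_func, w=0.5)    # intermediate shape
```

By construction the mixed ROI always contains the intersection of the two pure ROIs and stays within their union, so sweeping `w` generates a family of candidate ROI sets from the same pair of probability images.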
It is common for patients to undergo multiple tomographic imaging studies that provide complementary information, but variations in patient orientation and differences in the resolution and contrast of the modalities make it difficult for a clinician to mentally fuse all the image information accurately. There has been considerable interest in using image registration techniques to transfer all the information into a common coordinate frame. In this paper, a multi-modality medical image registration method based on maximization of mutual information is described. Mutual information (MI) measures the statistical dependence between two random variables, or the amount of information that one variable contains about the other. The method applies mutual information to measure the information redundancy between the intensities of corresponding voxels in the two images, which is assumed to be maximal when the images are geometrically aligned. Many important technical issues remain to be solved, such as how to compute MI more accurately and how to find its maximum; these are seldom discussed in published papers. Here we address several implementation issues, for example subsampling, PV interpolation, and an outlier strategy. The combination of these computational techniques and the searching strategy leads to fast and accurate multi-modality image registration. The registration results on 3D human brain volume data, 41 CT-MR and 35 PET-MR pairs from seven patients, were validated to be subvoxel. The registration method is promising for clinical use.
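The MI criterion itself is straightforward to estimate from a joint intensity histogram. The sketch below is illustrative NumPy code, not the paper's implementation (which adds subsampling, PV interpolation and outlier handling); the `mutual_information` helper and the bin count are assumptions:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Plug-in estimate of mutual information between two images from
    their joint intensity histogram, in nats. Higher MI generally
    indicates better geometric alignment."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()     # joint probability
    px = pxy.sum(axis=1)              # marginal of img_a
    py = pxy.sum(axis=0)              # marginal of img_b
    nz = pxy > 0                      # sum only non-zero bins: avoids log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Identical images are perfectly "aligned" and maximize MI; shuffling one
# image destroys the statistical dependence and MI drops to near zero.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
mi_aligned = mutual_information(img, img)
mi_shuffled = mutual_information(img, shuffled)
```

A registration loop would wrap this measure in an optimizer over the transformation parameters, seeking the geometric alignment that maximizes MI.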
The registration of multi-modality medical images is an important tool in surgical applications. We present a method for registering medical images of different modalities from the same patient. It incorporates a prior joint intensity distribution between the two imaging modalities, learned from registered training images and modeled by a support vector machine. Results aligning CT/MR and PET/MR scans demonstrate that the method attains sub-voxel registration accuracy. Furthermore, registration is fast because the support vector machine solution is sparse.
Breast-specific gamma-ray imaging is gaining research attention as a means to improve lesion detectability and patient management. Multimodality imaging is currently a hot topic of research, with a gradually increasing number of systems installed in hospitals and new trends such as PET/MR and SPECT/MR. Cadmium Zinc Telluride (CdZnTe) pixelated detectors have the potential to lead in both these trends, allowing the design of flexible systems that can perform PET and SPECT imaging with the same detector material while being compact enough to be integrated with current MR systems. In the current study, computational models of integrated PET/SPECT systems employing pixelated CdZnTe detectors in a stacked arrangement to achieve high detection efficiency were assessed in terms of spatial resolution. In addition to classic collimators for SPECT imaging, the possibility of electronic collimation is also investigated to greatly enhance the flexibility of the system. The models were constructed using a unique and realistic modeling framework that accurately handles charge-transport effects in the detector and particle interactions in the system. Current preliminary results for PET imaging suggest that sub-centimeter resolution can be achieved.
Some similarity measures used in state-of-the-art multimodality image registration algorithms (e.g., mutual information, MI) have been shown to be suitable anatomical priors for maximum a posteriori reconstruction in emission tomography. It is therefore reasonable to assume that some originally designed anatomical priors may also be well suited to multimodality image registration. In this work, we evaluate the registration performance of three variants of an anatomical Markov prior previously proposed by Bowsher et al. First, simulated data are used to verify whether the suggested registration criteria yield an optimum when an FDG positron emission tomography (PET) image and a T1-weighted magnetic resonance (MR) image of a human brain are perfectly aligned. Next, the registration accuracy of the proposed criteria is assessed for PET-to-MR and MR-to-PET registration of simulated human brain images and compared to the accuracy reached by MI. Finally, the new methods are applied to challenging measured rat and mouse brain data sets, consisting of low-resolution FDG microPET images and high-resolution microMR images with a strong bias field. The anatomy-based Markov priors indeed yield a well-defined optimum for aligned PET-MR images, and registration accuracy similar to that of MI can be achieved, especially for registration to MR images suffering from a bias field. Nevertheless, in contrast to MI, the new criteria usually require a good initial guess of the transformation parameters in order not to get stuck in a local optimum. Given a good initialization, the proposed methods are shown to be superior to MI for registering measured microMR brain images with a strong bias field to FDG microPET images.
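The core idea of Bowsher-type anatomical priors can be sketched in one dimension: for each voxel, only the neighbours with the most similar *anatomical* intensities are kept, and squared emission-image differences are penalized over those neighbours only, so smoothing is encouraged within, but not across, anatomical boundaries. This is an illustrative simplification (the published prior operates on 3-D neighbourhoods); the `bowsher_penalty` helper and all parameters are hypothetical:

```python
import numpy as np

def bowsher_penalty(pet, anat, n_select=2):
    """1-D sketch of a Bowsher-style penalty: per voxel, select the
    n_select neighbours most similar in the anatomical image, then sum
    squared PET differences over the selected neighbours only."""
    n = len(pet)
    penalty = 0.0
    for j in range(n):
        nbrs = [k for k in (j - 2, j - 1, j + 1, j + 2) if 0 <= k < n]
        # Rank neighbours by anatomical similarity; keep the best n_select.
        nbrs.sort(key=lambda k: abs(anat[k] - anat[j]))
        for k in nbrs[:n_select]:
            penalty += (pet[j] - pet[k]) ** 2
    return penalty

# Piecewise-constant anatomy with a sharp edge in the middle.
anat = np.array([0., 0., 0., 0., 10., 10., 10., 10.])
pet_follows_edge = np.array([1., 1., 1., 1., 5., 5., 5., 5.])   # edge matches anatomy
pet_blurred_edge = np.array([1., 1., 1., 2., 4., 5., 5., 5.])   # edge smeared

p_sharp = bowsher_penalty(pet_follows_edge, anat)
p_blur = bowsher_penalty(pet_blurred_edge, anat)
```

A PET image whose edge coincides with the anatomical boundary incurs no penalty, while a blurred edge is penalized; used as a registration criterion, the penalty is therefore smallest when the two images are aligned.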
This work deals with the use of a probabilistic quad-tree graph (Hidden Markov Tree, HMT) to provide fast computation, improved robustness, and an effective interpretational framework for image analysis and processing in oncology. Thanks to two efficient aspects of the HMT (multi-observation and multi-resolution) and to Bayesian inference, we exploited joint statistical dependencies between hidden states to handle the entire data stack. This new flexible framework was applied first to monomodal PET image denoising, considering the wavelet and contourlet transforms simultaneously through the multi-observation capability of the model. Second, the developed approach was tested for multi-modality image segmentation, in order to take advantage of the high resolution of the morphological computed tomography (CT) image and the high contrast of the functional positron emission tomography (PET) image. On the one hand, denoising through the combined wavelet-contourlet multi-observation HMT led to the best trade-off between denoising and quantitative bias compared to wavelet-only or contourlet-only denoising. On the other hand, PET/CT segmentation led to reliable tumor segmentation, taking advantage of the complementary PET and CT information regarding tissues of interest. Future work will investigate the potential of the HMT for PET/MR and multi-tracer PET image analysis, as well as the added value of Pairwise Markov Tree (PMT) models and evidence theory in this context.
Image registration based on mutual information offers high accuracy and robustness. Unfortunately, the mutual information function is generally not smooth but contains many local maxima, which strongly affects optimization. This paper proposes a registration method based on the Quantum-behaved Particle Swarm Optimization (QPSO) algorithm. Not only does QPSO have fewer parameters to control, but its sampling space at each iteration also covers the whole solution space, so QPSO can find a good solution quickly and is guaranteed to be globally convergent. Experiments show that this registration method can efficiently escape the local maxima of the mutual information function and improve accuracy; compared with the gold standard, subvoxel accuracy is achieved.
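A minimal QPSO sketch following the standard mean-best formulation is shown below. Particles carry no velocity; each is resampled around a stochastic attractor with a spread set by its distance to the swarm's mean-best position, which is what lets the sampling cover the whole solution space at every iteration. The multimodal Rastrigin function stands in for a (negated) mutual-information surface with many local optima; the `qpso` helper and all parameter values are illustrative:

```python
import numpy as np

def qpso(cost, dim=2, n_particles=20, iters=200, alpha=0.75, seed=0):
    """Minimal Quantum-behaved PSO (mean-best formulation)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    pbest = x.copy()                                  # personal bests
    pcost = np.array([cost(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()              # global best
    for _ in range(iters):
        mbest = pbest.mean(axis=0)                    # mean best position
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * gbest   # local attractors
        u = rng.random((n_particles, dim))
        sign = np.where(rng.random((n_particles, dim)) < 0.5, 1.0, -1.0)
        # Resample each particle around its attractor; the spread scales
        # with its distance to mbest, so jumps of any size remain possible.
        x = attractor + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        c = np.array([cost(p) for p in x])
        improved = c < pcost
        pbest[improved], pcost[improved] = x[improved], c[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Rastrigin: many local minima, global minimum 0 at the origin.
rastrigin = lambda v: 10 * len(v) + np.sum(v**2 - 10 * np.cos(2 * np.pi * v))
best_x, best_f = qpso(rastrigin)
```

For registration, `cost` would be the negated MI between the reference image and the floating image warped by the candidate transformation parameters.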
No standards are currently tagged "PET/MR Multimodality Medical Imaging"