Development of A Novel Virtual Tool for Donor Heart Fitting

Description

Heart transplantation is the final treatment option for end-stage heart failure. In the United States, 70 pediatric patients die annually on the waitlist while 800 well-functioning organs are discarded. Concern for potential size-mismatch is one source of allograft waste and high waitlist mortality. Clinicians use the donor-recipient body weight (DRBW) ratio, a standalone metric, to evaluate allograft size-match. However, this body weight metric is far removed from cardiac anatomy and neglects an individual's anatomical variations. This thesis developed a novel virtual heart transplant fit assessment tool and investigated the tool's clinical utility to help clinicians safely expand patient donor pools.

The tool allows surgeons to take an allograft reconstruction and fuse it to a patient's CT or MR medical image for virtual fit assessment. The allograft is either a reconstruction of the donor's actual heart (from CT or MR images) or an analogue from a healthy heart library. The analogue allograft geometry is identified from gross donor parameters using a regression model built herein. The regression model is needed because donor images may not exist or may not become available within the time window clinicians have to make a provisional acceptance of an offer.
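
The abstract does not give the form of the regression model, so the sketch below is only illustrative: a least-squares fit mapping hypothetical gross donor parameters (weight, height, age) to a single allograft size metric, which is then used to pick the nearest analogue from a small, made-up heart library.

```python
# Illustrative sketch (not the thesis's actual model): fit a linear regression
# that maps gross donor parameters to an allograft size metric, then use it to
# pick the closest analogue geometry from a healthy-heart library.
import numpy as np

# Hypothetical training data: [weight (kg), height (cm), age (yr)] -> cardiac volume (mL)
donor_params = np.array([
    [12.0,  90.0,  2.0],
    [20.0, 115.0,  6.0],
    [35.0, 140.0, 10.0],
    [55.0, 165.0, 15.0],
])
cardiac_volume = np.array([95.0, 160.0, 280.0, 430.0])

# Least-squares fit with an intercept term
X = np.column_stack([donor_params, np.ones(len(donor_params))])
coeffs, *_ = np.linalg.lstsq(X, cardiac_volume, rcond=None)

def predict_volume(weight_kg, height_cm, age_yr):
    """Predict an allograft size metric from gross donor parameters."""
    return np.array([weight_kg, height_cm, age_yr, 1.0]) @ coeffs

# Choose the library heart whose volume is closest to the prediction
library = {"heart_A": 150.0, "heart_B": 250.0, "heart_C": 400.0}  # mL, hypothetical
pred = predict_volume(30.0, 130.0, 8.0)
analogue = min(library, key=lambda k: abs(library[k] - pred))
print(f"predicted volume ~ {pred:.0f} mL, nearest library analogue: {analogue}")
```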

The tool's assessment suggested that more than 20% of upper DRBW listings at Phoenix Children's Hospital (PCH) could have been increased. Upper DRBW listings in the UNOS national database were statistically smaller than those at PCH (p-values < 0.001). Delayed sternal closure and surgeon-perceived complication variables were associated (p-value: 0.000016), with 9 of the 11 cases in which surgeons perceived fit-related complications also having delayed closures (p-value: 0.034809).

A tool to assess allograft size-match has been developed. Findings warrant future preclinical and clinical prospective studies to further assess the tool’s clinical utility.
Date Created
2018

The effects of endovascular treatment parameters on cerebral aneurysm hemodynamics

Description

A cerebral aneurysm is an abnormal ballooning of the blood vessel wall in the brain that occurs in approximately 6% of the general population. When a cerebral aneurysm ruptures, the resulting damage is lethal in nearly 50% of cases. Over the past decade, endovascular treatment has emerged as an effective treatment option for cerebral aneurysms that is far less invasive than conventional surgical options. Nonetheless, the rate of successful treatment is as low as 50% for certain types of aneurysms. Treatment success has been correlated with favorable post-treatment hemodynamics. However, current understanding of the effects of endovascular treatment parameters on post-treatment hemodynamics is limited. This limitation is due in part to current challenges in in vivo flow measurement techniques. Improved understanding of post-treatment hemodynamics can lead to more effective treatments. However, the effects of treatment on hemodynamics may be patient-specific, and thus accurate tools that can predict hemodynamics on a case-by-case basis are also required for improving outcomes. Accordingly, the main objectives of this work were 1) to develop computational tools for predicting post-treatment hemodynamics and 2) to build a foundation of understanding of the effects of controllable treatment parameters on cerebral aneurysm hemodynamics. Experimental flow measurement techniques, using particle image velocimetry, were first developed for acquiring flow data in cerebral aneurysm models treated with an endovascular device. The experimental data were then used to guide the development of novel computational tools, which consider the physical properties, design specifications, and deployment mechanics of endovascular devices to simulate post-treatment hemodynamics. The effects of different endovascular treatment parameters on cerebral aneurysm hemodynamics were then characterized under controlled conditions. Lastly, application of the computational tools for interventional planning was demonstrated through the evaluation of two patient cases.
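
As a loose illustration of how post-treatment hemodynamics are often summarized (this is not the dissertation's method; the velocity fields, sac mask, and grid below are synthetic placeholders), one can compare the mean intra-aneurysmal velocity before and after treatment:

```python
# Illustrative sketch only: one common summary of post-treatment hemodynamics
# is the reduction in mean intra-aneurysmal velocity between untreated and
# treated simulations (or PIV measurements). All values here are synthetic.
import numpy as np

def mean_sac_velocity(u, v, w, sac_mask):
    """Spatially averaged velocity magnitude inside the aneurysm sac."""
    speed = np.sqrt(u**2 + v**2 + w**2)
    return speed[sac_mask].mean()

# Synthetic stand-ins for pre/post-treatment velocity fields on a common grid
rng = np.random.default_rng(0)
shape = (32, 32, 32)
sac = np.zeros(shape, dtype=bool)
sac[10:22, 10:22, 10:22] = True                                  # fake sac region
u_pre, v_pre, w_pre = (rng.normal(0, 0.10, shape) for _ in range(3))   # m/s
u_post, v_post, w_post = (rng.normal(0, 0.03, shape) for _ in range(3))

reduction = 1.0 - (mean_sac_velocity(u_post, v_post, w_post, sac)
                   / mean_sac_velocity(u_pre, v_pre, w_pre, sac))
print(f"mean sac velocity reduction: {reduction:.1%}")
```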
Date Created
2013

Analytical control grid registration for efficient application of optical flow

Description

Image resolution limits the extent to which zooming enhances clarity, restricts the size at which digital photographs can be printed, and, in the context of medical images, can prevent a diagnosis. Interpolation is the supplementing of known data with estimated values based on a function or model involving some or all of the known samples. The selection of the contributing data points and the specifics of how they are used to define the interpolated values influence how effectively the interpolation algorithm is able to estimate the underlying, continuous signal. The main contributions of this dissertation are threefold: 1) reframing edge-directed interpolation of a single image as an intensity-based registration problem; 2) providing an analytical framework for intensity-based registration using control grid constraints; and 3) quantitatively assessing the new, single-image enlargement algorithm based on analytical intensity-based registration. In addition to single-image resizing, the new methods and analytical approaches were extended to address a wide range of applications including volumetric (multi-slice) image interpolation, video deinterlacing, motion detection, and atmospheric distortion correction. Overall, the new approaches generate results that more accurately reflect the underlying signals than less computationally demanding approaches, with lower processing requirements and fewer restrictions than methods with comparable accuracy.
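
A minimal sketch of the control-grid idea referenced above, assuming a simple bilinear node-to-pixel interpolation and SciPy for resampling (the dissertation's analytical registration framework is not reproduced here):

```python
# Minimal sketch (my illustration, not the dissertation's algorithm): in
# control grid interpolation, displacements are defined only at sparse grid
# nodes, bilinearly interpolated to every pixel, and the dense field is then
# used to resample the image.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_with_control_grid(image, du_nodes, dv_nodes):
    """Warp `image` using row/column displacements given on a coarse node grid."""
    rows, cols = image.shape
    # Bilinearly upsample node displacements to a dense per-pixel field
    du = zoom(du_nodes, (rows / du_nodes.shape[0], cols / du_nodes.shape[1]), order=1)
    dv = zoom(dv_nodes, (rows / dv_nodes.shape[0], cols / dv_nodes.shape[1]), order=1)
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Sample the image at the displaced coordinates
    return map_coordinates(image, [rr + du, cc + dv], order=1, mode="nearest")

image = np.random.default_rng(1).random((64, 64))
du_nodes = np.zeros((5, 5))
dv_nodes = np.full((5, 5), 1.5)   # uniform 1.5-pixel horizontal displacement
warped = warp_with_control_grid(image, du_nodes, dv_nodes)
```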
Date Created
2013

Video deinterlacing using control grid interpolation frameworks

Description

Video deinterlacing is a key technique in digital video processing, particularly with the widespread use of LCD and plasma TVs. This thesis proposes a novel spatio-temporal, non-linear video deinterlacing technique that adaptively chooses between the results from one-dimensional control grid interpolation (1DCGI), the vertical temporal filter (VTF), and temporal line averaging (LA). The proposed method performs better than several popular benchmark methods in terms of both visual quality and peak signal-to-noise ratio (PSNR). The algorithm outperforms existing approaches such as edge-based line averaging (ELA) and spatio-temporal edge-based median filtering (STELA) on fine moving edges and semi-static regions of videos, which are recognized as particularly challenging deinterlacing cases. The proposed approach also performs better than the state-of-the-art content adaptive vertical temporal filtering (CAVTF) approach. Along with the main approach, several spin-off approaches are also proposed, each with its own characteristics.
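
For intuition only, the toy deinterlacer below switches per pixel between a temporal average and spatial line averaging based on a simple motion test; the thesis's selection among 1DCGI, VTF, and LA is more sophisticated, and the threshold and field layout here are assumptions:

```python
# Toy motion-adaptive deinterlacer, for intuition only: per pixel it chooses a
# temporal average in static regions and spatial line averaging (LA) where
# motion is detected. The 1DCGI/VTF selection used in the thesis is not shown.
import numpy as np

def deinterlace_field(prev_frame, curr_field, next_frame, parity, motion_thresh=10.0):
    """Fill the rows missing from `curr_field` (rows with index % 2 != parity)."""
    out = curr_field.astype(float).copy()
    for r in range(1 - parity, out.shape[0], 2):           # missing rows
        above = out[r - 1] if r >= 1 else out[r + 1]
        below = out[r + 1] if r + 1 < out.shape[0] else out[r - 1]
        spatial = 0.5 * (above + below)                     # line averaging (LA)
        temporal = 0.5 * (prev_frame[r] + next_frame[r])    # temporal average
        motion = np.abs(prev_frame[r].astype(float) - next_frame[r])
        out[r] = np.where(motion < motion_thresh, temporal, spatial)
    return out

rng = np.random.default_rng(2)
prev_f = rng.integers(0, 255, (8, 8)).astype(float)
next_f = prev_f.copy()                  # static scene, so the temporal branch wins
field = prev_f.copy()
field[1::2] = 0                         # odd rows are missing in this field (parity 0)
print(deinterlace_field(prev_f, field, next_f, parity=0)[1])  # reconstructed row 1
```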
Date Created
2012

Rapid 3D phase contrast magnetic resonance angiography through high-moment velocity encoding and 3D parallel imaging

Description

Phase contrast magnetic resonance angiography (PCMRA) is a non-invasive imaging modality that is capable of producing quantitative vascular flow velocity information. The encoding of velocity information can significantly increase the acquisition and reconstruction durations associated with this technique. The purpose of this work is to provide mechanisms for reducing the scan time of a 3D phase contrast exam so that hemodynamic velocity data may be acquired robustly and with high sensitivity. The methods developed in this work focus on reducing the scan duration and reconstruction computation of a neurovascular PCMRA exam. The reductions in scan duration are made through a combination of advances in imaging and velocity encoding methods. The imaging improvements are explored using rapid 3D imaging techniques such as spiral projection imaging (SPI), Fermat looped orthogonally encoded trajectories (FLORET), stack-of-spirals, and stack-of-cones trajectories. Scan durations are also shortened through the use and development of a novel parallel imaging technique called Pretty Easy Parallel Imaging (PEPI). Improvements in the computational efficiency of PEPI, and of MRI reconstruction in general, are made in the areas of sample density estimation and correction of 3D trajectories. A new method of velocity encoding is demonstrated to provide more efficient signal-to-noise ratio (SNR) gains than current state-of-the-art methods. The proposed velocity encoding achieves improved SNR through the use of high gradient moments and by resolving phase aliasing through the use of measurement geometry and non-linear constraints.
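
For background, conventional phase-contrast encoding recovers velocity from the phase difference between two flow-encoded acquisitions and aliases once the speed exceeds the chosen VENC; the sketch below shows that standard relation (v = VENC * dphi / pi), not the high-moment, non-linearly constrained encoding proposed in this work:

```python
# Background sketch of conventional phase-contrast velocity encoding (not the
# high-moment / non-linear-constraint method proposed in this work): velocity
# is recovered from the phase difference of two acquisitions and wraps
# (aliases) once |v| exceeds the chosen VENC.
import numpy as np

def pc_velocity(phase_pos, phase_neg, venc_cm_s):
    """Map a phase difference (radians) to velocity; aliases outside +/- VENC."""
    dphi = np.angle(np.exp(1j * (phase_pos - phase_neg)))  # wrap to (-pi, pi]
    return venc_cm_s * dphi / np.pi

venc = 80.0                                   # cm/s
true_v = np.array([20.0, 60.0, 100.0])        # cm/s; the last value exceeds VENC
dphi_true = np.pi * true_v / venc
print(pc_velocity(dphi_true, np.zeros_like(dphi_true), venc))
# -> [ 20.  60. -60.]  (the 100 cm/s value aliases because it exceeds VENC)
```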
Date Created
2011

Advanced methods in post-Cartesian imaging

Description

Magnetic resonance (MR) imaging with data acquisition on a non-rectangular grid permits a variety of approaches to cover k-space. This flexibility can be exploited to achieve clinically relevant characteristics: fast yet full coverage for short scan times, center-out schemes for short TE, over-sampled k-space for robustness to motion, long acquisition times for improved signal-to-noise ratio (SNR) performance, and benign under-sampling (aliasing) artifacts. This dissertation presents advances in Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction (PROPELLER) trajectory design and improved reconstruction for spiral imaging. Scan time in PROPELLER imaging can be reduced by tailoring the trajectory to the required field of view (FOV). A technique to design the PROPELLER trajectory for an elliptical FOV is described. The proposed solution is a set of empirically derived closed-form equations that preserve the standard PROPELLER geometry and specify the minimum number of blades necessary. Reconstructing spiral scans requires accurate trajectory information. A simple method to measure the deviation from the designed trajectory due to gradient coupling is presented. A line phantom is used to force a uniform structure in a predetermined orientation in k-space. This uniformity permits measurement of zeroth-order trajectory deviations due to gradient coupling. Spiral reconstruction is also sensitive to B0 inhomogeneities (variations in the external magnetic field). This sensitivity manifests itself as a spatially varying blur. An algorithm to correct for concomitant field and first-order B0 inhomogeneity effects is developed based on de-blurring via convolution with separable kernels. To reduce computation time, an empirical equation for sufficient kernel length is derived. The noise characteristics of the proposed algorithm are also investigated via Monte Carlo simulations. The algorithm is further extended to correct for concomitant field artifacts by modeling them as blurring due to a temporally static field map. This approach has the potential to further reduce computational cost by combining the B0 map with the concomitant field map to simultaneously correct for artifacts resulting from both field inhomogeneities and the concomitant field.
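
For context, the standard circular-FOV Nyquist argument fixes the minimum PROPELLER blade count from the matrix size and the number of lines per blade; the sketch below implements that conventional relation (the elliptical-FOV closed-form equations derived in this dissertation are not reproduced here):

```python
# Sketch of the conventional circular-FOV blade-count argument for PROPELLER.
# Each blade is L lines wide in k-space; the gap between neighboring blades at
# the edge of k-space must not exceed the blade width.
import math

def min_blades_circular(matrix_size, lines_per_blade):
    """Minimum blades to satisfy Nyquist at the edge of a circular k-space."""
    # Gap at radius k_max = (N/2)*dk for angular step dtheta is (N/2)*dk*dtheta.
    # Requiring gap <= L*dk over 180 degrees of blade rotation gives:
    return math.ceil(math.pi * matrix_size / (2 * lines_per_blade))

print(min_blades_circular(matrix_size=256, lines_per_blade=16))  # -> 26
```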
Date Created
2010