The concentration factor edge detection method was developed to compute the locations and values of jump discontinuities in a piecewise-analytic function from its first few Fourier series coefficients. The method approximates the singular support of a piecewise smooth function using an altered Fourier conjugate partial sum. The accuracy and characteristic features of the resulting jump function approximation depend on these filters, known as concentration factors. Recent research showed that these concentration factors can be designed using a flexible iterative framework, improving the overall accuracy and robustness of the method, especially when some Fourier data are untrustworthy or altogether missing. Hypothesis testing (HT) methods were used to determine how well the original concentration factor method could locate edges using noisy Fourier data. This thesis combines the iterative design of concentration factors with hypothesis testing by presenting a new algorithm that incorporates multiple concentration factors into one statistical test, which proves more effective at determining jump discontinuities than the previous HT methods. This thesis also examines how the quantity and location of Fourier data affect the accuracy of the HT methods. Numerical examples are provided.
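For orientation, the jump function approximation at the heart of the method can be written as $S_N^\sigma[f](x) = i\sum_{0<|k|\le N}\hat f_k\,\mathrm{sgn}(k)\,\sigma(|k|/N)\,e^{ikx}$, which converges to $[f](x) = f(x^+)-f(x^-)$ for admissible concentration factors $\sigma$. The sketch below uses the first-order polynomial factor $\sigma(\eta)=\pi\eta$ and a simple step function; both are illustrative assumptions rather than the iteratively designed factors or statistical tests developed in this thesis.

```python
import numpy as np

def jump_approximation(fhat, x, p=1):
    """Concentration factor jump approximation (a minimal sketch).

    fhat : Fourier coefficients fhat[k], k = -N..N, of a 2*pi-periodic function f
    x    : evaluation points
    p    : order of the (assumed) polynomial concentration factor sigma(eta) = p*pi*eta^p
    """
    N = (len(fhat) - 1) // 2
    k = np.arange(-N, N + 1)
    sigma = p * np.pi * (np.abs(k) / N) ** p       # polynomial concentration factor
    terms = 1j * fhat * np.sign(k) * sigma          # weights of the altered conjugate sum
    return np.real(np.exp(1j * np.outer(x, k)) @ terms)

# Example: f(x) = sign(x) on [-pi, pi) has jumps of +2 at x = 0 and -2 at x = +/-pi.
N = 64
k = np.arange(-N, N + 1)
fhat = np.zeros(2 * N + 1, dtype=complex)
odd = (k % 2 != 0)
fhat[odd] = 2.0 / (1j * np.pi * k[odd])             # exact coefficients of sign(x)
x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
jf = jump_approximation(fhat, x)
print(jf[np.argmin(np.abs(x))])                     # should be close to the jump value 2 at x = 0
```

The approximation is essentially zero away from the discontinuities and peaks at the jump values, which is the behavior the statistical tests in this thesis are designed to exploit.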
Properties of divergence-free vector field interpolants are explored on uniform and scattered nodes, along with their application to fluid flow problems. These interpolants may be applied to physical problems that require the approximant to have zero divergence, such as the velocity field in the incompressible Navier-Stokes equations and the magnetic and electric fields in Maxwell's equations. In addition, the methods studied here are meshfree and are suitable for problems defined on complex domains, where mesh generation is computationally expensive or inaccurate, or for problems where the data are only available at scattered locations.
The contributions of this work include a detailed comparison between standard and divergence-free radial basis function approximations, a study of the Lebesgue constants for divergence-free approximations and their dependence on node placement, and an investigation of the flat limit of divergence-free interpolants. Finally, numerical solvers for the incompressible Navier-Stokes equations in primitive variables are implemented using discretizations based on traditional and divergence-free kernels. The numerical results are compared to reference solutions obtained with a spectral method.
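For context, divergence-free interpolants of this type are typically built from a matrix-valued kernel $\Phi = (-\Delta I + \nabla\nabla^{\mathsf T})\phi$, where $\phi$ is a scalar RBF; every column of $\Phi$ is divergence-free, so the fitted vector field inherits zero divergence exactly. The sketch below assumes a two-dimensional Gaussian $\phi$ with an arbitrarily chosen shape parameter and test field; it is a minimal illustration, not the node sets or kernels studied in the thesis.

```python
import numpy as np

def divfree_gaussian_kernel(d, eps):
    """2x2 divergence-free kernel Phi(d) = (-Lap*I + grad grad^T) exp(-eps^2 |d|^2) in 2D."""
    r2 = d[0] ** 2 + d[1] ** 2
    g = np.exp(-eps ** 2 * r2)
    a = 2 * eps ** 2 - 4 * eps ** 4 * r2
    return np.array([[a + 4 * eps ** 4 * d[0] * d[0], 4 * eps ** 4 * d[0] * d[1]],
                     [4 * eps ** 4 * d[1] * d[0], a + 4 * eps ** 4 * d[1] * d[1]]]) * g

def divfree_interpolant(nodes, values, eps):
    """Fit coefficients c_k so that sum_k Phi(x_j - x_k) c_k matches values[j] at every node."""
    n = len(nodes)
    K = np.zeros((2 * n, 2 * n))
    for j in range(n):
        for k in range(n):
            K[2 * j:2 * j + 2, 2 * k:2 * k + 2] = divfree_gaussian_kernel(nodes[j] - nodes[k], eps)
    coeffs = np.linalg.solve(K, values.ravel())
    def s(x):
        out = np.zeros(2)
        for k in range(n):
            out += divfree_gaussian_kernel(x - nodes[k], eps) @ coeffs[2 * k:2 * k + 2]
        return out
    return s

# Example: interpolate the divergence-free field u = (-sin(y), sin(x)) at scattered nodes.
rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, size=(40, 2))
values = np.column_stack([-np.sin(nodes[:, 1]), np.sin(nodes[:, 0])])
s = divfree_interpolant(nodes, values, eps=2.0)
print(s(np.array([0.1, 0.2])), [-np.sin(0.2), np.sin(0.1)])  # interpolant vs. true field
```

The interpolated field is divergence-free by construction, whereas a componentwise scalar RBF interpolant generally is not; that contrast is one of the comparisons carried out in this work.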
High-order methods are known for their accuracy and computational performance when applied to solving partial differential equations and have widespread use
in representing images compactly. Nonetheless, high-order methods have difficulty representing functions containing discontinuities or functions having slow spectral decay in the chosen basis. Certain sensing techniques such as MRI and SAR provide data in terms of Fourier coefficients, and thus prescribe a natural high-order basis. The field of compressed sensing has introduced a set of techniques based on $\ell^1$ regularization that promote sparsity and facilitate working with functions having discontinuities. In this dissertation, high-order methods and $\ell^1$ regularization are used to address three problems: reconstructing piecewise smooth functions from sparse and noisy Fourier data, recovering edge locations in piecewise smooth functions from sparse and noisy Fourier data, and reducing time-stepping constraints when numerically solving certain time-dependent hyperbolic partial differential equations.
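As a small illustration of the $\ell^1$ machinery, the sketch below recovers a sparse vector from a subset of its discrete Fourier coefficients by solving $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$ with iterative soft thresholding (ISTA). The test signal, sampling pattern, and $\lambda$ are illustrative assumptions; the dissertation's reconstructions target piecewise smooth functions and their edges (sparsity in a transformed domain) rather than literally sparse vectors.

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - (A.conj().T @ (A @ x - b)) / L          # gradient step
        x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x

n = 128
rng = np.random.default_rng(1)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)                  # unitary DFT matrix
rows = rng.choice(n, 40, replace=False)                 # keep 40 of 128 Fourier samples
A = F[rows, :]
b = A @ x_true + 0.01 * (rng.standard_normal(40) + 1j * rng.standard_normal(40))
x_rec = ista(A, b, lam=0.01)
print(np.linalg.norm(x_rec.real - x_true) / np.linalg.norm(x_true))
```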
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled to be used at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e., one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling by employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments are used to demonstrate the performance of the proposed scheme.
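One building block, column subset selection via rank-revealing (column-pivoted) QR, is easy to sketch: applying it to the transpose of the observability matrix orders the candidate measurements so that a well-conditioned subset can be kept. The system matrices, horizon, and the particular way measurements are grouped below are illustrative assumptions rather than the scheduling algorithm developed in this dissertation.

```python
import numpy as np
from scipy.linalg import qr

def observability_matrix(A, C, steps):
    """Stack C, CA, CA^2, ... for a discrete-time system x_{k+1} = A x_k, y_k = C x_k."""
    blocks, M = [], C.copy()
    for _ in range(steps):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def select_measurements(O, k):
    """Pick k rows of O via column-pivoted (rank-revealing) QR applied to O^T.

    The pivot order ranks candidate measurements so the selected submatrix
    stays well conditioned."""
    _, _, piv = qr(O.T, pivoting=True, mode='economic')
    return piv[:k]

# Toy example: 4-state system, 6 candidate sensors (rows of C), keep 4 measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) / 2
C = rng.standard_normal((6, 4))
O = observability_matrix(A, C, steps=3)          # 18 candidate measurements
rows = select_measurements(O, 4)
print("selected measurement indices:", rows)
print("condition number of selection:", np.linalg.cond(O[rows, :]))
```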
Detecting edges in images from a finite sampling of Fourier data is important in a variety of applications. For example, internal edge information can be used to identify tissue boundaries of the brain in a magnetic resonance imaging (MRI) scan, which is an essential part of clinical diagnosis. Likewise, it can be used to identify targets from synthetic aperture radar data. Edge information is also critical in determining regions of smoothness so that high resolution reconstruction algorithms, i.e., those that do not “smear over” the internal boundaries of an image, can be applied. In some applications, such as MRI, the sampling patterns may be designed to oversample the low frequency modes while sampling the high frequency modes more sparsely. This type of non-uniform sampling creates additional difficulties in processing the image. In particular, there is no fast reconstruction algorithm, since the FFT is not applicable. However, interpolating such highly non-uniform Fourier data to the uniform coefficients (so that the FFT can be employed) may introduce large errors in the high frequency modes, which is especially problematic for edge detection. Convolutional gridding, also referred to as the non-uniform FFT, is a forward method that uses a convolution process to obtain uniform Fourier data so that the FFT can be directly applied to recover the underlying image. Carefully chosen parameters ensure that the algorithm retains accuracy in the high frequency coefficients. Similarly, the convolutional gridding edge detection algorithm developed in this paper provides an efficient and robust way to calculate edges. We demonstrate our technique in one and two dimensional examples.
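The mechanics of convolutional gridding are easy to sketch in one dimension: the nonuniform Fourier samples are convolved with the transform of a smooth window onto an integer frequency grid, an inverse FFT returns the windowed image on a uniform grid, and dividing by the window undoes the smoothing. The Gaussian window, its parameter, and the gap-based density-compensation weights below are illustrative assumptions; practical implementations typically use Kaiser-Bessel windows and more carefully designed parameters.

```python
import numpy as np

# 1D convolutional gridding sketch: nonuniform Fourier samples -> uniform grid -> FFT.
tau = 1.0                                                  # Gaussian window parameter (assumed)
psi = lambda x: np.exp(-x ** 2 / (4 * tau))                # window in physical space
psi_hat = lambda w: np.sqrt(tau / np.pi) * np.exp(-tau * w ** 2)   # its Fourier transform

# "Measured" nonuniform Fourier data of f(x) = cos(3x) on [-pi, pi) (closed form).
fhat = lambda w: 0.5 * (np.sinc(w - 3) + np.sinc(w + 3))
rng = np.random.default_rng(3)
omega = np.sort(np.linspace(-48, 48, 800) + rng.uniform(-0.05, 0.05, 800))  # jittered frequencies
data = fhat(omega)

# Density-compensation (quadrature) weights: half the gap to each neighbor.
w = np.zeros_like(omega)
w[1:-1] = (omega[2:] - omega[:-2]) / 2
w[0], w[-1] = omega[1] - omega[0], omega[-1] - omega[-2]

# Spread onto the integer frequency grid k = -N..N (convolution with psi_hat).
N = 32
ks = np.arange(-N, N + 1)
g = np.array([np.sum(w * data * psi_hat(k - omega)) for k in ks])

# FFT back to physical space, then divide out the window.
M = 2 * N + 1
h = np.zeros(M, dtype=complex)
h[ks % M] = g * (-1.0) ** ks                               # reindex k = -N..N onto FFT ordering
x = -np.pi + 2 * np.pi * np.arange(M) / M
f_rec = np.real(M * np.fft.ifft(h)) / psi(x)

err = np.max(np.abs(f_rec - np.cos(3 * x))[np.abs(x) < np.pi / 2])
print("max interior reconstruction error:", err)
```

In the edge detection setting, the same spreading step is applied to the concentration-factor-weighted data rather than to the raw coefficients, so the FFT directly returns a jump function approximation.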
Nonuniform Fourier data are routinely collected in applications such as magnetic resonance imaging, synthetic aperture radar, and synthesis imaging in radio astronomy. To acquire a fast reconstruction that does not require an online inverse process, the nonuniform fast Fourier transform (NFFT), also called convolutional gridding, is frequently employed. While various investigations have led to improvements in accuracy, efficiency, and robustness of the NFFT, not much attention has been paid to the fundamental analysis of the scheme, and in particular its convergence properties. This paper analyzes the convergence of the NFFT by casting it as a Fourier frame approximation. In so doing, we are able to design parameters for the method that satisfy conditions for numerical convergence. Our so-called frame theoretic convolutional gridding algorithm can also be applied to detect features (such as edges) from nonuniform Fourier samples of piecewise smooth functions.
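In the frame-theoretic view, the nonuniform samples $2\pi\hat f(\omega_j) = \langle f, e^{i\omega_j\cdot}\rangle$ are frame coefficients, and convergence hinges on the frame bounds. A quick finite-dimensional check of those bounds, restricted to trigonometric polynomials of degree at most N on $[-\pi,\pi]$, is sketched below; the sampling patterns are illustrative assumptions, and the computation is only a numerical illustration of why missing low frequencies degrade the lower frame bound, not the paper's analysis.

```python
import numpy as np

def frame_bounds(omega, N):
    """Frame bounds of {exp(i*omega_j*x)} acting on trig polynomials of degree <= N.

    For f(x) = sum_k c_k exp(ikx) on [-pi, pi], the analysis map sends the coefficient
    vector c to the nonuniform inner products 2*pi*fhat(omega_j); the bounds are the
    extreme eigenvalues of the resulting finite frame operator."""
    k = np.arange(-N, N + 1)
    T = 2 * np.pi * np.sinc(k[None, :] - omega[:, None])   # analysis matrix
    s = np.linalg.svd(T, compute_uv=False)
    return s[-1] ** 2 / (2 * np.pi), s[0] ** 2 / (2 * np.pi)

rng = np.random.default_rng(4)
N = 16
jittered = np.arange(-24, 25) + rng.uniform(-0.2, 0.2, 49)   # dense, near-uniform sampling
gappy = jittered[np.abs(jittered) > 5]                        # same pattern, low frequencies missing
print("jittered sampling  A, B =", frame_bounds(jittered, N))
print("gappy sampling     A, B =", frame_bounds(gappy, N))    # lower bound A drops sharply
```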
This investigation seeks to establish the practicality of numerical frame approximations. Specifically, it develops a new method to approximate the inverse frame operator and analyzes its convergence properties. It is established that sampling with well-localized frames improves both the accuracy of the numerical frame approximation and the robustness and efficiency of the (finite) frame operator inversion. Moreover, in applications such as magnetic resonance imaging, where the given data may not constitute a well-localized frame, a technique is devised to project the corresponding frame data onto a more suitable frame. As a result, the target function may be approximated as a finite expansion with its asymptotic convergence solely dependent on its smoothness. Numerical examples are provided.
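In the finite-dimensional setting the objects involved are concrete: if the rows of a matrix T are the (conjugated) frame vectors, the frame operator is S = T*T, the frame bounds are its extreme eigenvalues, and a signal is recovered from its frame coefficients c = Tf via the canonical dual, f = S^{-1}T*c. The random frame and noise level in the sketch below are illustrative assumptions, not the localized frames or projection technique developed in this work.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 20, 60                                    # ambient dimension, number of frame vectors
T = rng.standard_normal((m, n))                  # analysis operator: rows are frame vectors
f = rng.standard_normal(n)                       # signal to recover
c = T @ f + 0.01 * rng.standard_normal(m)        # noisy frame coefficients <f, phi_j>

S = T.T @ T                                      # (finite) frame operator
A, B = np.linalg.eigvalsh(S)[[0, -1]]            # frame bounds = extreme eigenvalues of S
f_rec = np.linalg.solve(S, T.T @ c)              # canonical dual frame reconstruction

print("frame bounds A, B:", A, B)
print("relative recovery error:", np.linalg.norm(f_rec - f) / np.linalg.norm(f))
```

The ratio B/A controls how much the inversion amplifies noise, which is why well-localized frames (with tight, well-separated bounds) make the numerical approximation both more accurate and more robust.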
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data, and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
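Of the three problems, the chi-squared method is the most directly algorithmic, and its basic principle can be sketched for the Gaussian case: after whitening by the noise level, choose the Tikhonov parameter so that the minimized regularized functional equals its expected value under the statistical model. In the sketch below that expected value is taken to be the number of data points, which corresponds to one particular set of modeling assumptions (zero prior mean, known noise variance); the nonnormal-data analysis in this dissertation modifies this picture through a possibly different scale factor.

```python
import numpy as np

def chi2_tikhonov_parameter(A, b, noise_std):
    """Chi-squared principle for the Tikhonov parameter (a sketch).

    After whitening by the noise standard deviation, choose lam so that the minimum of
    J(x) = ||A x - b||^2 + lam^2 ||x||^2 equals m, the number of data (its assumed
    expected value under the Gaussian data/prior model)."""
    At, bt = A / noise_std, b / noise_std
    U, s, _ = np.linalg.svd(At, full_matrices=False)
    beta = U.T @ bt
    resid_perp = bt @ bt - beta @ beta            # part of the data outside the range of A
    m = len(b)

    def J(lam):
        return np.sum(lam ** 2 * beta ** 2 / (s ** 2 + lam ** 2)) + resid_perp

    lo, hi = 1e-10, 1e10                          # J is monotone in lam, so bisect (in log scale)
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if J(mid) < m else (lo, mid)
    return np.sqrt(lo * hi)

# Toy ill-conditioned problem with Gaussian noise.
rng = np.random.default_rng(6)
A = rng.standard_normal((100, 40)) @ np.diag(1.0 / np.arange(1, 41) ** 2)
x_true = rng.standard_normal(40)
noise_std = 0.05
b = A @ x_true + noise_std * rng.standard_normal(100)
print("chi-squared lambda:", chi2_tikhonov_parameter(A, b, noise_std))
```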
This thesis considers the application of basis pursuit to several problems in system identification. After reviewing some key results in the theory of basis pursuit and compressed sensing, numerical experiments are presented that explore the application of basis pursuit to the black-box identification of linear time-invariant (LTI) systems with both finite (FIR) and infinite (IIR) impulse responses, temporal systems modeled by ordinary differential equations (ODE), and spatio-temporal systems modeled by partial differential equations (PDE). For LTI systems, the experimental results illustrate existing theory for identification of LTI FIR systems. It is seen that basis pursuit does not identify sparse LTI IIR systems, but it does identify alternate systems with nearly identical magnitude response characteristics when there are small numbers of non-zero coefficients. For ODE systems, the experimental results are consistent with earlier research for differential equations that are polynomials in the system variables, illustrating feasibility of the approach for small numbers of non-zero terms. For PDE systems, it is demonstrated that basis pursuit can be applied to system identification, along with a performance comparison with an existing method. In all cases the impact of measurement noise on identification performance is considered, and it is empirically observed that high signal-to-noise ratio is required for successful application of basis pursuit to system identification problems.
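A minimal version of the FIR experiment can be sketched as follows: excite an unknown sparse FIR filter with a random input, form the convolution (Toeplitz) matrix, and recover the impulse response by basis pursuit, here posed as a linear program. The filter length, sparsity level, and number of output samples are illustrative assumptions, and the noiseless setting sidesteps the signal-to-noise issues studied in the thesis.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import linprog

rng = np.random.default_rng(7)
n, m, k = 64, 40, 5                               # filter length, output samples, sparsity
h_true = np.zeros(n)
h_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

u = rng.standard_normal(m)                        # input excitation
Phi = toeplitz(u, np.zeros(n))                    # causal convolution matrix: y = Phi @ h
y = Phi @ h_true                                  # noiseless output measurements

# Basis pursuit  min ||h||_1  s.t.  Phi h = y,  posed as a linear program in (h, t).
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])   # |h_i| <= t_i
b_ub = np.zeros(2 * n)
A_eq = np.hstack([Phi, np.zeros((m, n))])
bounds = [(None, None)] * n + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
h_rec = res.x[:n]
print("max recovery error:", np.max(np.abs(h_rec - h_true)))
```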
Structural features of canonical wall-bounded turbulent flows are described using several techniques, including proper orthogonal decomposition (POD). The canonical wall-bounded turbulent flows of channels, pipes, and flat-plate boundary layers include physics important to a wide variety of practical fluid flows with a minimum of geometric complications. Yet significant questions remain about the form of their turbulent motions, how those motions organize to compose very long motions, and their relationship to vortical structures. POD extracts highly energetic structures from flow fields and is one tool to further understand the turbulence physics. A variety of direct numerical simulations provide velocity fields suitable for detailed analysis. Since POD modes require significant interpretation, this study begins with wall-normal, one-dimensional POD for a set of turbulent channel flows. Important features of the modes and their scaling are interpreted in light of flow physics, also leading to a method of synthesizing one-dimensional POD modes. Properties of a pipe flow simulation are then studied via several methods. The presence of very long streamwise motions is assessed using a number of statistical quantities, including energy spectra, which are compared to experiments. Further properties of energy spectra, including their relation to fictitious forces associated with mean Reynolds stress, are considered in depth. After reviewing salient features of turbulent structures previously observed in relevant experiments, structures in the pipe flow are examined in greater detail. A variety of methods reveal organization patterns of structures in instantaneous fields and their associated vortical structures. Properties of POD modes for a boundary layer flow are considered. Finally, very wide modes that occur when computing POD modes in all three canonical flows are compared. The results demonstrate that POD extracts structures relevant to characterizing wall-bounded turbulent flows. However, significant care is necessary in interpreting POD results, and the modes can be categorized according to their self-similarity. Additional analysis techniques reveal the organization of smaller motions in characteristic patterns to compose very long motions in pipe flows. The very large scale motions are observed to contribute large fractions of the turbulent kinetic energy and Reynolds stress. The associated vortical structures possess characteristics of hairpins, but are commonly distorted from pristine hairpin geometries.
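For reference, the core POD computation reduces to an SVD of the mean-subtracted snapshot matrix: each column holds one velocity-field snapshot, the left singular vectors are the spatial POD modes, and the squared singular values give the modal energies. The synthetic snapshot data below are an illustrative stand-in for the DNS fields analyzed in the dissertation.

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD via the SVD.

    snapshots : array of shape (n_points, n_snapshots), one flow field per column.
    Returns spatial modes (columns), modal energies, and temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                                   # fluctuating field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = s ** 2 / (X.shape[1] - 1)                     # energy captured by each mode
    coeffs = np.diag(s) @ Vt                               # temporal coefficients
    return U, energy, coeffs

# Synthetic example: two traveling-wave "structures" plus noise on a 1D grid.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 80)
rng = np.random.default_rng(8)
snaps = (np.outer(np.sin(3 * x), np.cos(2 * t))
         + 0.4 * np.outer(np.cos(7 * x), np.sin(5 * t))
         + 0.01 * rng.standard_normal((200, 80)))
modes, energy, a = pod_modes(snaps)
print("fraction of energy in first two modes:", energy[:2].sum() / energy.sum())
```

The interpretive difficulties discussed above arise after this step: the energetic ordering alone does not say how the extracted modes relate to physical structures, which is why the dissertation categorizes modes by their self-similarity and cross-checks them against instantaneous fields.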