Multiple-channel detection is considered in the context of a sensor network where data can be exchanged directly between sensor nodes that share a common edge in the network graph. Optimal statistical tests used for signal source detection with multiple noisy sensors, such as the Generalized Coherence (GC) estimate, use pairwise measurements from every pair of sensors in the network and are thus only applicable when the network graph is completely connected, or when data are accumulated at a common fusion center. This thesis presents and exploits a new method that uses maximum-entropy techniques to estimate measurements between pairs of sensors that are not in direct communication, thereby enabling the use of the GC estimate in incompletely connected sensor networks. The research culminates in a main conjecture, supported by statistical tests, regarding the topology of the incomplete network graphs.
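For reference, the GC estimate itself has a compact closed form: it is one minus the determinant of the normalized Gram matrix of the channel data. A minimal Python sketch (function and variable names are illustrative; the maximum-entropy completion of the missing pairwise entries described above is not shown):

import numpy as np

def gc_estimate(X):
    """Generalized coherence of M channels (rows of X, each of length N)."""
    G = X @ X.conj().T                      # pairwise inner products (Gram matrix)
    d = np.sqrt(np.real(np.diag(G)))
    C = G / np.outer(d, d)                  # C[i, j] = <x_i, x_j> / (|x_i| |x_j|)
    return 1.0 - np.real(np.linalg.det(C))  # near 0 for independent noise, near 1 for a common signal

# example: three noisy copies of a common signal
rng = np.random.default_rng(0)
s = rng.standard_normal(256)
X = np.vstack([s + 0.5 * rng.standard_normal(256) for _ in range(3)])
print(gc_estimate(X))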
The solution of the linear system of equations $Ax\approx b$ arising from the discretization of an ill-posed integral equation with a square integrable kernel is considered. The solution by means of Tikhonov regularization, in which $x$ is found as the minimizer of $J(x)= \|Ax -b\|_2^2 + \lambda^2 \|L x\|_2^2$, introduces the unknown regularization parameter $\lambda$, which trades off the fidelity of the data fit against the smoothing norm of the solution, the latter determined by the choice of $L$. The Generalized Discrepancy Principle (GDP) and the Unbiased Predictive Risk Estimator (UPRE) are methods for finding $\lambda$ given prior conditions on the noise in the measurements $b$. Here we consider the case $L=I$, and hence use the relationship between the singular value expansion and the singular value decomposition for square integrable kernels to prove that the GDP and UPRE estimates yield a convergent sequence for $\lambda$ with increasing problem size. Hence the estimate of $\lambda$ for a large problem may be found by down-sampling to a smaller problem, or to a set of smaller problems, and applying these estimators more efficiently on the smaller problems. In consequence, the large-scale problem can be solved in a single step with the parameter found from the down-sampled problem(s).
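As a concrete illustration of the UPRE estimator in the $L=I$ case, the SVD gives an inexpensive closed form for the predictive risk that can be minimized over a grid of candidate $\lambda$ values. A minimal Python sketch, assuming white noise of known variance sigma2 (names are illustrative):

import numpy as np

def upre_lambda(A, b, sigma2, lambdas):
    """Pick lambda minimizing the UPRE functional for min ||Ax-b||^2 + lambda^2 ||x||^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res0 = b @ b - beta @ beta              # residual component outside the range of A
    m = len(b)
    vals = []
    for lam in lambdas:
        phi = s**2 / (s**2 + lam**2)        # Tikhonov filter factors
        r2 = res0 + np.sum(((1 - phi) * beta) ** 2)   # ||A x_lam - b||^2
        vals.append(r2 + 2 * sigma2 * np.sum(phi) - m * sigma2)
    return lambdas[int(np.argmin(vals))]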
Deconvolution of noisy data is an ill-posed problem and requires some form of regularization to stabilize its solution. Tikhonov regularization is the most common method used, but it depends on the choice of a regularization parameter $\lambda$ which must generally be estimated using one of several common methods. These methods can be computationally intensive, so I consider their behavior when only a portion of the sampled data is used. I show that the results of these methods converge as the sampling resolution increases, and use this to suggest a method of downsampling to estimate $\lambda$. I then present numerical results showing that this method is feasible, and propose future avenues of inquiry.
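A sketch of the downsampling workflow, here with GCV as the parameter-choice method (the decimation A[::k, ::k], b[::k] is a crude stand-in for the actual downsampling scheme, and all names are illustrative):

import numpy as np

def gcv_lambda(A, b, lambdas):
    """Pick lambda minimizing the GCV functional ||r||^2 / trace(I - H)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res0 = b @ b - beta @ beta
    m = len(b)
    scores = []
    for lam in lambdas:
        f = lam**2 / (s**2 + lam**2)        # 1 - filter factors
        scores.append((res0 + np.sum((f * beta) ** 2)) / (m - np.sum(1 - f)) ** 2)
    return lambdas[int(np.argmin(scores))]

def solve_tikhonov(A, b, lam):
    """Solve min ||Ax-b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# estimate lambda cheaply on a decimated problem, then solve the full one once:
# lam = gcv_lambda(A[::k, ::k], b[::k], lambdas)
# x = solve_tikhonov(A, b, lam)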
Tikhonov regularization for projected solutions of large-scale ill-posed problems is considered. The Golub-Kahan iterative bidiagonalization is used to project the problem onto a subspace, and regularization is then applied to find a subspace approximation to the full problem. Determination of the regularization parameter for the projected problem by unbiased predictive risk estimation, generalized cross validation, and discrepancy principle techniques is investigated. It is shown that the regularization parameter obtained by the unbiased predictive risk estimator can provide a good estimate which can be used for a full problem that is moderately to severely ill-posed. A similar analysis provides the weight parameter for the weighted generalized cross validation such that the approach is also useful in these cases, and also explains why the generalized cross validation without weighting is not always useful. All results are independent of whether systems are over- or underdetermined. Numerical simulations for standard one-dimensional test problems and two-dimensional data, for both image restoration and tomographic image reconstruction, support the analysis and validate the techniques. The size of the projected problem is found using an extension of a noise-revealing function for the projected problem [I. Hnětynková, M. Plešinger, and Z. Strakoš, BIT Numer. Math., 49 (2009), pp. 669-696]. Furthermore, an iteratively reweighted regularization approach for edge-preserving regularization is extended to projected systems, providing stabilization of the solutions of the projected systems and reducing dependence on the determination of the size of the projected subspace.
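A sketch of the projection step: a few Golub-Kahan bidiagonalization steps with reorthogonalization, followed by Tikhonov regularization of the small projected problem (choosing lambda by UPRE and the subspace size k by the noise-revealing function, as described above, is not shown; names are illustrative):

import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A started from b."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k)); B = np.zeros((k + 1, k))
    beta0 = np.linalg.norm(b); U[:, 0] = b / beta0
    for j in range(k):
        v = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        v -= V[:, :j] @ (V[:, :j].T @ v)     # reorthogonalize against earlier V
        B[j, j] = np.linalg.norm(v); V[:, j] = v / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        u -= U[:, :j + 1] @ (U[:, :j + 1].T @ u)
        B[j + 1, j] = np.linalg.norm(u); U[:, j + 1] = u / B[j + 1, j]
    return B, V, beta0

def projected_tikhonov(A, b, k, lam):
    """Regularize the small (k+1) x k projected problem, then map back."""
    B, V, beta0 = golub_kahan(A, b, k)
    rhs = np.zeros(k + 1); rhs[0] = beta0
    M = np.vstack([B, lam * np.eye(k)])      # stacked Tikhonov system
    y, *_ = np.linalg.lstsq(M, np.concatenate([rhs, np.zeros(k)]), rcond=None)
    return V @ y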
The properties of divergence-free vector field interpolants are explored on uniform and scattered nodes, together with their application to fluid flow problems. These interpolants may be applied to physical problems that require the approximant to have zero divergence, such as the velocity field in the incompressible Navier-Stokes equations and the magnetic and electric fields in Maxwell's equations. In addition, the methods studied here are meshfree, and are suitable for problems defined on complex domains, where mesh generation is computationally expensive or inaccurate, or for problems where the data are only available at scattered locations.
The contributions of this work include a detailed comparison between standard and divergence-free radial basis approximations, a study of the Lebesgue constants for divergence-free approximations and their dependence on node placement, and an investigation of the flat limit of divergence-free interpolants. Finally, numerical solvers for the incompressible Navier-Stokes equations in primitive variables are implemented using discretizations based on traditional and divergence-free kernels. The numerical results are compared to reference solutions obtained with a spectral method.
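To make the kernel construction concrete: a matrix-valued divergence-free kernel can be built from a scalar RBF phi as (-Laplacian I + grad grad^T) phi, so every column of the kernel, and hence every interpolant, is divergence-free by construction. A 2-D sketch with a Gaussian basis (the shape parameter eps and all names are illustrative, and this is one standard construction, not necessarily the thesis's exact setup):

import numpy as np

def divfree_kernel(x, eps):
    """2-D divergence-free kernel (-Lap*I + grad grad^T) applied to exp(-eps^2 r^2)."""
    r2 = x @ x
    phi = np.exp(-eps**2 * r2)
    return phi * ((2 * eps**2 - 4 * eps**4 * r2) * np.eye(2) + 4 * eps**4 * np.outer(x, x))

def divfree_interpolant(nodes, values, eps):
    """Interpolate a divergence-free field sampled as values (N, 2) at nodes (N, 2)."""
    N = len(nodes)
    K = np.zeros((2 * N, 2 * N))
    for i in range(N):
        for j in range(N):
            K[2*i:2*i+2, 2*j:2*j+2] = divfree_kernel(nodes[i] - nodes[j], eps)
    coef = np.linalg.solve(K, values.ravel())
    def evaluate(x):
        return sum(divfree_kernel(x - nodes[j], eps) @ coef[2*j:2*j+2] for j in range(N))
    return evaluate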
Modern measurement schemes for linear dynamical systems are typically designed so that different sensors can be scheduled to be used at each time step. To determine which sensors to use, various metrics have been suggested. One possible such metric is the observability of the system. Observability is a binary condition determining whether a finite number of measurements suffice to recover the initial state. However, to employ observability for sensor scheduling, the binary definition needs to be expanded so that one can measure how observable a system is with a particular measurement scheme, i.e. one needs a metric of observability. Most methods utilizing an observability metric address sensor selection rather than sensor scheduling. In this dissertation we present a new approach that utilizes observability for sensor scheduling, employing the condition number of the observability matrix as the metric and using column subset selection to create an algorithm that chooses which sensors to use at each time step. To this end we use a rank-revealing QR factorization algorithm to select sensors. Several numerical experiments demonstrate the performance of the proposed scheme.
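A sketch of the selection step under these assumptions: rows of the observability matrix correspond to (sensor, time) pairs, so choosing well-conditioned rows is column subset selection on its transpose, which a pivoted (rank-revealing) QR performs greedily. The mapping from pivots back to sensor indices below is a plausible illustration, not the dissertation's exact algorithm:

import numpy as np
from scipy.linalg import qr

def select_sensors(A, C, horizon, n_select):
    """Greedy sensor choice via pivoted QR on the observability matrix.
    A: (n, n) dynamics, C: (p, n) stacked sensor rows."""
    blocks, Ak = [], np.eye(A.shape[0])
    for _ in range(horizon):
        blocks.append(C @ Ak)               # one row per sensor at this time step
        Ak = Ak @ A
    O = np.vstack(blocks)
    _, _, piv = qr(O.T, mode='economic', pivoting=True)   # column pivots = row choices
    chosen = []
    for idx in piv:                          # row idx of O comes from sensor idx % p
        s = idx % C.shape[0]
        if s not in chosen:
            chosen.append(s)
            if len(chosen) == n_select:
                break
    return chosen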
The $\chi^2$ principle and the unbiased predictive risk estimator are used to determine optimal regularization parameters in the context of 3-D focusing gravity inversion with the minimum support stabilizer. At each iteration of the focusing inversion the minimum support stabilizer is determined and the fidelity term is then updated using the standard-form transformation. The solution of the resulting Tikhonov functional is found efficiently using the singular value decomposition of the transformed model matrix, which also provides for efficient determination of the updated regularization parameter at each step. Experimental 3-D simulations using synthetic data of a dipping dike and a cube anomaly demonstrate that both parameter estimation techniques outperform the Morozov discrepancy principle for determining the regularization parameter. Smaller relative errors of the reconstructed models are obtained with fewer iterations. Data acquired over the Gotvand dam site in south-west Iran are used to validate the use of the methods for inversion of practical data, and provide good estimates of anomalous structures within the subsurface.
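For orientation, one focusing iteration has this shape: form the minimum-support weights from the current model, transform to standard form, solve by SVD, and map back. A sketch with a fixed lambda and focusing parameter beta (both illustrative; the abstract instead updates lambda at each step by UPRE or the $\chi^2$ principle):

import numpy as np

def focusing_inversion(A, d, lam, beta=1e-2, iters=10):
    """Iteratively reweighted Tikhonov with a minimum-support stabilizer (sketch)."""
    m = np.zeros(A.shape[1])
    for _ in range(iters):
        w = np.sqrt(m**2 + beta**2)          # inverse of the minimum-support weights
        Aw = A * w                           # standard-form transform: columns scaled by w
        U, s, Vt = np.linalg.svd(Aw, full_matrices=False)
        z = Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ d))   # Tikhonov solution in z
        m = w * z                            # back-transform to the model
    return m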
The inverse problem associated with electrochemical impedance spectroscopy requiring the solution of a Fredholm integral equation of the first kind is considered. If the underlying physical model is not clearly determined, the inverse problem needs to be solved using a regularized linear least squares problem that is obtained from the discretization of the integral equation. For this system, it is shown that the model error can be made negligible by a change of variables and by extending the effective range of quadrature. This change of variables serves as a right preconditioner that significantly improves the condition of the system. Still, to obtain feasible solutions the additional constraint of non-negativity is required. Simulations with artificial, but realistic, data demonstrate that the use of non-negatively constrained least squares with a smoothing norm provides higher quality solutions than those obtained without the non-negative constraint. Using higher-order smoothing norms also reduces the error in the solutions. The L-curve and residual periodogram parameter choice criteria, which are used for parameter choice with regularized linear least squares, are successfully adapted to be used for the non-negatively constrained Tikhonov least squares problem. Although these results have been verified within the context of the analysis of electrochemical impedance spectroscopy, there is no reason to suppose that they would not be relevant within the broader framework of solving Fredholm integral equations for other applications.
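The non-negatively constrained Tikhonov problem reduces to a standard NNLS problem by stacking the fidelity and smoothing terms, which is how the sketch below solves it (the second-difference operator stands in for one choice of higher-order smoothing norm; names are illustrative):

import numpy as np
from scipy.optimize import nnls

def diff2(n):
    """Second-order difference operator, a common higher-order smoothing norm."""
    return np.diff(np.eye(n), n=2, axis=0)

def nn_tikhonov(A, b, L, lam):
    """min ||Ax - b||^2 + lam^2 ||Lx||^2 subject to x >= 0, via stacked NNLS."""
    M = np.vstack([A, lam * L])
    rhs = np.concatenate([b, np.zeros(L.shape[0])])
    x, _ = nnls(M, rhs)
    return x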
This dissertation involves three problems that are all related by the use of the singular value decomposition (SVD) or generalized singular value decomposition (GSVD). The specific problems are (i) derivation of a generalized singular value expansion (GSVE), (ii) analysis of the properties of the chi-squared method for regularization parameter selection in the case of nonnormal data and (iii) formulation of a partial canonical correlation concept for continuous time stochastic processes. The finite dimensional SVD has an infinite dimensional generalization to compact operators. However, the form of the finite dimensional GSVD developed in, e.g., Van Loan does not extend directly to infinite dimensions as a result of a key step in the proof that is specific to the matrix case. Thus, the first problem of interest is to find an infinite dimensional version of the GSVD. One such GSVE for compact operators on separable Hilbert spaces is developed. The second problem concerns regularization parameter estimation. The chi-squared method for nonnormal data is considered. A form of the optimized regularization criterion that pertains to measured data or signals with nonnormal noise is derived. Large sample theory for phi-mixing processes is used to derive a central limit theorem for the chi-squared criterion that holds under certain conditions. Departures from normality are seen to manifest in the need for a possibly different scale factor in normalization rather than what would be used under the assumption of normality. The consequences of our large sample work are illustrated by empirical experiments. For the third problem, a new approach is examined for studying the relationships between a collection of functional random variables. The idea is based on the work of Sunder that provides mappings to connect the elements of algebraic and orthogonal direct sums of subspaces in a Hilbert space. When combined with a key isometry associated with a particular Hilbert space indexed stochastic process, this leads to a useful formulation for situations that involve the study of several second order processes. In particular, using our approach with two processes provides an independent derivation of the functional canonical correlation analysis (CCA) results of Eubank and Hsing. For more than two processes, a rigorous derivation of the functional partial canonical correlation analysis (PCCA) concept that applies to both finite and infinite dimensional settings is obtained.
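To illustrate the parameter-selection piece in its simplest setting: with whitened normal noise and $L=I$, the $\chi^2$ principle chooses lambda so that the minimized Tikhonov functional equals its expected $\chi^2$ value, and the nonnormal analysis above changes only the scale of that target. A Python sketch (the degrees-of-freedom target and the root bracket are assumptions; names are illustrative):

import numpy as np
from scipy.optimize import brentq

def chi2_lambda(A, b):
    """Root-find lambda so that J(x_lambda) = m, its chi^2 expectation for L = I."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    res0 = b @ b - beta @ beta
    # J(lambda) = ||A x - b||^2 + lambda^2 ||x||^2 collapses to the closed form below
    J = lambda lam: res0 + np.sum(beta**2 * lam**2 / (s**2 + lam**2)) - len(b)
    return brentq(J, 1e-10, 1e10)            # assumes J changes sign on this bracket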