Energy and quality-aware multimedia signal processing

Description
Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture-level and algorithm-level techniques that reduce the energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating at scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low-frequency subband coefficients and smaller values for high-frequency subband coefficients. Next, we present the use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique that exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range on datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective in reducing the noise power in applications that are dominated by additions and multiplications, such as FIR filtering and transform computation. We also present a novel sum of absolute differences (SAD) scheme based on most-significant-bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are ultimately selected. Such a scheme is highly effective in reducing the energy consumption of the motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combinations of voltage scaling, computation reduction, and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for the discrete cosine transform shows, on average, a 33% to 46% reduction in energy consumption while incurring only 0.5 dB to 1.5 dB loss in PSNR.
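To make the SAD idea concrete, the sketch below implements one plausible software reading of MSB truncation: the high-order bits of each absolute difference are dropped, realized here as saturation to a small number of bits. The `keep_bits` parameter and the block sizes are illustrative assumptions, not the dissertation's hardware design.

```python
import numpy as np

def sad_full(block_a, block_b):
    """Reference sum of absolute differences between two image blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def sad_msb_truncated(block_a, block_b, keep_bits=4):
    """Approximate SAD: saturate each absolute difference (AD) to `keep_bits`
    bits. Most ADs are small and survive unchanged; the rare large ADs
    saturate, which seldom changes which candidate block wins the match."""
    ad = np.abs(block_a.astype(np.int32) - block_b.astype(np.int32))
    ceiling = (1 << keep_bits) - 1            # e.g. 15 when keep_bits = 4
    return int(np.minimum(ad, ceiling).sum())

# Illustrative motion-search comparison on random 8x8 blocks.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (8, 8))
candidates = [np.clip(ref + rng.integers(-32, 33, (8, 8)), 0, 255)
              for _ in range(16)]
best_full = min(range(16), key=lambda i: sad_full(ref, candidates[i]))
best_trunc = min(range(16), key=lambda i: sad_msb_truncated(ref, candidates[i]))
print(best_full, best_trunc)   # the two rankings usually agree at the top
```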
Date Created
2012

Camera calibration using adaptive segmentation and ellipse fitting for localizing control points

Description
There is growing interest in improved high-accuracy camera calibration methods due to the increasing demand for 3D visual media in commercial markets. Camera calibration is used widely in the fields of computer vision, robotics, and 3D reconstruction. It is the first step in extracting 3D data from a 2D image, and it plays a crucial role in computer vision and 3D reconstruction because the accuracy of the reconstruction and of the 3D coordinate determination relies to a great extent on the accuracy of the calibration. This thesis presents a novel camera calibration method using a circular calibration pattern. The disadvantages and issues of existing state-of-the-art methods are discussed and are overcome in this work. The implemented system combines techniques of local adaptive segmentation, ellipse fitting, projection, and optimization. Simulation results are presented to illustrate the performance of the proposed scheme. These results show that the proposed method reduces the error compared to the state of the art for high-resolution images, and that the proposed scheme is more robust to blur in the imaged calibration pattern.
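To make the localization pipeline concrete, here is a minimal sketch of the first two stages, local adaptive segmentation followed by ellipse fitting, written against OpenCV. The block size, threshold offset, and area filter are illustrative assumptions, and the thesis's subsequent projection and optimization stages are not reproduced.

```python
import cv2
import numpy as np

def find_control_points(gray, min_area=50.0):
    """Locate candidate control points of a circular calibration pattern:
    segment the image locally, then fit an ellipse to each blob's contour."""
    # Local adaptive segmentation: threshold each pixel against a
    # Gaussian-weighted neighborhood so uneven illumination does not
    # break the imaged pattern apart.
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 5)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    centers = []
    for c in contours:
        # cv2.fitEllipse needs at least five contour points.
        if len(c) >= 5 and cv2.contourArea(c) >= min_area:
            (cx, cy), _axes, _angle = cv2.fitEllipse(c)
            centers.append((cx, cy))
    return np.array(centers, dtype=np.float32)
```

One reason the later projection and optimization stages matter: under perspective projection, the center of the fitted ellipse is a biased estimate of the projected circle center, so the raw ellipse centers alone do not achieve the highest calibration accuracy.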
Date Created
2012

A new camera calibration accuracy standard for three-dimensional image reconstruction using Monte Carlo simulations

Description
Camera calibration has applications in the fields of robotic motion, geographic mapping, semiconductor defect characterization, and many more. This thesis considers camera calibration for the purpose of high-accuracy three-dimensional reconstruction when characterizing ball grid arrays within the semiconductor industry. Bouguet's calibration method is applied under a fixed set of criteria in order to study its performance against newly proposed standards. The performance of a camera calibration method is currently measured using standards such as pixel error and computational time. This thesis proposes the use of the standard deviation of the intrinsic parameter estimates within a Monte Carlo simulation as a new performance measure. It specifically shows that the standard deviation decreases as the number of images input into the calibration routine increases. It is also shown that the default thresholds of the calibration method's non-linear maximum likelihood estimation must be changed in order to improve computational time; however, the accuracy lost is negligible even for high-accuracy requirements such as ball grid array characterization.
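The proposed performance measure can be illustrated with a small Monte Carlo experiment: calibrate repeatedly on noisy synthetic views of a planar target and report the standard deviation of an intrinsic parameter. The sketch below uses OpenCV's `calibrateCamera` (a Bouguet-style routine) with made-up camera and noise parameters; it shows the shape of the experiment, not the thesis's exact setup.

```python
import numpy as np
import cv2

# Hypothetical ground-truth intrinsics and a planar 9x6 target, 3 cm pitch.
K_true = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
board = np.zeros((9 * 6, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.03

def synth_view(rng):
    """Project the board through a random pose and add pixel noise."""
    rvec = rng.uniform(-0.3, 0.3, 3)
    tvec = np.array([rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1),
                     rng.uniform(0.5, 1.0)])
    pts, _ = cv2.projectPoints(board, rvec, tvec, K_true, np.zeros(5))
    pts = pts.reshape(-1, 2) + rng.normal(0, 0.2, (board.shape[0], 2))
    return pts.reshape(-1, 1, 2).astype(np.float32)

rng = np.random.default_rng(0)
fx_estimates = []
for trial in range(50):                           # Monte Carlo trials
    views = [synth_view(rng) for _ in range(10)]  # images per calibration run
    _, K_est, _, _, _ = cv2.calibrateCamera([board] * len(views), views,
                                            (640, 480), None, None)
    fx_estimates.append(K_est[0, 0])

# The proposed metric: spread of the intrinsic estimate across trials.
print("std(fx) over Monte Carlo trials:", np.std(fx_estimates))
```

Rerunning with more views per calibration run should shrink this standard deviation, which is the trend the thesis reports.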
Date Created
2012

Underwater optical sensorbot for in situ pH monitoring

Description
Continuous underwater observation is a challenging engineering task that could be accomplished by the development and deployment of a sensor array that can survive harsh underwater conditions. One approach to this challenge is a swarm of micro underwater robots, known as Sensorbots, that are equipped with biogeochemical sensors and can relay information among themselves in real time. This innovative method for underwater exploration can contribute to a more comprehensive understanding of the ocean by not limiting sampling to a single point and time. In this thesis, Sensorbot Beta, a low-cost, fully enclosed Sensorbot prototype for bench-top characterization and short-term field testing, is presented in a modular format that provides flexibility and the potential for rapid design. Sensorbot Beta is designed around a microcontroller-driven platform composed of commercial off-the-shelf components for all hardware, reducing cost and development time. The primary sensor incorporated into Sensorbot Beta is an in situ fluorescent pH sensor. Design considerations have been made for easy adoption of other fluorescent or phosphorescent sensors, such as dissolved oxygen or temperature, and the optical components are designed in a format that enables such additional sensors. A real-time data acquisition system, using Bluetooth, allows for characterization of the sensor in bench-top experiments. Sensorbot Beta demonstrates rapid calibration, and future work will include deployment for large-scale experiments in a lake or ocean.
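Since the abstract does not give the pH sensor's transfer function, the sketch below only illustrates the shape of a rapid bench-top calibration: fit a simple model mapping a fluorescence intensity ratio to pH from known buffers, then invert it for field readings. The buffer values, the intensity ratios, and the linear model are all hypothetical; real fluorescent indicators typically follow a sigmoidal response and would need a matching model.

```python
import numpy as np

# Hypothetical bench-top calibration set: known buffer pH values and the
# fluorescence intensity ratios the optical sensor reported for them.
buffer_ph = np.array([6.5, 7.0, 7.5, 8.0, 8.5])
intensity_ratio = np.array([0.42, 0.55, 0.68, 0.80, 0.91])

# Over a narrow pH range a linear fit is a common first approximation.
slope, intercept = np.polyfit(intensity_ratio, buffer_ph, 1)

def ratio_to_ph(r):
    """Convert a measured intensity ratio into an estimated pH."""
    return slope * r + intercept

print(round(ratio_to_ph(0.60), 2))   # about 7.2 for this illustrative data
```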
Date Created
2012

Multi-user diversity systems with application to cognitive radio

Description
This thesis investigates the capacity and bit error rate (BER) performance of multi-user diversity systems with a random number of users and considers their application to cognitive radio systems. Ergodic capacity, normalized capacity, outage capacity, and average BER metrics are studied. It is found that randomizing the number of users reduces the ergodic capacity. A stochastic ordering framework is adopted to order user distributions, for example by Laplace transform ordering; the ergodic capacities under different user distributions then follow the corresponding Laplace transform order. The scaling law of the ergodic capacity with the mean number of users is studied for Poisson and negative binomial user distributions with a large mean number of users, and these two random distributions are ordered in the Laplace transform sense. The ergodic capacity per user is defined and is shown to increase when the total number of users is randomized, which is the opposite of the behavior of the unnormalized ergodic capacity metric. Outage probability under slow fading is also considered and shown to decrease when the total number of users is randomized. The BER in a general multi-user diversity system has a completely monotonic derivative, which implies, by Jensen's inequality, that randomizing the total number of users degrades the average BER performance. The special case of a Poisson number of users and Rayleigh fading is studied; combining this with results from regular variation theory, the average BER is shown to achieve tightness in Jensen's inequality. This is followed by an extension to a negative binomial number of users, for which the BER is derived and shown to be decreasing in the number of users. A cognitive radio system with a single primary user and multi-user diversity at the secondary users is then proposed. Compared to the general multi-user diversity system, there exists an interference constraint between the secondary and primary users, which is independent of the secondary users' transmissions. The secondary user with the highest transmitted SNR that also satisfies the interference constraint is selected to communicate, so the number of active secondary users is a binomial random variable. The scaling law of the ergodic capacity with the mean number of users and a closed-form expression for the average BER are derived for this setting, and the ergodic capacity under the binomial user distribution is shown to outperform the Poisson case. Monte Carlo simulations supplement the analytical results and compare the performance of different user distributions.
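The capacity-reduction claim is easy to reproduce numerically. The sketch below is a minimal Monte Carlo comparison, under assumed parameters (Rayleigh fading, 10 dB mean SNR, mean of 20 users), of the ergodic capacity with a fixed versus a Poisson-distributed number of users, where the scheduler always serves the user with the best channel.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_users, snr_db, trials = 20, 10.0, 100_000
snr = 10 ** (snr_db / 10)

def capacity(n_users):
    """Multi-user diversity: serve the user with the largest Rayleigh gain."""
    if n_users == 0:
        return 0.0                               # nobody to schedule
    gains = rng.exponential(1.0, n_users)        # |h|^2 under Rayleigh fading
    return np.log2(1 + snr * gains.max())

fixed = np.mean([capacity(mean_users) for _ in range(trials)])
poisson = np.mean([capacity(rng.poisson(mean_users)) for _ in range(trials)])
print(f"fixed N = {mean_users}: {fixed:.3f} b/s/Hz")
print(f"Poisson, mean {mean_users}: {poisson:.3f} b/s/Hz")  # slightly lower
```

The Poisson case comes out slightly below the fixed case, consistent with the ordering result above; swapping in a negative binomial draw for the user count is a one-line change for comparing the two random distributions.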
Date Created
2012

Integrated waveform-agile multi-modal track-before-detect algorithms for tracking low observable targets

Description
In this thesis, an integrated waveform-agile multi-modal track-before-detect sensing system is investigated and its performance is evaluated on an experimental platform. Adapting asymmetric multi-modal sensing platforms that use radio frequency (RF) radar and electro-optical (EO) sensors allows complementary information from the different sensors to be integrated. However, there are many challenges to overcome, including tracking low signal-to-noise ratio (SNR) targets, choosing waveform configurations that optimize tracking performance, and handling statistically dependent measurements. To address some of these challenges, a particle filter (PF) based recursive waveform-agile track-before-detect (TBD) algorithm is developed to avoid the information loss caused by conventional detection in low-SNR environments. Furthermore, a waveform-agile selection technique is integrated into the PF-TBD to allow for adaptive waveform configurations. The embedded exponential family (EEF) approach is used to approximate the distributions of the parameters of the dependent RF and EO measurements and to further improve the target detection rate and tracking performance. The performance of the integrated algorithm is evaluated using real data from three experimental scenarios.
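As background for the PF-TBD component, here is a deliberately stripped-down, single-modality sketch: a particle filter that tracks a weak target across a 1-D array of range cells by weighting particles with the likelihood ratio of the raw intensity map rather than thresholded detections. All parameters (noise levels, target amplitude, the velocity prior) are assumptions for illustration; the waveform-agile selection and the EEF handling of dependent RF/EO measurements are not modeled.

```python
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_steps, n_cells = 500, 30, 50
sigma_v, sigma_w, amp = 0.05, 1.0, 1.5  # process noise, sensor noise, target amplitude

# True target: constant-velocity motion across the range cells (low SNR).
x_true = 5.0 + 0.8 * np.arange(n_steps)

# Particles carry [position, velocity]; a loose prior around the true motion.
particles = np.column_stack([rng.uniform(0, n_cells, n_particles),
                             rng.normal(0.8, 0.3, n_particles)])
weights = np.full(n_particles, 1.0 / n_particles)

for k in range(n_steps):
    # Raw measurement map: noise in every cell, target energy in one cell.
    z = rng.normal(0, sigma_w, n_cells)
    z[int(x_true[k]) % n_cells] += amp

    # Predict with a constant-velocity model plus jitter.
    particles[:, 0] += particles[:, 1] + rng.normal(0, sigma_v, n_particles)
    cells = np.clip(particles[:, 0].astype(int), 0, n_cells - 1)

    # TBD weighting: Gaussian log-likelihood ratio of the raw cell value
    # under target-present versus noise-only hypotheses.
    llr = (2.0 * z[cells] * amp - amp ** 2) / (2.0 * sigma_w ** 2)
    weights *= np.exp(llr)
    weights /= weights.sum()

    # Multinomial resampling once the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print("estimate:", round(float(weights @ particles[:, 0]), 2),
      "truth:", x_true[-1])
```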
Date Created
2012

Damage detection in blade-stiffened anisotropic composite panels using Lamb wave mode conversions

Description
Composite materials are increasingly being used in aircraft, automobiles, and other applications due to their high strength-to-weight and stiffness-to-weight ratios. However, the presence of damage, such as delamination or matrix cracks, can significantly compromise the performance of these materials and result in premature failure. Structural components are often manually inspected to detect the presence of damage. This approach, known as schedule-based maintenance, is expensive, time-consuming, and often limited to easily accessible structural elements. Therefore, there is an increased demand for robust and efficient structural health monitoring (SHM) techniques that can be used for condition-based monitoring, in which structural components are inspected based on damage metrics as opposed to flight hours. SHM relies on in situ frameworks for detecting early signs of damage in exposed and unexposed structural elements, not only reducing the number of schedule-based inspections but also providing better useful-life estimates. SHM frameworks require the development of different sensing technologies, algorithms, and procedures to detect, localize, quantify, and characterize damage, as well as to assess overall damage in aerospace structures, so that reliable estimates of the remaining useful life can be made. The use of piezoelectric transducers along with guided Lamb waves has received considerable attention due to the weight, cost, and functionality of systems based on these elements. The research in this thesis investigates the ability of Lamb waves to detect damage in feature-dense anisotropic composite panels. Most current research suppresses the effects of experimental variability by performing tests on structurally simple isotropic plates that serve as baseline and damaged specimens. In actual applications, however, variability cannot be avoided, and there is therefore a need to study the effects of complex sample geometries, environmental operating conditions, and variability in material properties. This research is based on experiments conducted on a single blade-stiffened anisotropic composite panel, localizing delamination damage caused by impact. The overall goal was to use a correlative approach in which only the damage feature produced by the delamination serves as the damage index. This approach was adopted because it offers a simple way to determine the existence and location of damage without conducting a more complex wave propagation analysis or accounting for the geometric complexities of the test specimen. Results showed that even in a complex structure, if the damage feature can be extracted and measured, then an appropriate damage index can be associated with it and the location of the damage can be inferred using a dense sensor array. The second experiment in this research studies the effects of temperature on damage detection when one test specimen provides the benchmark data set and another provides the damage data. This extends the previous experiment to explore not only the effects of variable temperature but also the effects of high experimental variability. Results from this work show that the damage feature in the data is extractable even at higher temperatures, and that data from one panel at one temperature can be directly compared to another panel at another temperature for baseline comparison, owing to the linearity of the collected data.
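The correlative approach lends itself to a very small implementation. The sketch below computes one plausible form of the damage index, one minus the normalized correlation coefficient between a baseline and a test Lamb-wave signal; the signals here are synthetic, and the thesis's exact feature extraction and indexing over the dense sensor array are not reproduced.

```python
import numpy as np

def damage_index(baseline, test):
    """1 - correlation coefficient between baseline and test signals.
    Near 0: the actuator-sensor path is unchanged; toward 1: the wave
    path has changed, e.g. by a delamination scattering the Lamb wave."""
    b = (baseline - baseline.mean()) / baseline.std()
    t = (test - test.mean()) / test.std()
    return 1.0 - float(np.dot(b, t)) / len(b)

# Synthetic example: a Hanning-windowed toneburst, and a "damaged" path
# that adds a delayed, attenuated scattered copy of the burst.
fs, f0 = 1e6, 100e3                      # sample rate, excitation frequency
time = np.arange(0, 2e-4, 1 / fs)
burst = np.sin(2 * np.pi * f0 * time) * np.hanning(time.size)
damaged = burst + 0.3 * np.roll(burst, 40)

print(damage_index(burst, burst))        # ~0: healthy path
print(damage_index(burst, damaged))      # clearly > 0: damaged path
```

In a dense array, one common way to infer location is to evaluate this index over every actuator-sensor pair and attribute high values to paths that cross the damage, which is consistent with the correlative localization described above.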
Date Created
2012

Scheduling neural sensors to estimate brain activity

Description
Research on developing new algorithms to improve information on brain functionality and structure is ongoing. Studying neural activity through dipole source localization with electroencephalography (EEG) and magnetoencephalography (MEG) sensor measurements can support the diagnosis and treatment of a brain disorder and can also identify the area of the brain from which the disorder originated. Designing advanced localization algorithms that can adapt to environmental changes represents a significant shift from manual diagnosis, which is based on the knowledge and observation of the doctor, to adaptive and improved brain disorder diagnosis, as these algorithms can track activity that might not be noticed by the human eye. An important consideration for these localization algorithms, however, is to minimize the overall power consumption in order to improve the study and treatment of brain disorders. This thesis considers the problem of estimating the dynamic parameters of neural dipole sources while minimizing the system's overall power consumption; this is achieved by minimizing the number of EEG/MEG measurement sensors without a loss in estimation accuracy. Since the EEG/MEG measurement models are related non-linearly to the dipole source locations and moments, these dynamic parameters can be estimated using sequential Monte Carlo methods such as particle filtering. Because a large number of sensors is required to record the EEG/MEG measurements used by the particle filter over long recording periods, large amounts of power are required for storage and transmission. In order to reduce the overall power consumption, two methods are proposed. The first method uses the predicted mean-squared estimation error as the performance metric under a maximum power consumption constraint. The second method's performance metric uses the distance between the sensor locations and the dipole source location estimate at the previous time step; this sensor scheduling scheme maximizes the overall signal-to-noise ratio. The performance of both methods is demonstrated using simulated data, and both methods show that good estimation results can be obtained with a significant reduction in the number of activated sensors at each time step.
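The second, distance-based scheduling method reduces to a few lines once the previous dipole estimate is available. The sketch below is a minimal version: keep only the k sensors nearest the last location estimate, following the abstract's rationale that nearby sensors dominate the signal-to-noise ratio; the sensor layout and k are illustrative.

```python
import numpy as np

def schedule_sensors(sensor_xyz, dipole_prev, k_active):
    """Activate the k_active sensors closest to the previous dipole
    location estimate; all remaining sensors stay powered down."""
    dist = np.linalg.norm(sensor_xyz - dipole_prev, axis=1)
    return np.argsort(dist)[:k_active]

# Illustrative layout: 64 sensors scattered on a unit "head" sphere.
rng = np.random.default_rng(3)
sensors = rng.normal(0.0, 1.0, (64, 3))
sensors /= np.linalg.norm(sensors, axis=1, keepdims=True)

prev_estimate = np.array([0.2, 0.1, 0.7])     # from the particle filter
active = schedule_sensors(sensors, prev_estimate, k_active=8)
print("active sensor indices:", active)
```

At each time step the particle filter would then use only the measurements from `active`, cutting acquisition, storage, and transmission power roughly in proportion to the number of sensors switched off.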
Date Created
2012

Integrated structural health management of complex carbon fiber reinforced composite structures

Description
Structural health management (SHM) is emerging as a vital methodology to help engineers improve the safety and maintainability of critical structures. SHM systems are designed to reliably monitor and test the health and performance of structures in aerospace, civil, and mechanical engineering applications. SHM combines multidisciplinary technologies including sensing, signal processing, pattern recognition, data mining, high-fidelity probabilistic progressive damage models, physics-based damage models, and regression analysis. Due to the wide application of carbon-fiber-reinforced composites and their multiscale failure mechanisms, SHM research on composite structures deserves particular emphasis. This research develops a comprehensive framework for the damage detection, localization, and quantification, and for the prediction of the remaining useful life, of complex composite structures. To interrogate a composite structure, guided wave propagation is applied to thin structures such as beams and plates. Piezoelectric transducers are selected because of their versatility, which allows them to be used as both sensors and actuators. Feature extraction from guided wave signals is critical for demonstrating the presence of damage and estimating the damage locations, and advanced signal processing techniques are employed to extract robust features and information. To provide a better damage estimate for accurate life estimation, probabilistic regression analysis is used to obtain a prediction model for the prognosis of complex structures subject to fatigue loading. Special effort has been applied to extending these SHM techniques to aerospace and spacecraft structures, such as UAV composite wings and deployable composite boom structures, with the necessary modifications made to meet the unique requirements of these structures. The developed SHM algorithms are able to accurately detect and quantify impact damage as well as introduced matrix cracking.
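As one concrete instance of the probabilistic regression step, the sketch below uses Gaussian process regression (a common choice; the abstract does not name the thesis's specific model) to map a guided-wave damage feature to a remaining-useful-life estimate with an uncertainty band. The training data are fabricated for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: a guided-wave damage feature observed at known
# fatigue cycle counts, with failure assumed at 100,000 cycles.
rng = np.random.default_rng(4)
cycles = np.linspace(0.0, 8e4, 20)
feature = 0.02 + 1e-5 * cycles + rng.normal(0.0, 0.02, cycles.size)
rul = 1e5 - cycles                      # remaining useful life in cycles

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(feature.reshape(-1, 1), rul)

# Prognosis for a new measurement: mean RUL plus its predictive uncertainty.
mean, std = gp.predict(np.array([[0.55]]), return_std=True)
print(f"predicted RUL: {mean[0]:.0f} +/- {std[0]:.0f} cycles")
```

The predictive standard deviation is what makes the prognosis probabilistic: maintenance decisions can be keyed to a lower confidence bound on the remaining life rather than to the mean alone.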
Date Created
2012

Sensor placement and graphical user interface for photovoltaic array monitoring system

Description
With the increased usage of green energy, the number of photovoltaic (PV) arrays used in power generation is increasing rapidly. Many of these arrays are located at remote locations where faults that occur within the array often go unnoticed and unattended for long periods of time. Technicians sent to rectify the faults have to spend a large amount of time determining the location of the fault manually. Automated monitoring systems are needed to obtain information about the performance of the array and to detect faults. Such systems must monitor the DC side of the array in addition to the AC side in order to identify non-catastrophic faults. This thesis focuses on two of the requirements for DC-side monitoring in an automated PV array monitoring system. The first part of the thesis quantifies the advantages of obtaining higher-resolution data from a PV array for the detection of faults. Data for the monitoring system can be gathered for the array as a whole or from additional places within the array, such as individual modules and ends of strings. The fault detection rates and the false positive rates are compared for array-level, string-level, and module-level PV data. Monte Carlo simulations are performed using PV array models developed in Simulink and MATLAB for fault and no-fault cases. The second part describes a graphical user interface (GUI) that can be used to visualize the PV array using module-level monitoring system information. A demonstration GUI is built in MATLAB using data obtained from a PV array test facility in Tempe, AZ. Visualizations are implemented to display information about the array as a whole or about individual modules, and to locate faults in the array.
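The value of higher-resolution data can be previewed with a toy Monte Carlo run in the spirit of the first part (the real study uses Simulink/MATLAB array models; the current levels, noise, and thresholds below are invented). A single faulted module barely moves the array total but stands out clearly at the module level.

```python
import numpy as np

rng = np.random.default_rng(5)
n_mod, trials = 60, 20_000
nominal, noise, drop = 8.0, 0.03, 0.15   # amps, relative spread, fault loss

def one_run(faulty):
    currents = nominal * (1.0 + rng.normal(0.0, noise, n_mod))
    if faulty:
        currents[rng.integers(n_mod)] *= (1.0 - drop)   # one faulted module
    # Array-level test: is the total 3 sigma below its expected value?
    array_alarm = currents.sum() < nominal * n_mod * (1 - 3 * noise / np.sqrt(n_mod))
    # Module-level test: is any module 3 sigma below the fleet median?
    module_alarm = (currents < np.median(currents) * (1 - 3 * noise)).any()
    return array_alarm, module_alarm

faulted = np.array([one_run(True) for _ in range(trials)])
healthy = np.array([one_run(False) for _ in range(trials)])
print("detection rate  (array, module):", faulted.mean(axis=0))
print("false positives (array, module):", healthy.mean(axis=0))
```

With these illustrative numbers the module-level test catches nearly every fault that the array-level test misses, at the price of a higher false positive rate, which is the kind of trade-off the thesis quantifies across array-level, string-level, and module-level data.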
Date Created
2012