An Analysis of the Unmanned Aerial Systems-to-Ground Channel and Joint Sensing and Communications Systems Using Software Defined Radio

Description
Software-defined radio provides users with a low-cost and flexible platform for implementing and studying advanced communications and remote sensing applications. Two such applications are the unmanned aerial system-to-ground communications channel and joint sensing and communications systems. In this work, these applications are studied.

In the first part, unmanned aerial system-to-ground communications channel models are derived from empirical data collected from software-defined radio transceivers in residential and mountainous desert environments using a small (< 20 kg) unmanned aerial system during low-altitude flight (< 130 m). The Kullback-Leibler divergence was employed to characterize the mismatch between the derived models and the empirical data; by this measure, the derived models accurately describe the underlying data.
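
The following is a minimal illustrative sketch (not the dissertation's code) of how a Kullback-Leibler divergence can quantify mismatch between empirical channel samples and a fitted model. The Rician fading assumption, the synthetic samples, and the helper name are all illustrative assumptions.

```python
# Illustrative sketch: quantify model mismatch between empirical fading samples
# and a fitted distribution using the Kullback-Leibler divergence over shared bins.
import numpy as np
from scipy import stats

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete distributions defined on the same bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical empirical envelope samples (the real data came from SDR flight tests).
rng = np.random.default_rng(0)
samples = stats.rice.rvs(b=2.0, scale=0.5, size=5000, random_state=rng)

# Fit a candidate model (a Rician distribution is assumed purely for illustration).
b_hat, loc_hat, scale_hat = stats.rice.fit(samples, floc=0)

# Compare the empirical histogram against the fitted model on the same bins.
counts, edges = np.histogram(samples, bins=50)
centers = 0.5 * (edges[:-1] + edges[1:])
model_prob = stats.rice.pdf(centers, b_hat, loc=loc_hat, scale=scale_hat) * np.diff(edges)

print("D_KL(empirical || model) =", kl_divergence(counts, model_prob))
```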

In the second part, an experimental joint sensing and communications system is implemented using a network of software-defined radio transceivers. A novel co-design receiver architecture is presented and demonstrated within a three-node joint multiple access system topology consisting of an independent radar and communications transmitter along with a joint radar and communications receiver. The receiver tracks an emulated target moving along a predefined path and simultaneously decodes a communications message. Experimental system performance bounds are characterized jointly using the communications channel capacity and novel estimation information rate.
Date Created
2018

Monitor-Based In-Field Wearout Mitigation for CMOS RF Integrated Circuits

Description
Performance failure due to aging is an increasing concern for RF circuits. While most aging studies focus on the concept of mean time to failure, for analog circuits aging results in a continuous degradation in performance before it causes catastrophic failures. In this regard, the lifetime of RF/analog circuits, defined as the point where at least one specification fails, is determined not just by aging at the device level, but also by the slack in the specifications, process variations, and the stress conditions on the devices. In this dissertation, a methodology for analyzing the performance degradation of RF circuits caused by aging mechanisms in MOSFET devices at design time (pre-silicon) is first presented. An algorithm to determine reliability hotspots in the circuit is proposed, and design-time optimization methods that enhance the lifetime by making the circuit components most likely to fail more reliable are applied. RF circuits are used as test cases to demonstrate that the lifetime can be enhanced using the proposed design-time technique with low area overhead and no performance impact.

Secondly, an in-field technique for monitoring and recovering the performance of aged RF circuits is discussed. The proposed technique is based on two phases. During design time, degradation profiles of the aged circuit are obtained through simulations; from these profiles, hotspot identification of the aged RF circuit is conducted, and a circuit variable that is easy to measure yet highly correlated with the performance of the primary circuit is selected for monitoring. After deployment, an on-chip DC monitor is periodically activated, and its results are used to monitor, and if necessary recover, the circuit performance degraded by aging mechanisms. It is also necessary to co-design the monitoring and recovery mechanism along with the primary circuit for minimal performance impact. A low noise amplifier (LNA) and LC-tank oscillators are fabricated as case studies to demonstrate that the lifetime can be enhanced in the field using the proposed monitoring and recovery techniques. Experimental results with the fabricated LNA/oscillator chips show the performance degradation under accelerated stress conditions, and this loss can be recovered by the proposed mitigation scheme.
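
A rough sketch of the monitor-then-recover loop described above is given below. The regression mapping, the DC monitor values, the gain specification, and the bias-DAC recovery knob are all hypothetical; they only illustrate the idea of predicting performance from an easy-to-measure variable and acting when the prediction violates the spec.

```python
# Illustrative sketch (names and numbers are hypothetical): map an on-chip DC
# monitor reading to a predicted performance metric via a regression model built
# from design-time degradation profiles, then trigger recovery (here, a bias-DAC
# step) when the predicted loss exceeds the specification slack.
import numpy as np

# Design-time degradation profile: (DC monitor voltage, LNA gain in dB) pairs
# obtained from aging simulations; values are made up for illustration.
monitor_v = np.array([0.620, 0.615, 0.610, 0.604, 0.598, 0.590])
lna_gain_db = np.array([15.0, 14.8, 14.5, 14.1, 13.7, 13.2])

# Fit a simple linear predictor: gain ≈ a * V_monitor + b.
a, b = np.polyfit(monitor_v, lna_gain_db, deg=1)

GAIN_SPEC_DB = 14.0  # hypothetical minimum acceptable gain

def in_field_check(v_monitor_reading, bias_dac_code):
    """Periodic in-field check: predict the gain from the DC monitor reading and
    bump the bias DAC code if the prediction falls below the specification."""
    predicted_gain = a * v_monitor_reading + b
    if predicted_gain < GAIN_SPEC_DB:
        bias_dac_code += 1  # crude recovery knob: raise the bias current one step
    return predicted_gain, bias_dac_code

print(in_field_check(0.600, bias_dac_code=12))
```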
Date Created
2017

Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

Description
Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. Similar to software engineering practices, incremental model design is required for complex system design. As a result, models at early increments are significantly simpler relative to the real systems. While experimenting (verification or validation) on models at early increments is computationally less demanding, the results of these experiments are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation.

A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoCs rely on multiple models developed using various modeling frameworks. It is useful to develop frameworks that can formalize the relationships among these models. Fine-grain models are derived using their coarse-grain counterparts. Moreover, validation and verification capabilities at various design stages, enabled through disciplined model conversion, are very beneficial.

In this research, Multiresolution Modeling (MRM) is used for system-level design of NoCs. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism that supports model checking is proposed. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers, which can be applied seamlessly for model checking and simulation purposes.
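
For readers unfamiliar with DEVS, the sketch below shows the basic structure of a classic-DEVS-style atomic model. DEVS-Suite itself is a Java framework; this Python fragment is only meant to illustrate the state, time-advance, transition, and output functions that such models, and the proposed model-checking variant, are built from.

```python
# Minimal sketch of a classic-DEVS-style atomic model (illustrative only).
INFINITY = float("inf")

class Buffer:
    """A one-place buffer: holds a packet for a fixed service time."""
    def __init__(self, service_time=2.0):
        self.service_time = service_time
        self.phase, self.sigma, self.job = "passive", INFINITY, None

    def time_advance(self):                 # ta(s)
        return self.sigma

    def external(self, elapsed, packet):    # delta_ext(s, e, x)
        if self.phase == "passive":
            self.phase, self.sigma, self.job = "busy", self.service_time, packet

    def output(self):                       # lambda(s), called just before delta_int
        return self.job

    def internal(self):                     # delta_int(s)
        self.phase, self.sigma, self.job = "passive", INFINITY, None
```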

The Multiresolution Modeling and the verification and validation capabilities of this framework complement one another. MRM manages the scale and complexity of models, which in turn can reduce V&V time and effort; conversely, V&V helps ensure the correctness of models at multiple resolutions. This framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
Date Created
2017

An Electrical-Stimulus-Only BIST IC For Capacitive MEMS Accelerometer Sensitivity Characterization

Description
Testing and calibration constitute a significant part of the overall manufacturing cost of microelectromechanical system (MEMS) devices. Developing a low-cost testing and calibration scheme applicable at the user side that ensures continuous reliability and accuracy is a crucial need. The main purpose of testing is to eliminate defective devices and to verify that the qualifications of a product are met. The calibration process for capacitive MEMS devices, for the most part, entails the determination of the mechanical sensitivity. In this work, a physical-stimulus-free built-in self-test (BIST) integrated circuit (IC) design characterizing the sensitivity of capacitive MEMS accelerometers is presented. The BIST circuitry can extract the amplitude and phase response of the acceleration sensor's mechanics under electrical excitation to within 0.55% error with respect to its mechanical sensitivity under physical stimulus. Sensitivity characterization is performed using a low-computational-complexity multivariate linear regression model. The BIST circuitry maximizes the use of the existing analog and mixed-signal readout signal chain and the host processor core, without the need for computationally expensive Fast Fourier Transform (FFT)-based approaches. The BIST IC is designed and fabricated in a 0.18-µm CMOS technology. The sensor analog front-end and BIST circuitry are integrated with a three-axis, low-g capacitive MEMS accelerometer in a single hermetically sealed package. The BIST circuitry occupies 0.3 mm², within a total readout IC area of 1.0 mm², and consumes 8.9 mW during self-test operation.
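
As a rough illustration of the regression-based characterization mentioned above, the sketch below fits mechanical sensitivity to electrically stimulated amplitude and phase features by least squares. The feature set, training values, and helper name are synthetic assumptions, not the dissertation's actual coefficients or flow.

```python
# Illustrative sketch: estimate mechanical sensitivity from features measured under
# electrical-only excitation using multivariate linear regression, the low-complexity
# alternative to FFT-based characterization. All numbers here are synthetic.
import numpy as np

# Each row: [amplitude response, phase response] of the sensor mechanics under
# electrical stimulus; y: sensitivity measured under physical stimulus (training set).
X = np.array([[1.02, 0.31], [0.98, 0.29], [1.10, 0.35], [0.95, 0.27], [1.05, 0.33]])
y = np.array([2.01, 1.94, 2.18, 1.88, 2.08])   # e.g. mV/g

# Least-squares fit of y ≈ w_amp * amplitude + w_ph * phase + c.
A = np.column_stack([X, np.ones(len(X))])
w_amp, w_ph, c = np.linalg.lstsq(A, y, rcond=None)[0]

def predict_sensitivity(amplitude, phase):
    """Self-test-time sensitivity estimate from electrical-stimulus features."""
    return w_amp * amplitude + w_ph * phase + c

print(predict_sensitivity(1.00, 0.30))
```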
Date Created
2017

Dual Application ADC using Three Calibration Techniques in 10nm Technology

Description
In this work, a 12-bit ADC with three types of calibration is proposed for both high-speed security applications and a precision application. The converter serves both applications because it satisfies the necessary specifications, such as minimal device mismatch and offset, programmability to counteract aging effects, high SNR for increased ENOB, and a fast conversion rate. The designed converter implements three types of calibration for offset and gain error: a correlated double sampling integrator used in the first stage of the ADC, a power-up auto-zero technique implemented in the digital code to store any offset and subtract it out when necessary, and automatic start-up and manual calibration to control the common-mode voltages. The proposed ADC was designed in Intel's 10 nm technology. This ADC is designed to monitor DC voltages for the precision and high-speed applications. The conversion time of the analog-to-digital converter is programmable to 7 µs or 910 ns for the precision or high-speed application, respectively. The range of the input and reference supply is 0 to 1.25 V. The ADC is designed in Intel 10 nm technology using a 1.8 V supply and occupies an area of 0.0705 mm². This thesis explores the challenges of designing a dual-purpose analog-to-digital converter, which include: 1) increased offset in 10 nm technology, 2) a dual-application ADC that must be both accurate and fast, 3) reducing the parasitic capacitance of the ADC, and 4) the gain error that occurs in ADCs.
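
The first calibration mechanism listed above relies on correlated double sampling. The fragment below is a generic numerical illustration of CDS offset cancellation, not the fabricated circuit: the offset, noise level, and sampling helper are arbitrary assumptions.

```python
# Generic illustration of correlated double sampling (CDS): sample once with the
# input shorted (reset phase) and once with the signal applied, then subtract, so
# the amplifier offset common to both samples cancels to first order.
import numpy as np

rng = np.random.default_rng(1)
offset = 3e-3                 # 3 mV amplifier offset (unknown to the converter)
signal = 0.250                # 250 mV input to be converted

def sample(v_in):
    """One sampled value: input + offset + a little wideband noise."""
    return v_in + offset + rng.normal(scale=50e-6)

reset_sample  = sample(0.0)      # phase 1: input shorted to ground
signal_sample = sample(signal)   # phase 2: input applied

cds_output = signal_sample - reset_sample   # offset cancels to first order
print(f"raw error: {abs(signal_sample - signal)*1e3:.3f} mV")
print(f"CDS error: {abs(cds_output - signal)*1e3:.3f} mV")
```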
Date Created
2017

Optimizing Heap Data Management on Software Managed Manycore Architectures

Description
Caches pose a serious limitation in scaling many-core architectures since the area and power demands of maintaining cache coherence increase rapidly with the number of cores. Scratch-Pad Memories (SPMs) provide a cheaper and lower-power alternative that can be used to build a more scalable many-core architecture. The trade-off of substituting SPMs for caches, however, is that the data must be explicitly managed in software. Heap management on SPM poses a major challenge due to the highly dynamic nature of heap data access. Most existing heap management techniques implement a software caching scheme on the SPM, emulating the behavior of hardware caches. The state-of-the-art heap management scheme implements a 4-way set-associative software cache on the SPM for a single program running with one thread on one core. While the technique works correctly, it suffers from significant performance overhead. This work presents a series of efficient compiler-based heap management approaches that reduce the heap management overhead through several optimization techniques. Experimental results on benchmarks from MiBench (Guthaus et al., 2001) executed on an SMM processor modeled in gem5 (Binkert et al., 2011) demonstrate that our approach (implemented in LLVM v3.8; Lattner and Adve, 2004) can improve execution time by 80% on average compared to the previous state of the art.
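
The toy model below sketches the software-caching baseline the abstract describes: a 4-way set-associative software cache on the scratch-pad with LRU replacement. The real scheme is a compiler-inserted runtime (the DMA fill and write-back are omitted here); class and field names are illustrative.

```python
# Toy model of a 4-way set-associative software cache on the SPM with LRU
# replacement, illustrating the per-access lookup cost that the proposed
# compiler optimizations aim to reduce.
from collections import OrderedDict

class SPMHeapCache:
    def __init__(self, num_sets=64, ways=4, block=32):
        self.block, self.num_sets, self.ways = block, num_sets, ways
        self.sets = [OrderedDict() for _ in range(num_sets)]  # block tag -> SPM copy

    def access(self, heap_addr):
        """Translate a heap address to its SPM copy, filling on a miss."""
        tag = heap_addr // self.block
        s = self.sets[tag % self.num_sets]
        if tag in s:
            s.move_to_end(tag)            # hit: refresh LRU order
            return s[tag], True
        if len(s) >= self.ways:           # miss: evict LRU block (write-back omitted)
            s.popitem(last=False)
        s[tag] = f"spm_block_for_{tag}"   # stand-in for a DMA fill from main memory
        return s[tag], False
```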
Date Created
2017

Single-inductor, dual-input CCM boost converter for multi-junction PV energy harvesting

Description
This thesis presents a power harvesting system combining energy from sub-cells of multi-junction photovoltaic (MJ-PV) cells. A dual-input, inductor time-sharing boost converter in continuous conduction mode (CCM) is proposed. A hysteresis inductor current regulation is designed to reduce the cross regulation caused by inductor sharing in CCM. A modified hill-climbing algorithm is implemented to achieve maximum power point tracking (MPPT). A dual-path architecture is implemented to provide a regulated 1.8 V output. A lossless current sensor is proposed to monitor the transient inductor current, and a time-based power monitor is proposed to track the PV power. The PV input provides 65 mW of power. Measured results show that the peak efficiency achieved is around 85%. The power switches and control circuits are implemented in a standard 0.18-µm CMOS process.
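
For orientation, a generic hill-climbing (perturb-and-observe) MPPT loop is sketched below. It is not the thesis's modified algorithm, which uses a time-based power monitor rather than the explicit P = V*I shown here; the callables and step sizes are assumptions.

```python
# Generic hill-climbing (perturb-and-observe) MPPT loop: perturb the duty cycle,
# observe whether the extracted power rose or fell, and reverse direction when it fell.
def hill_climb_mppt(read_pv_voltage, read_pv_current, set_duty, duty=0.5,
                    step=0.01, iterations=100):
    last_power = read_pv_voltage() * read_pv_current()
    direction = +1
    for _ in range(iterations):
        duty = min(max(duty + direction * step, 0.05), 0.95)
        set_duty(duty)
        power = read_pv_voltage() * read_pv_current()
        if power < last_power:      # wrong way: reverse the perturbation
            direction = -direction
        last_power = power
    return duty
```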
Date Created
2017

Removing Reliance on Tester of a VCO-Based ADC Using an On-Chip DAC

Description
Accessibility to the internal nodes of an analog/mixed-signal circuit during testing is extremely difficult. Furthermore, with technology scaling, the effect of process variations becomes more pronounced, which in turn affects the test time, test cost, and die yield. As devices become more unreliable, the probability of failure of a die increases and yield decreases, affecting the quality and cost of test. Therefore, test time minimization and test cost reduction are important. Moreover, process variations can affect the performance of analog/mixed-signal circuits. The performance of a System-on-Chip (SoC), which tends to integrate multiple bandgap reference circuits (BGRs), is therefore affected by the wide variations in BGR behavior caused by increasing process variations. Calibration of the BGR is thus important in the test process so as to obtain accuracy in the measurement of the BGR output voltage. Furthermore, as test time minimization and test cost reduction are important in a test process, Built-In Self-Test (BIST) techniques have become more popular.

To obtain accuracy in the measurement of the BGR output voltage, a VCO-based zoom-in ADC architecture was designed to calibrate the BGR output voltage, which dictates the circuit performance. However, the zoom-in voltages for this circuit are generated using a tester. As the number of such ADCs integrated on an SoC increases, the number of nodes to be accessed by the tester increases. Moreover, the capacitance of the probe affects the accuracy of the input voltages applied to the VCO-based ADC. Therefore, accessibility decreases with increased scaling. Further, generating a wide range of inputs becomes burdensome for the tester.

For all of the above reasons, on-chip DAC circuitry is proposed as part of this thesis to decrease the reliance on the tester. The suggested DAC architecture is a simple resistor string whose resolution depends on the number of zoom-in voltages to be generated. This architecture has linear and monotonic behavior, which is very important because the VCO has highly nonlinear behavior. Thus, the voltages generated by the DAC should be accurate, with a worst-case integral nonlinearity (INL) error of less than 1 mV considering resistor mismatch over process variations. As the number of VCO-based ADCs on a chip increases, the test time savings increase exponentially. Thus, the introduction of on-chip DAC circuitry offers several advantages: it decreases the accessibility requirements during the test process, occupies little area, reduces test cost, and, most importantly, decreases the reliance on the tester.
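
The short sketch below illustrates why a resistor string is a natural fit here: its output is inherently monotonic, and its INL is set by accumulated resistor mismatch. The tap count and mismatch sigma are hypothetical; this is a Monte-Carlo-style illustration, not the thesis's design data.

```python
# Illustrative resistor-string DAC model: tap voltages from mismatched unit
# resistors, compared against the ideal levels to get an endpoint-corrected INL.
import numpy as np

rng = np.random.default_rng(2)
n_taps, vref, sigma_mismatch = 64, 1.0, 0.002    # 64 zoom-in levels, 0.2% sigma (assumed)

resistors = 1.0 + rng.normal(scale=sigma_mismatch, size=n_taps)
tap_voltages = vref * np.cumsum(resistors) / resistors.sum()

ideal = vref * np.arange(1, n_taps + 1) / n_taps
inl_volts = tap_voltages - ideal
print(f"worst-case INL = {np.max(np.abs(inl_volts))*1e3:.3f} mV "
      f"(target in the text: < 1 mV)")
```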
Date Created
2017

Designing Low Cost Error Correction Schemes for Improving Memory Reliability

Description
Memory systems are becoming increasingly error-prone, and thus guaranteeing their reliability is a major challenge. In this dissertation, new techniques to improve the reliability of both 2D and 3D dynamic random access memory (DRAM) systems are presented. The proposed schemes have higher reliability than current systems but with lower power, better performance and lower hardware cost.

First, a low overhead solution that improves the reliability of commodity DRAM systems with no change in the existing memory architecture is presented. Specifically, five erasure and error correction (E-ECC) schemes are proposed that provide at least Chipkill-Correct protection for x4 (Schemes 1, 2 and 3), x8 (Scheme 4) and x16 (Scheme 5) DRAM systems. All schemes have superior error correction performance due to the use of strong symbol-based codes. In addition, the use of erasure codes extends the lifetime of the 2D DRAM systems.

Next, two error correction schemes are presented for 3D DRAM memory systems. The first scheme is a rate-adaptive, two-tiered error correction scheme (RATT-ECC) that provides strong reliability (a 10^10x reduction in raw FIT rate) for an HBM-like 3D DRAM system that services CPU applications. The rate-adaptive feature of RATT-ECC enables permanent bank failures to be handled through sparing. It can also be used to significantly reduce the refresh power consumption without decreasing the reliability and timing performance.

The second scheme is a two-tiered error correction scheme (Config-ECC) that supports different-sized accesses in GPU applications with strong reliability. It addresses the mismatch between the data access size and a fixed-size ECC scheme by designing a flexible, product-code-based scheme. Config-ECC is built around a core unit designed for 32B accesses, with a simple extension to support 64B and 128B accesses. Compared to fixed 32B and 64B ECC schemes, Config-ECC reduces the failure-in-time (FIT) rate by 200x and 20x, respectively. It also reduces the memory energy by 17% (in the dynamic mode) and 21% (in the static mode) compared to a state-of-the-art fixed 64B ECC scheme.
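
To make the product-code idea concrete, the fragment below shows a generic single-parity product code over a 32 B unit: data arranged as a 2-D grid with separate row and column checks, so a larger access can be built from several such units. Config-ECC's actual codes are stronger than this; the layout and function names here are illustrative assumptions.

```python
# Generic single-parity product code over a 32 B unit (illustration only).
import numpy as np

def encode_32B(data: bytes):
    """Return (row_parity, col_parity) for a 32-byte block arranged as 4 x 8."""
    grid = np.frombuffer(data, dtype=np.uint8).reshape(4, 8)
    row_parity = np.bitwise_xor.reduce(grid, axis=1)   # one check byte per row
    col_parity = np.bitwise_xor.reduce(grid, axis=0)   # one check byte per column
    return row_parity, col_parity

def locate_single_byte_error(data: bytes, row_parity, col_parity):
    """A single corrupted byte shows up at the intersection of the failing
    row check and the failing column check."""
    grid = np.frombuffer(data, dtype=np.uint8).reshape(4, 8)
    bad_rows = np.flatnonzero(np.bitwise_xor.reduce(grid, axis=1) != row_parity)
    bad_cols = np.flatnonzero(np.bitwise_xor.reduce(grid, axis=0) != col_parity)
    return (bad_rows[0], bad_cols[0]) if bad_rows.size and bad_cols.size else None
```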
Date Created
2017

Scalable register file architecture for CGRA accelerators

Description
Coarse-grained Reconfigurable Arrays (CGRAs) are promising accelerators capable of accelerating even non-parallel loops and loops with low trip counts. One challenge in compiling for CGRAs is to manage both recurring and nonrecurring variables in the register file (RF) of the CGRA. Although prior works have managed recurring variables via a rotating RF, they access the nonrecurring variables through either a global RF or a constant memory. The former does not scale well, and the latter degrades the mapping quality. This work proposes a hardware-software codesign approach to manage all the variables in a local nonrotating RF. The hardware provides a modulo-addition-based indexing mechanism to enable correct addressing of recurring variables in a nonrotating RF. The compiler determines the number of registers required for each recurring variable and configures the boundary between the registers used for recurring and nonrecurring variables. The compiler also pre-loads the read-only variables and constants into the local registers in the prologue of the schedule. Synthesis and place-and-route results of the previous and the proposed RF designs show that the proposed solution achieves a 17% better cycle time. Experiments mapping several important and performance-critical loops collected from MiBench show that the proposed approach improves performance (through better mapping) by 18% compared to using constant memory.
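
A small sketch of the modulo-addition indexing idea follows: each recurring variable gets a base register and a compiler-determined window of K registers below the boundary, and iteration i addresses register base + (i mod K), while nonrecurring variables live above the boundary in plain registers. The class, window sizes, and register-file size are hypothetical illustrations, not the proposed RTL or compiler pass.

```python
# Sketch of modulo-addition-based indexing of recurring variables in a local,
# nonrotating register file (sizes and names are hypothetical).
class LocalRegisterFile:
    def __init__(self, size=16, boundary=8):
        self.regs = [0] * size
        self.boundary = boundary          # registers [0, boundary) hold recurring variables

    def recurring_index(self, base, window, iteration):
        """Hardware-style address: base + (iteration mod window), kept below the boundary."""
        assert base + window <= self.boundary
        return base + (iteration % window)

    def write_recurring(self, base, window, iteration, value):
        self.regs[self.recurring_index(base, window, iteration)] = value

    def read_recurring(self, base, window, iteration, distance):
        """Read the value produced `distance` iterations earlier."""
        return self.regs[self.recurring_index(base, window, iteration - distance)]

rf = LocalRegisterFile()
for i in range(6):                        # variable 'a' uses base=0, window=3
    rf.write_recurring(0, 3, i, value=i * i)
print(rf.read_recurring(0, 3, 6, distance=2))   # value written at iteration 4 -> 16
```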
Date Created
2016