Full metadata
Title
Computational Imaging for Energy-Efficient Cameras: Adaptive ROI-based Object Tracking and Optically Defocused Event-based Sensing.
Description
Computer vision is becoming an essential component of embedded system applications such as smartphones, wearables, autonomous systems and the internet of things (IoT). These applications are generally deployed in environments with limited energy, memory bandwidth and computational resources. This trend is driving the development of energy-efficient image processing solutions from sensing to computation. In this thesis, I explore different alternatives for implementing energy-efficient computer vision systems. First, I present a field programmable gate array (FPGA) implementation of an adaptive subsampling algorithm for region-of-interest (ROI)-based object tracking. By implementing the computationally intensive sections of this algorithm on an FPGA, I aim to offload work from energy-inefficient graphics processing units (GPUs) and/or general-purpose central processing units (CPUs). I also present a working system that executes this algorithm with near real-time latency on a standalone embedded device. Second, I present a neural network-based pipeline to improve the performance of event-based cameras under non-ideal optical conditions. Event-based cameras, or dynamic vision sensors (DVS), are bio-inspired sensors that measure logarithmic per-pixel brightness changes in a scene. Compared to standard frame-based cameras, their advantages include high dynamic range, low latency and ultra-low power consumption. Several tasks have been proposed to take advantage of these novel sensors, but they rely on perfectly calibrated optical lenses that are in focus. In this work, I propose a method to reconstruct events captured with an out-of-focus event camera so they can be fed into an intensity reconstruction task. The network is trained on a dataset generated by simulating defocus blur in sequences from object tracking datasets such as LaSOT and OTB100. I also test the generalization performance of this network on scenes captured with a DAVIS event-based sensor equipped with an out-of-focus lens.
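The description leans on two technical ideas that a short sketch can make concrete: the logarithmic change-detection rule by which a DVS emits events, and the simulation of defocus blur on conventional frame sequences. The Python sketch below is a minimal, hypothetical illustration only, not the thesis implementation; the contrast threshold, the Gaussian approximation of the defocus point spread function (a disk-shaped kernel would be closer to true optical defocus), and all function names are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def events_from_frames(frames, threshold=0.2, eps=1e-6):
    """Toy DVS event-generation model: a pixel emits an event whenever its
    log intensity changes by more than `threshold` since its last event.
    `frames` is a (T, H, W) array of grayscale intensities in [0, 1].
    Returns a list of (t, y, x, polarity) tuples."""
    log_ref = np.log(frames[0] + eps)  # per-pixel log intensity at last event
    events = []
    for t in range(1, frames.shape[0]):
        log_cur = np.log(frames[t] + eps)
        diff = log_cur - log_ref
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        # Reset the reference only at pixels that fired.
        log_ref[fired] = log_cur[fired]
    return events

def simulate_defocus(frames, sigma=3.0):
    """Approximate defocus blur by convolving each frame with a Gaussian
    kernel (an illustrative stand-in for the true defocus PSF)."""
    return np.stack([gaussian_filter(f, sigma) for f in frames])

# Tiny demo: a bright square drifting right across a dark background.
T, H, W = 10, 32, 32
frames = np.full((T, H, W), 0.1)
for t in range(T):
    frames[t, 12:20, 2 + 2 * t : 10 + 2 * t] = 0.9
blurred = simulate_defocus(frames, sigma=2.0)
print(len(events_from_frames(frames)), "events from sharp frames")
print(len(events_from_frames(blurred)), "events from defocused frames")
```

Under this toy model, defocus smears the sharp edges that trigger events, which is the degradation the proposed network is trained to undo before intensity reconstruction.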
Date Created
2022
Contributors
- Torres Muro, Victor Isaac (Author)
- Jayasuriya, Suren (Thesis advisor)
- Spanias, Andreas (Committee member)
- Seo, Jae-Sun (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
87 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.2.N.171616
Level of coding
minimal
Cataloging Standards
Note
Partial requirement for: M.S., Arizona State University, 2022
Field of study: Engineering
System Created
- 2022-12-20 06:17:32
System Modified
- 2022-12-20 06:17:32