Full metadata
Title
Building Constraints, Geometric Invariants and Interpretability in Deep Learning: Applications in Computational Imaging and Vision
Description
Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally unclear how architectures should be designed for different applications or how neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into feature maps, parameters and the design of algorithms involving neural networks for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
Date Created
2019
Contributors
- Lohit, Suhas Anand (Author)
- Turaga, Pavan (Thesis advisor)
- Spanias, Andreas (Committee member)
- Li, Baoxin (Committee member)
- Jayasuriya, Suren (Committee member)
- Arizona State University (Publisher)
Extent
169 pages
Language
eng
Copyright Statement
In Copyright
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.55542
Level of coding
minimal
Note
Doctoral Dissertation Electrical Engineering 2019
System Created
- 2020-01-14 09:15:13
System Modified
- 2021-08-26 09:47:01