Differentiable Programming for Physics-based Hyperspectral Unmixing

Description
Hyperspectral unmixing is an important remote sensing task with applications including material identification and analysis. Characteristic spectral features make many pure materials identifiable from their visible-to-infrared spectra, but quantifying their presence within a mixture is a challenging task due to nonlinearities and factors of variation. In this thesis, physics-based approaches are incorporated into an end-to-end spectral unmixing algorithm via differentiable programming. First, sparse regularization and constraints are implemented by adding differentiable penalty terms to a cost function to avoid unrealistic predictions. Second, a physics-based dispersion model is introduced to simulate realistic spectral variation, and an efficient method to fit its parameters is presented. This dispersion model is then utilized as a generative model within an analysis-by-synthesis spectral unmixing algorithm. Further, a technique for inverse rendering using a convolutional neural network to predict parameters of the generative model is introduced to enhance performance and speed when training data are available. Results achieve state-of-the-art performance on both infrared and visible-to-near-infrared (VNIR) datasets compared to baselines, and show the promise of combining physics-based models with deep learning for hyperspectral unmixing.
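
As an illustration of the first contribution, a minimal sketch of a differentiable unmixing cost with penalty terms is shown below; the variable names, penalty weights, and the use of PyTorch are assumptions for illustration, not the thesis implementation.

```python
# A minimal sketch (not the thesis code): differentiable penalty terms added
# to a least-squares unmixing cost. E, lam_sparse, lam_asc are assumed names.
import torch
import torch.nn.functional as F

def unmixing_loss(y, E, a, lam_sparse=0.01, lam_asc=1.0):
    """y: observed spectrum (B,); E: endmember matrix (B, M); a: abundances (M,)."""
    recon = E @ a                                    # linear mixing model
    fit = torch.sum((y - recon) ** 2)                # data-fidelity term
    sparsity = lam_sparse * torch.sum(torch.abs(a))  # L1 penalty: few active materials
    asc = lam_asc * (torch.sum(a) - 1.0) ** 2        # soft abundance sum-to-one constraint
    return fit + sparsity + asc

# Nonnegativity is kept differentiable by optimizing an unconstrained z
# and mapping it through softplus so that a = softplus(z) >= 0.
y, E = torch.rand(31), torch.rand(31, 4)             # toy spectrum and endmembers
z = torch.zeros(4, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    unmixing_loss(y, E, F.softplus(z)).backward()
    opt.step()
```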
Date Created
2020
Agent

Investigating Methods of Achieving Photorealistic Materials for Augmented Reality Applications on Mobile Devices

Description
As the prevalence of augmented reality (AR) technology has increased, so too have methods for improving the appearance and behavior of computer-generated objects. This is especially significant as AR applications now expand into territories outside of the entertainment sphere and can be utilized for numerous purposes, including but not limited to education, specialized occupational training, retail and online shopping, design, marketing, and manufacturing. Due to the nature of AR technology, where computer-generated objects are placed into a real-world environment, a decision has to be made regarding the visual connection between the tangible and the intangible. Should the objects blend seamlessly into their environment or purposefully stand out? This is not purely a stylistic choice. A developer must consider how their application will be used: in many instances, an optimal user experience is facilitated by mimicking the real world as closely as possible, and even simpler applications, such as those built primarily for mobile devices, can benefit from realistic AR. The challenge lies in creating an immersive user experience that does not rely on computationally expensive graphics or heavy-duty models. The research contained in this thesis provides several ways of achieving photorealistic rendering in AR applications using a range of techniques, all of which are supported on mobile devices. These methods can be employed within the Unity Game Engine and incorporate shaders, render pipelines, node-based editors, post-processing, and light estimation.
Date Created
2020-05
Agent

Viewpoint Recommendation for Aesthetic Photography

Description
This thesis addresses the problem of recommending a viewpoint for aesthetic photography. Viewpoint recommendation is the task of suggesting the best camera pose for capturing a visually pleasing photograph of the subject of interest using an end-user device such as a drone, mobile robot, or smartphone. Solving this problem makes it possible to capture visually pleasing photographs autonomously in aerial photography, wildlife photography, landscape photography, and personal photography.

The viewpoint recommendation problem can be divided into two stages: (a) generating a set of dense novel views from the basis views captured of the subject, which helps in understanding the scene and how the subject looks from different viewpoints; and (b) scoring each novel view by how aesthetically pleasing it is. The viewpoint with the highest aesthetic score is recommended for capturing a visually pleasing photograph.
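
A minimal sketch of this two-stage pipeline follows; render_novel_view and aesthetic_score are illustrative stand-ins, not the thesis models.

```python
# A minimal sketch of the two-stage pipeline; both helper functions are
# illustrative stand-ins, not the thesis method.
import numpy as np

def render_novel_view(basis_views, pose):
    # Stand-in for view synthesis: blend basis images by pose proximity.
    w = np.exp(-np.linalg.norm(basis_views["poses"] - pose, axis=1))
    return np.tensordot(w / w.sum(), basis_views["images"], axes=1)

def aesthetic_score(image):
    # Stand-in for a learned aesthetics model; here a crude sharpness proxy.
    gy, gx = np.gradient(image.mean(axis=-1))
    return float(np.mean(gx ** 2 + gy ** 2))

def recommend_viewpoint(basis_views, candidate_poses):
    views = [render_novel_view(basis_views, p) for p in candidate_poses]
    scores = [aesthetic_score(v) for v in views]
    return candidate_poses[int(np.argmax(scores))]   # highest-scoring pose

basis = {"poses": np.random.rand(4, 6), "images": np.random.rand(4, 64, 64, 3)}
best_pose = recommend_viewpoint(basis, [np.random.rand(6) for _ in range(100)])
```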
Date Created
2019
Agent

Building Constraints, Geometric Invariants and Interpretability in Deep Learning: Applications in Computational Imaging and Vision

Description
Over the last decade, deep neural networks, also known as deep learning, combined with large databases and specialized hardware for computation, have made major strides in important areas such as computer vision, computational imaging, and natural language processing. However, such frameworks currently suffer from some drawbacks. For example, it is generally not clear how architectures should be designed for different applications or how neural networks behave under different input perturbations, and it is not easy to make the internal representations and parameters more interpretable. In this dissertation, I propose building constraints into feature maps, parameters, and the design of algorithms involving neural networks for applications in low-level vision problems such as compressive imaging and multi-spectral image fusion, and high-level inference problems including activity and face recognition. Depending on the application, such constraints can be used to design architectures that are invariant or robust to certain nuisance factors, more efficient, and, in some cases, more interpretable. Through extensive experiments on real-world datasets, I demonstrate these advantages of the proposed methods over conventional frameworks.
Date Created
2019
Agent

Structured disentangling networks for learning deformation invariant latent spaces

Description
Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent works using deep learning are very effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because no analytical expressions exist to capture these variations, certain factors, such as geometric transforms, can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach in combination with other disentangling strategies, as well as its performance on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.
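
As one illustration of a geometric factor with an analytical expression (a toy example, not the thesis method), a 1D circular shift can be estimated in closed form via cross-correlation and factored out of the representation:

```python
# An illustrative sketch (not the thesis method): a 1D circular shift is a
# geometric factor with an analytical estimator, so it can be factored out
# and reported as an interpretable disentangled value.
import numpy as np

def estimate_shift(x, reference):
    # The peak of the circular cross-correlation gives the shift in closed form.
    corr = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(reference))).real
    return int(np.argmax(corr))

def disentangle(x, reference):
    shift = estimate_shift(x, reference)
    content = np.roll(x, -shift)   # canonical, shift-invariant content
    return content, shift          # (semantic content, geometric factor)

ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False))
x = np.roll(ref, 17) + 0.01 * np.random.randn(128)
content, shift = disentangle(x, ref)
print(shift)  # ~17: the nuisance factor, recovered and interpretable
```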
Date Created
2019
Agent

Adaptive Lighting for Data-Driven Non-Line-Of-Sight 3D Localization

Description
Non-line-of-sight (NLOS) imaging of objects not visible to either the camera or illumination source is a challenging task with vital applications including surveillance and robotics. Recent NLOS reconstruction advances have been achieved using time-resolved measurements. Acquiring these time-resolved measurements requires expensive and specialized detectors and laser sources. This work proposes a data-driven approach for NLOS 3D localization requiring only a conventional camera and projector. The localization is performed by posing it both as a voxel classification problem and as a regression problem. Accuracy of greater than 90% is achieved in localizing an NLOS object to a 5 cm × 5 cm × 5 cm volume in real data. By adopting the regression approach, an object of width 10 cm is localized to approximately 1.5 cm. To generalize to line-of-sight (LOS) scenes with non-planar surfaces, an adaptive lighting algorithm is adopted. This algorithm, based on radiosity, identifies and illuminates the scene patches in the LOS which contribute most to the NLOS light paths, and can factor in system power constraints. Improvements ranging from 6% to 15% in accuracy with a non-planar LOS wall using adaptive lighting are reported, demonstrating the advantage of combining the physics of light transport with active illumination for data-driven NLOS imaging.
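
A minimal sketch of such a localizer is shown below (layer choices and sizes are assumptions, not the thesis network): one backbone feeding a voxel-classification head and a 3D-coordinate regression head.

```python
# A minimal sketch (layer sizes assumed, not the thesis network): one backbone
# feeding a voxel-classification head and a 3D-coordinate regression head.
import torch
import torch.nn as nn

class NLOSLocalizer(nn.Module):
    def __init__(self, n_voxels=64):  # e.g., a 4 x 4 x 4 grid of 5 cm voxels
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classify = nn.Linear(32, n_voxels)  # which voxel holds the object
        self.regress = nn.Linear(32, 3)          # continuous (x, y, z) estimate

    def forward(self, img):
        feat = self.backbone(img)
        return self.classify(feat), self.regress(feat)

logits, xyz = NLOSLocalizer()(torch.rand(1, 3, 128, 128))
# Training would use cross-entropy on logits and an L2 loss on xyz.
```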
Date Created
2019
Agent

Performance Evaluation of Object Proposal Generators for Salient Object Detection

Description
The detection and segmentation of objects appearing in a natural scene, often referred to as Object Detection, has gained a lot of interest in the computer vision field. Although most existing object detectors aim to detect all the objects in a given scene, it is important to evaluate whether these methods can detect the salient objects in the scene when the number of proposals they may generate is limited by timing or computation constraints during execution. Salient objects are objects that tend to be fixated on more by human subjects. The detection of salient objects is important in applications such as image collection browsing, image display on small devices, and perceptual compression.

This thesis proposes a novel evaluation framework that analyzes the performance of popular existing object proposal generators in detecting the most salient objects. This work also shows that, by incorporating saliency constraints, the number of generated object proposals, and thus the computational cost, can be decreased significantly for a target true positive detection rate (TPR).

As part of the proposed framework, salient ground-truth masks are generated from the original ground-truth masks of a given object detection dataset. This salient object location ground-truth data, referred to here as salient ground-truth data for short, denotes only the locations of salient objects. It is obtained by first computing a saliency map for the input image and then using it to assign a saliency score to each object in the image; objects whose saliency scores are sufficiently high are referred to as salient objects. The detection rates of existing object proposal generators are analyzed with respect to both the original ground-truth masks and the generated salient ground-truth masks.
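
A minimal sketch of this scoring step follows; the threshold value and function names are assumed for illustration.

```python
# A sketch of the described scoring step; the 0.5 threshold and all names
# are assumptions, not values from the thesis.
import numpy as np

def salient_ground_truth(saliency_map, object_masks, threshold=0.5):
    """saliency_map: (H, W) in [0, 1]; object_masks: boolean (H, W) arrays."""
    # Score each object by the mean saliency inside its mask, then keep
    # only the sufficiently salient objects as the salient ground truth.
    scores = [float(saliency_map[m].mean()) for m in object_masks]
    return [m for m, s in zip(object_masks, scores) if s >= threshold]
```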

As part of this work, a salient object detection database with salient ground-truth masks was constructed from the PASCAL VOC 2007 dataset. Not only does this dataset aid in analyzing the performance of existing object detectors at salient object detection, but it also helps in developing new object detection methods and evaluating their performance in terms of successful detection of salient objects.
Date Created
2019
Agent

Tree-Based Deep Mixture of Experts with Applications to Visual Saliency Prediction and Quality Robust Visual Recognition

Description
Mixture of experts is a machine learning ensemble approach that consists of individual models that are trained to be "experts" on subsets of the data, and a gating network that provides weights to output a combination of the expert predictions. Mixture of experts models do not currently see wide use due to difficulty in training diverse experts and high computational requirements. This work presents modifications of the mixture of experts formulation that use domain knowledge to improve training, and incorporate parameter sharing among experts to reduce computational requirements.
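
A minimal sketch of this formulation is given below; the use of linear experts and the layer sizes are illustrative assumptions.

```python
# A minimal sketch of the mixture-of-experts formulation: softmax gating
# weights combine the expert outputs. Linear experts are an assumption.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, d_in, d_out, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(d_in, d_out) for _ in range(n_experts))
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                    # (batch, n_experts)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)   # (batch, d_out, n_experts)
        return (outs * w.unsqueeze(1)).sum(dim=-1)                 # weighted combination

y = MixtureOfExperts(8, 3)(torch.rand(5, 8))  # -> shape (5, 3)
```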

First, this work presents an application of mixture of experts models to quality-robust visual recognition. It is shown that human subjects outperform deep neural networks on classification of distorted images, and a model, MixQualNet, that is more robust to distortions is then proposed. The proposed model consists of "experts" that are each trained on a particular type of image distortion. The final output of the model is a weighted sum of the expert models' outputs, where the weights are determined by a separate gating network. The proposed model also incorporates weight sharing to reduce the number of parameters as well as to increase performance.

Second, an application of mixture of experts to predict visual saliency is presented. A computational saliency model attempts to predict where humans will look in an image. In the proposed model, each expert network is trained to predict saliency for a set of closely related images. The final saliency map is computed as a weighted mixture of the expert networks' outputs, with weights determined by a separate gating network. The proposed model achieves better performance than several other visual saliency models and a baseline non-mixture model.

Finally, this work introduces a saliency model that is a weighted mixture of models trained for different levels of saliency. Levels of saliency include high saliency, which corresponds to regions where almost all subjects look, and low saliency, which corresponds to regions where some, but not all subjects look. The weighted mixture shows improved performance compared with baseline models because of the diversity of the individual model predictions.
Date Created
2018
Agent

Characterization of Energy and Performance Bottlenecks in an Omni-directional Camera System

Description
Generating real-world content for VR is challenging in terms of capturing and processing at high resolution and high frame rates. The content needs to represent a truly immersive experience, where the user can look around in a 360-degree view and perceive the depth of the scene. Existing solutions only capture on the device and offload the compute load to a server, but offloading large amounts of raw camera data incurs long latencies and poses difficulties for real-time applications. By capturing and computing on the edge, we can closely integrate the systems and optimize for low latency. However, moving traditional stitching algorithms to a battery-constrained device requires at least a three-orders-of-magnitude reduction in power. We believe that close integration of the capture and compute stages will lead to reduced overall system power.

We approach the problem by building a hardware prototype and characterizing the end-to-end system bottlenecks in power and performance. The prototype has six IMX274 cameras and uses an Nvidia Jetson TX2 development board for capture and computation. We found that capture is bottlenecked by sensor power and data rates across interfaces, whereas compute is limited by the total number of computations per frame. Our characterization shows that redundant capture and redundant computation lead to high power, a huge memory footprint, and high latency. Existing systems lack hardware-software co-design, leading to excessive data transfers across interfaces and expensive computations within the individual subsystems. Finally, we propose mechanisms to optimize the system for low power and low latency. We emphasize the importance of co-designing the different subsystems to reduce and reuse data. For example, reusing the motion vectors of the ISP stage reduces the memory footprint of the stereo correspondence stage. Our estimates show that pipelining and parallelization on a custom FPGA can achieve real-time stitching.
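
A back-of-the-envelope calculation suggests why offloading raw feeds is impractical; the resolution, frame rate, and bit depth below are assumed illustrative figures, not measured values from the prototype.

```python
# Back-of-the-envelope arithmetic with assumed figures (4K, 30 fps, 10-bit raw),
# not measured values, for the aggregate raw data rate of six cameras.
cams, width, height, fps, bits_per_px = 6, 3840, 2160, 30, 10
raw_gbps = cams * width * height * fps * bits_per_px / 1e9
print(f"aggregate raw data rate: {raw_gbps:.1f} Gbit/s")  # ~14.9 Gbit/s
```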
Date Created
2018
Agent

Portable and Low-Cost Detection Platform for Hepatitis B Virus Infections

Description
Approximately 248 million people in the world are currently living with chronic Hepatitis B virus (HBV) infection. HBV and HCV infections are the primary cause of liver diseases such as cirrhosis and hepatocellular carcinoma worldwide, with an estimated 1.4 million deaths annually. HBV in the Republic of Peru was used as a case study of an emerging and rapidly spreading disease in a developing nation, where clinical diagnosis of HBV infections in at-risk communities such as the Amazon Region and the Andes Mountains is challenging for a myriad of reasons. The high price of clinical diagnosis and limited access to treatment are the most significant deterrents for individuals living in at-risk communities seeking much-needed help. Additionally, limited testing facilities, the lack of adequate testing policies or national guidelines, poor laboratory capacity, resource-limited settings, geographical isolation, and public mistrust are among the chief reasons for low HBV testing. Preventative vaccination programs deployed by Peruvian health officials have, however, reduced the number of infected individuals by year and region. To significantly reduce or eradicate HBV in hyperendemic areas and countries such as Peru, preventative clinical diagnosis and vaccination programs are an absolute necessity. Consequently, the need for a portable, low-priced diagnostic platform for the detection of HBV and other diseases is substantial and urgent, not only in Peru but worldwide. Some of these concerns were addressed by designing a low-cost, rapid detection platform. An immunosignature technology (IMST) slide, used to test for reactivity against antibodies present in a serum sample, was used to evaluate picture resolution and clarity. IMST slides were scanned using a smartphone camera placed on top of the designed device, which houses a circuit of 32 LEDs at 647 nm, a 15X optical magnifier, and a linear polarizing film sheet. Two 9V batteries powered the scanning device's LED circuit, ensuring sufficient lighting. The resulting pictures from the first prototype showed that, by lighting the device at 647 nm and using a smartphone camera, high-resolution images could be captured. These results indicate that, with any modern smartphone camera, a small box lit at 647 nm, and an optical magnifier, a powerful and expensive laboratory scanning machine can be replaced by one that is inexpensive, portable, and ready to use anywhere.
Date Created
2018-05
Agent