Image Fusion with Quantum Processing for Remote Sensing Applications
Description
This thesis presents algorithms, simulations, and results using machine learning, image fusion, and quantum feature extraction for radar and remote sensing applications.
Previous efforts in the classification of synthetic aperture radar (SAR) images using quantum machine learning provided encouraging results but only modest classification accuracy. In this thesis, a novel quantum-processed image fusion technique is used to identify and classify scenes obtained from C-band SAR and optical images. More specifically, a four-qubit quantum circuit is designed to extract features from the SAR image dataset. This method enhances spectral details that are otherwise not visible in the raw SAR dataset.
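As a rough illustration of how a small quantum circuit can serve as a feature extractor for SAR imagery, the sketch below encodes 2x2 image patches on four qubits, entangles them, and reads out Pauli-Z expectation values as features. The framework (PennyLane), the angle-encoding scheme, and the patch size are assumptions made for illustration only; the circuit designed in the thesis is not reproduced here.

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def feature_circuit(pixels):
    # Encode four pixel intensities (assumed normalized to [0, 1]) as rotations.
    for i in range(n_qubits):
        qml.RY(np.pi * pixels[i], wires=i)
    # Entangle neighbouring qubits so the measured features mix local intensities.
    for i in range(n_qubits):
        qml.CNOT(wires=[i, (i + 1) % n_qubits])
    # Pauli-Z expectation values serve as the extracted quantum features.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def extract_features(sar_image):
    """Map each non-overlapping 2x2 patch of a SAR image to four quantum features."""
    h, w = sar_image.shape
    feats = np.zeros((h // 2, w // 2, n_qubits))
    for r in range(0, h - h % 2, 2):
        for c in range(0, w - w % 2, 2):
            patch = sar_image[r:r + 2, c:c + 2].flatten()
            feats[r // 2, c // 2] = np.array(feature_circuit(patch))
    return feats
```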
In addition to the quantum circuit used for feature extraction, deep neural networks (NNs) are used for scene classification. The Visual Geometry Group 16 (VGG16) network, a convolutional neural network that is sixteen layers deep, is customized and used as the scene classifier.
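A customized VGG16 classifier of the kind described above might be assembled as in the following sketch. The pretrained ImageNet weights, the frozen convolutional base, the dense-head sizes, and the class count are illustrative assumptions rather than the configuration used in the thesis.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_classifier(num_classes=10, input_shape=(224, 224, 3)):
    # Load the 16-layer convolutional base pretrained on ImageNet, without its dense head.
    base = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # train only the new classification head

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```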
In addition, an alternative dataset is explored, leading to a multi-label classification problem. Two additional quantum-processed fusion models are developed, a new 12-qubit quantum circuit is designed, and a noise model is used.
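One simple way to add noise to a 12-qubit simulation is sketched below using a density-matrix simulator with depolarizing channels after each gate. The depolarizing-noise choice and the error rate are assumptions for illustration; the noise model used in the thesis is not specified in this description.

```python
import pennylane as qml

n_qubits = 12
dev = qml.device("default.mixed", wires=n_qubits)  # density-matrix simulator

@qml.qnode(dev)
def noisy_circuit(angles, p_error=0.01):
    for i in range(n_qubits):
        qml.RY(angles[i], wires=i)
        # Apply a depolarizing channel after each gate to mimic hardware noise.
        qml.DepolarizingChannel(p_error, wires=i)
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
        qml.DepolarizingChannel(p_error, wires=i + 1)
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]
```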
For scene classification, two classical neural networks are deployed, specifically the VGG16 and Residual Network (ResNet50) architectures. Furthermore, the F1 score, F2 score, precision, and recall are used to evaluate the results.
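These metrics can be computed as in the following sketch; the use of scikit-learn and the micro-averaging choice for the multi-label case are assumptions for illustration.

```python
from sklearn.metrics import fbeta_score, precision_score, recall_score

def evaluate(y_true, y_pred):
    # y_true, y_pred: binary indicator arrays of shape (n_samples, n_labels).
    return {
        "precision": precision_score(y_true, y_pred, average="micro"),
        "recall": recall_score(y_true, y_pred, average="micro"),
        "f1": fbeta_score(y_true, y_pred, beta=1, average="micro"),
        "f2": fbeta_score(y_true, y_pred, beta=2, average="micro"),
    }
```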
Overall, the merits of quantum fusion are highlighted, including enhanced classification accuracy and reduced storage requirements. Additionally, the promising improvements in overall system performance and the potential to decrease size, weight, power, and cost (SWaP-C) are discussed.