University of Notre Dame

Image Classification Under Noise and Adversarial Attacks: Spatio-Temporal and Robust Designs

Thesis, posted on 2022-06-11, authored by Bernardo Aquino

Machine learning is one of the most successful branches of artificial intelligence, with applications ranging from medicine to control systems and computer vision. The growth in computational power and data availability over the last 40 years has produced many new algorithms, along with new challenges such as feature selection and optimal data selection. In this dissertation, we study image classification in two settings: optical applications, where we embed the physical properties of the measurement models into the algorithms, and the robustness of neural networks against adversarial attacks, which we analyze using incremental dissipativity.
First, we consider the problem of minimizing the number of wavelengths used, and therefore the measurement time, in spectroscopy experiments. We propose a wavelength-selection algorithm for substance detection and classification of spectral data in overdetermined systems, that is, when the number of wavelengths exceeds the number of substances to be detected. This setting is common with low signal-to-noise-ratio portable sensors. The method is based on cardinality-penalized constrained multivariate regression and is evaluated experimentally.
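As a rough illustration of the idea only (not the dissertation's exact formulation), the sketch below selects wavelengths greedily for a linear mixing model y ≈ S c, trading the least-squares fit against the number of wavelengths retained. The function name, the greedy strategy, the penalty value, and the toy data are all assumptions made for illustration.

```python
# Illustrative sketch: greedy wavelength (row) selection for an overdetermined
# spectral model y = S @ c, where S (n_wavelengths x n_substances) is a spectral
# library and c the substance concentrations. The dissertation's actual method is
# a cardinality-penalizing constrained multivariate regression; this is only a
# simplified stand-in showing the fit-versus-cardinality trade-off.
import numpy as np

def greedy_wavelength_selection(S, y, penalty=0.05, max_wavelengths=None):
    """Pick a subset of wavelength rows that keeps the fit of y against S @ c
    accurate while penalizing each added wavelength."""
    n_wl, _ = S.shape
    max_wavelengths = max_wavelengths or n_wl
    selected, best_cost = [], np.inf
    remaining = list(range(n_wl))
    while remaining and len(selected) < max_wavelengths:
        costs = []
        for i in remaining:
            rows = selected + [i]
            # Fit concentrations from the candidate wavelength subset only.
            c, *_ = np.linalg.lstsq(S[rows], y[rows], rcond=None)
            resid = np.linalg.norm(y - S @ c)  # evaluate fit on all wavelengths
            costs.append((resid + penalty * len(rows), i))
        cost, i_best = min(costs)
        if cost >= best_cost:  # adding another wavelength no longer pays off
            break
        best_cost = cost
        selected.append(i_best)
        remaining.remove(i_best)
    return selected

# Toy usage: 3 substances measured at 50 wavelengths, noisy mixture.
rng = np.random.default_rng(0)
S = np.abs(rng.normal(size=(50, 3)))
c_true = np.array([0.2, 0.5, 0.3])
y = S @ c_true + 0.01 * rng.normal(size=50)
print(greedy_wavelength_selection(S, y))
```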
Second, we analyze substance classification with a mid-infrared (MIR) laser and sensors with a low signal-to-noise ratio (SNR). We propose and demonstrate a statistical method that classifies spectral data from MIR imaging spectroscopy experiments using few wavelengths and inexpensive detector arrays while still achieving high accuracy, by leveraging the spatial structure of the data and compensating for thermal changes in the samples. The method can classify from as few as a single measurement and permits the use of low-SNR sensors.
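One simple way to exploit spatial structure in a low-SNR spectral image, sketched below for illustration, is to score each pixel against reference class spectra and then pool the scores over a spatial neighborhood after removing a crude per-image baseline. The nearest-mean rule, the window size, and the baseline model are assumptions for this sketch, not the dissertation's method.

```python
# Illustrative sketch: per-pixel nearest-mean scores, pooled spatially, with a
# per-image baseline subtraction standing in for thermal-drift compensation.
import numpy as np
from scipy.ndimage import uniform_filter

def classify_spectral_image(cube, class_means, window=5):
    """cube: (H, W, n_wavelengths) spectral image.
    class_means: (n_classes, n_wavelengths) baseline-corrected reference spectra."""
    baseline = cube.mean(axis=(0, 1), keepdims=True)   # crude thermal baseline
    X = cube - baseline
    # Negative squared distance to each class mean, computed per pixel.
    scores = -((X[..., None, :] - class_means) ** 2).sum(axis=-1)  # (H, W, n_classes)
    # Spatial averaging of the scores pools evidence from neighbouring pixels,
    # which suppresses per-pixel sensor noise.
    smoothed = np.stack(
        [uniform_filter(scores[..., k], size=window) for k in range(scores.shape[-1])],
        axis=-1,
    )
    return smoothed.argmax(axis=-1)  # per-pixel class labels
```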
Finally, we address the degradation of neural-network classification performance caused by adversarial examples. Empirical methods for promoting robustness to such examples have been proposed, but they often lack analytical insight and formal guarantees. Recently, robustness certificates based on system-theoretic notions have appeared in the literature. This work proposes an incremental-dissipativity-based robustness certificate for neural networks, expressed as a linear matrix inequality for each layer. We also derive a sufficient spectral-norm bound for this certificate that scales to neural networks with many layers. We demonstrate improved performance against adversarial attacks on a feed-forward neural network trained on MNIST and an AlexNet trained on CIFAR-10.
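The dissertation's certificate is a per-layer incremental dissipativity LMI; as a hedged illustration of why spectral-norm conditions on the weights matter for robustness, the sketch below computes only the well-known coarser bound that the product of layer spectral norms upper-bounds the Lipschitz constant of a feed-forward network with 1-Lipschitz activations (e.g., ReLU). The layer sizes and weight scales are toy assumptions.

```python
# Illustrative sketch: a smaller Lipschitz bound means a smaller worst-case
# change in the logits under an input (adversarial) perturbation.
import numpy as np

def lipschitz_upper_bound(weight_matrices):
    """Product of the largest singular values of the layer weight matrices."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weight_matrices]))

# Toy 3-layer network, 784 -> 128 -> 64 -> 10 (MNIST-sized, for illustration only).
rng = np.random.default_rng(1)
weights = [rng.normal(scale=0.05, size=s) for s in [(128, 784), (64, 128), (10, 64)]]
bound = lipschitz_upper_bound(weights)
print(f"Global Lipschitz upper bound: {bound:.3f}")
# Any input perturbation of l2-norm eps changes the logits by at most bound * eps.
```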

History

Date Modified

2022-06-30

Defense Date

2022-06-10

CIP Code

  • 14.1001

Research Director(s)

Vijay Gupta

Committee Members

Panos Antsaklis, Anthony Hoffman, Scott Howard

Degree

  • Doctor of Philosophy

Degree Level

  • Doctoral Dissertation

Alternate Identifier

1333695401

Library Record

6236502

OCLC Number

1333695401

Additional Groups

  • Electrical Engineering

Program Name

  • Electrical Engineering
