Description
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical
systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical
devices, is the presence of machine learning components, for which formal properties are
difficult to establish. In addition, interactions among operational components, the inclusion
of humans in the loop, and environmental changes give rise to a myriad of safety concerns,
not all of which can be comprehensively tested before deployment; some may not even be
detected during the design and testing phases. This dissertation identifies major challenges
in the safety verification of AI-enabled safety-critical systems and addresses the safety
problem by proposing an operational safety verification technique that relies on solving
the following subproblems:
1. Given input/output operational traces collected from sensors and actuators, automatically
learn a hybrid automaton (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning hybrid automaton models from
time-series traces collected during real-world operation of AI-enabled CPS, for both linear
and nonlinear systems. The learned model allows operational safety to be stringently
evaluated by comparing it against a reference specification model of the system. The
proposed techniques are evaluated on an artificial pancreas control system.
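To illustrate subproblem 1, below is a minimal Python sketch of learning per-mode linear dynamics from a single trace. It assumes one-dimensional linear dynamics x[t+1] = a*x[t] + b*u[t] within each mode and a crude residual-based change-point rule; the threshold, minimum segment length, and trace format are illustrative assumptions, not the dissertation's actual algorithm.

import numpy as np

def fit_linear(x, u):
    # Least-squares fit of x[t+1] ~ a*x[t] + b*u[t] over one segment.
    X = np.column_stack([x[:-1], u[:-1]])
    coef, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return coef  # [a, b]

def segment_trace(x, u, threshold=0.05, min_len=5):
    # Crude change-point segmentation: open a new segment (a candidate
    # HA mode) whenever the current linear fit stops predicting well.
    segments, start = [], 0
    for t in range(min_len, len(x)):
        if t - start < min_len:
            continue  # need enough points before fitting
        a, b = fit_linear(x[start:t], u[start:t])
        if abs(a * x[t - 1] + b * u[t - 1] - x[t]) > threshold:
            segments.append((start, t))
            start = t
    segments.append((start, len(x)))
    return segments

# Toy two-mode trace: the hidden dynamics switch at t = 100.
rng = np.random.default_rng(0)
u = rng.normal(size=200)
x = np.zeros(201)
for t in range(200):
    a, b = (0.9, 0.1) if t < 100 else (0.5, 0.4)  # two hidden modes
    x[t + 1] = a * x[t] + b * u[t]
print(segment_trace(x[:-1], u))  # expect a segment boundary near t = 100

In a full HA-learning pipeline, segments with similar fitted coefficients would then be clustered into modes and the observed switches turned into guarded transitions; that step is omitted here for brevity.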
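For subproblem 2, here is a minimal sketch of evaluating operational safety, assuming the learned HA has been flattened to per-mode update rules with guards and the reference specification is reduced to a safe-range invariant (e.g., blood glucose within 70-180 mg/dL for an artificial pancreas). All guards, coefficients, and bounds below are illustrative assumptions, not the dissertation's actual models.

SAFE_LOW, SAFE_HIGH = 70.0, 180.0   # reference safety envelope (mg/dL)

# Learned HA flattened to (guard, a, b) per mode: x[t+1] = a*x[t] + b*u[t].
LEARNED_MODES = [
    (lambda x: x >= 140.0, 0.97, -0.8),  # hypothetical "deliver insulin" mode
    (lambda x: x < 140.0, 1.01, -0.2),   # hypothetical "basal" mode
]

def step(x, u):
    # One discrete step of the learned hybrid automaton.
    for guard, a, b in LEARNED_MODES:
        if guard(x):
            return a * x + b * u
    raise ValueError("no mode guard enabled")

def check_safety(x0, inputs):
    # Simulate the learned model on recorded inputs and report the
    # first violation of the reference invariant, if any.
    x = x0
    for t, u in enumerate(inputs):
        x = step(x, u)
        if not (SAFE_LOW <= x <= SAFE_HIGH):
            return f"violation at step {t}: x = {x:.1f}"
    return "no violation on this trace"

print(check_safety(x0=150.0, inputs=[1.0] * 50))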
Details
Title
- Operational Safety Verification of AI-Enabled Cyber-Physical Systems
Contributors
- Lamrani, Imane (Author)
- Gupta, Sandeep K.S. (Thesis advisor)
- Banerjee, Ayan (Committee member)
- Zhang, Yi (Committee member)
- Runger, George C. (Committee member)
- Rodriguez, Armando (Committee member)
- Arizona State University (Publisher)
Date Created
2020
Note
- Doctoral Dissertation, Computer Science, 2020