Full metadata
Title
Operational Safety Verification of AI-Enabled Cyber-Physical Systems
Description
One of the main challenges in testing artificial intelligence (AI) enabled cyber-physical systems (CPS), such as autonomous driving systems and internet-of-things (IoT) medical devices, is the presence of machine learning components, for which formal properties are difficult to establish. In addition, interactions among operational components, the inclusion of humans in the loop, and environmental changes give rise to a myriad of safety concerns, not all of which can be comprehensively tested before deployment, and some of which may not even have been detected during the design and testing phases. This dissertation identifies major challenges in the safety verification of AI-enabled safety-critical systems and addresses the safety problem by proposing an operational safety verification technique that relies on solving the following subproblems:
1. Given input/output operational traces collected from sensors/actuators, automatically learn a hybrid automaton (HA) representation of the AI-enabled CPS.
2. Given the learned HA, evaluate the operational safety of the AI-enabled CPS in the field.
This dissertation presents novel approaches for learning hybrid automata models from time-series traces collected from the operation of AI-enabled CPS in the real world, for both linear and nonlinear CPS. The learned model allows operational safety to be stringently evaluated by comparing it against a reference specification model of the system. The proposed techniques are evaluated on an artificial pancreas control system.
Date Created
2020
Contributors
- Lamrani, Imane (Author)
- Gupta, Sandeep Ks (Thesis advisor)
- Banerjee, Ayan (Committee member)
- Zhang, Yi (Committee member)
- Runger, George C. (Committee member)
- Rodriguez, Armando (Committee member)
- Arizona State University (Publisher)
Topical Subject
Resource Type
Extent
98 pages
Language
eng
Copyright Statement
In Copyright
Primary Member of
Peer-reviewed
No
Open Access
No
Handle
https://hdl.handle.net/2286/R.I.62933
Level of coding
minimal
Note
Doctoral Dissertation Computer Science 2020
System Created
- 2021-01-14 09:14:49
System Modified
- 2021-08-26 09:47:01
Additional Formats