Towards Quantification of Assurance for Learning-enabled Components
High-level functions such as perception, localization, planning, and control, often organized in a so-called pipeline, are amongst the core building blocks of modern autonomous (ground, air, and underwater) vehicle architectures. These functions are increasingly being implemented using learning-enabled components (LECs), i.e., (software) components leveraging knowledge acquisition and learning processes such as deep learning. Providing quantified component-level assurance as part of a wider (dynamic) assurance case can be useful in supporting both pre-operational approval of LECs (e.g., by regulators) and runtime hazard mitigation, e.g., using assurance-based failover configurations. This paper develops a notion of assurance for LECs based on i) identifying the relevant dependability attributes, and ii) quantifying those attributes and the associated uncertainty, using probabilistic techniques. We give a practical grounding for our work using an example from the aviation domain: an autonomous taxiing capability for an unmanned aircraft system (UAS), focusing on the application of LECs as sensors in the perception function. We identify the applicable quantitative measures of assurance, and characterize the associated uncertainty using a non-parametric Bayesian approach, namely Gaussian process regression. We additionally discuss the relevance and contribution of LEC assurance to system-level assurance, the generalizability of our approach, and the associated challenges.
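To illustrate the kind of uncertainty characterization the abstract refers to, the following is a minimal sketch of Gaussian process regression applied to a quantitative assurance measure, using scikit-learn. The data, variable names, and the specific measure (lateral position error of a perception LEC over an operating condition) are hypothetical and purely illustrative; they do not reproduce the paper's actual measures, data, or model choices.

```python
# Illustrative sketch only: GP regression to estimate an assurance-related
# quantity together with its uncertainty. All data here are synthetic and
# the measure shown is hypothetical.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Hypothetical observations: an operating condition (e.g., distance along a
# taxiway) versus a measured quantity (e.g., lateral position error).
rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 100.0, size=30).reshape(-1, 1)
y_obs = 0.5 * np.sin(x_obs / 10.0).ravel() + rng.normal(0.0, 0.1, size=30)

# Non-parametric Bayesian model: RBF kernel plus a white-noise term,
# with hyperparameters fit by maximizing the marginal likelihood.
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(x_obs, y_obs)

# The posterior mean gives a point estimate of the measure, and the posterior
# standard deviation quantifies the associated uncertainty at each query point.
x_query = np.linspace(0.0, 100.0, 200).reshape(-1, 1)
mean, std = gpr.predict(x_query, return_std=True)
print(mean[:3], std[:3])
```

In this setup, the posterior predictive distribution (mean plus standard deviation band) is what would feed a quantified, uncertainty-aware assurance claim about the component; the specific kernels and data used in the paper may differ.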