Reasoning About Safety-Critical Information Flow Between Pilot and Computer

12/26/2018
by Seth Ahrenbach, et al.

This paper develops a dynamic logic for reasoning about safety-critical information flow among humans and computers. The logic advances previous efforts to develop logics of agent knowledge, which make assumptions that are too strong for realistic human agents. We introduce Dynamic Agent Safety Logic (DASL), based on Dynamic Epistemic Logic (DEL), with extensions to account for safe actions, belief, and the logical relationships among knowledge, belief, and safe action. With this logic we can infer which safety-critical information a pilot is missing when executing an unsafe action. We apply the logic to the Air France 447 incident as a case study and provide a mechanization of it in the Coq proof assistant.
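The abstract's core inference pattern, that safe action presupposes the relevant knowledge, so an unsafe action signals missing information, can be illustrated with a minimal Coq sketch. The declarations below are hypothetical and do not reproduce the paper's actual DASL mechanization; the modality names K, B, and S and the bridge axioms are assumptions chosen for illustration only.

```coq
(* Hypothetical sketch: an abstract signature for knowledge (K), belief (B),
   and safe action (S) modalities, with illustrative bridge axioms.  This is
   not the paper's mechanization, only a sketch of the reasoning pattern. *)

Parameter agent : Type.
Parameter prop  : Type.          (* object-level propositions *)
Parameter holds : prop -> Prop.  (* truth of an object-level proposition *)

Parameter K : agent -> prop -> prop.  (* agent knows p *)
Parameter B : agent -> prop -> prop.  (* agent believes p *)
Parameter S : agent -> prop -> prop.  (* agent safely performs the action described by p *)

(* Assumed bridge principles relating the three modalities. *)
Axiom knowledge_implies_belief :
  forall (a : agent) (p : prop), holds (K a p) -> holds (B a p).

Axiom safe_action_requires_knowledge :
  forall (a : agent) (p : prop), holds (S a p) -> holds (K a p).

(* Contrapositive used to diagnose missing information: an agent lacking the
   relevant knowledge cannot be performing the action safely. *)
Theorem unsafe_from_ignorance :
  forall (a : agent) (p : prop), ~ holds (K a p) -> ~ holds (S a p).
Proof.
  intros a p Hnk Hs.
  apply Hnk.
  apply (safe_action_requires_knowledge a p).
  exact Hs.
Qed.
```

Read in the other direction, the same axiom supports the diagnosis described in the abstract: if a pilot's action was not safe, at least one of the knowledge preconditions must have failed, which identifies the safety-critical information the pilot was missing.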
