Securing Input Data of Deep Learning Inference Systems via Partitioned Enclave Execution

07/03/2018
by Zhongshu Gu, et al.

Deep learning systems have been widely deployed as backend engines of artificial intelligence (AI) services for their near-human performance on cognitive tasks. However, end users often have concerns about the confidentiality of the input data they provision, even to reputable AI service providers. Accidental disclosures of sensitive user data can occur due to security breaches, exploited vulnerabilities, neglect, or insiders. In this paper, we systematically investigate the potential information exposure in deep-learning-based AI inference systems. Based on our observations, we develop DeepEnclave, a privacy-enhancing system that mitigates sensitive information disclosure in deep learning inference pipelines. The key innovation is to partition deep learning models and leverage secure enclave techniques on cloud infrastructures to cryptographically protect the confidentiality and integrity of user inputs. We formulate the information exposure problem as a reconstruction privacy attack and quantify the adversary's capabilities under different attack strategies. Our comprehensive security analysis and performance measurements can guide end users in choosing how to partition their deep neural networks, thereby achieving the maximum privacy guarantee with acceptable performance overhead.
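To make the partitioning idea concrete, below is a minimal sketch of splitting an inference pipeline into a front portion that would run inside a secure enclave on the plaintext input and a back portion that runs on untrusted infrastructure and only sees intermediate features. This is not the authors' code: the class names (FrontNet, BackNet), layer shapes, and the choice of partition point are hypothetical and purely illustrative; the actual enclave attestation and encrypted I/O machinery of DeepEnclave is omitted.

```python
# Illustrative sketch (not DeepEnclave's implementation): partition a small CNN
# so that only the FrontNet's intermediate output, never the raw user input,
# crosses the enclave boundary. Layer shapes and the split point are assumptions.
import torch
import torch.nn as nn

class FrontNet(nn.Module):
    """Early layers; would execute inside the secure enclave on plaintext input."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.layers(x)

class BackNet(nn.Module):
    """Remaining layers; would run outside the enclave on intermediate features."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.layers(x)

front, back = FrontNet(), BackNet()
user_input = torch.randn(1, 3, 32, 32)   # sensitive input, visible only inside the enclave
intermediate = front(user_input)         # computed within the enclave
prediction = back(intermediate)          # computed on untrusted cloud infrastructure
```

The deeper the partition point, the harder it is for an adversary observing the intermediate representation to reconstruct the original input, at the cost of more computation inside the resource-constrained enclave; this trade-off is what the paper's security analysis and performance measurements are meant to help users navigate.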
