Secure Estimation under Causative Attacks
This paper considers the problem of secure parameter estimation when the estimation algorithm is prone to causative attacks. Causative attacks target decision-making algorithms in order to alter their decisions while leaving the algorithms oblivious to the attacks. Such attacks influence inference algorithms by tampering with the mechanism through which the algorithm obtains the statistical model of the population about which an inferential decision is made. Causative attacks can be mounted, for instance, by contaminating the historical or training data, or by compromising an expert who provides the model. Under a causative attack, the inference algorithm operates with a distorted statistical model for the population from which it collects data samples. This paper introduces specific notions of secure estimation and provides a framework under which secure estimation under causative attacks can be formulated. A central premise of this framework is that forming secure estimates introduces a new dimension to the estimation objective: detecting attacks and isolating the true model. Since the detection and isolation decisions are imperfect, their inclusion induces an inherent coupling between the desired secure estimation objective and the auxiliary detection and isolation decisions that must be formed in conjunction with the estimates. The paper establishes the fundamental interplay among the decisions involved and characterizes the general decision rules in closed form for any desired estimation cost function. Furthermore, to circumvent the computational complexity that grows with the parameter dimension or the attack complexity, a scalable estimation algorithm and its attendant optimality guarantees are provided. The theory developed is applied to secure parameter estimation in a sensor network.
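To make the coupling between isolation and estimation concrete, the following is a minimal Python sketch under strong simplifying assumptions that are not from the paper: a scalar Gaussian parameter, a single known attack model, and a posterior-odds rule that isolates the governing prior before forming a conditional MMSE estimate. All names and parameter values (`mu_true`, `mu_attack`, `p_attack`, etc.) are hypothetical, and this is an illustration of the general idea rather than the paper's actual decision rules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup (not from the paper): theta is drawn from the true
# prior N(mu_true, tau2), but a causative attacker may have swapped in a
# distorted prior N(mu_attack, tau2). Observations are x_i = theta + noise.
mu_true, mu_attack, tau2 = 0.0, 3.0, 1.0   # candidate prior means, prior variance
sigma2, n = 1.0, 20                        # observation noise variance, sample size
p_attack = 0.3                             # assumed prior probability of attack

def detect_and_estimate(x):
    """Jointly isolate the governing model and estimate theta.

    Marginally, xbar ~ N(mu_k, tau2 + sigma2/n) under model k, so a
    posterior-odds rule isolates the model, and the estimate is the
    MMSE estimate under the isolated prior.
    """
    xbar = x.mean()
    s2 = tau2 + sigma2 / len(x)  # marginal variance of xbar under either model

    def loglik(mu):
        # Log-likelihood of xbar under prior mean mu (constants cancel in the odds).
        return -0.5 * (xbar - mu) ** 2 / s2

    log_odds = (loglik(mu_attack) + np.log(p_attack)
                - loglik(mu_true) - np.log(1 - p_attack))
    attacked = log_odds > 0                      # isolation decision
    mu_hat = mu_attack if attacked else mu_true  # isolated prior mean

    # Conditional MMSE estimate of theta under the isolated Gaussian prior.
    w = tau2 / (tau2 + sigma2 / len(x))
    theta_hat = w * xbar + (1 - w) * mu_hat
    return attacked, theta_hat

# Example run: data generated under the true (unattacked) model.
theta = rng.normal(mu_true, np.sqrt(tau2))
x = theta + rng.normal(0.0, np.sqrt(sigma2), size=n)
attacked, theta_hat = detect_and_estimate(x)
print(f"attack isolated: {attacked}, theta = {theta:.3f}, estimate = {theta_hat:.3f}")
```

Note how the estimate inherits any error in the isolation decision: if the wrong prior is isolated, the estimator shrinks toward the wrong mean, which is the kind of coupling between detection/isolation and estimation that the paper's framework formalizes.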