Consequences of Misaligned AI

02/07/2021
by Simon Zhuang, et al.

AI systems often rely on two key components: a specified goal or reward function and an optimization algorithm that computes the optimal behavior for that goal. This approach is intended to provide value for a principal: the user on whose behalf the agent acts. The objectives given to these agents, however, are often only a partial specification of the principal's goals. We consider the cost of this incompleteness by analyzing a model of a principal and an agent in a resource-constrained world, where the L attributes of the state correspond to different sources of utility for the principal. We assume that the reward function given to the agent only has support on J < L attributes. The contributions of our paper are as follows: 1) we propose a novel model of an incomplete principal-agent problem from artificial intelligence; 2) we provide necessary and sufficient conditions under which indefinitely optimizing any incomplete proxy objective leads to arbitrarily low overall utility; and 3) we show how modifying the setup to allow reward functions that reference the full state, or allowing the principal to update the proxy objective over time, can lead to higher-utility solutions. The results in this paper argue that we should view the design of reward functions as an interactive and dynamic process, and they identify a theoretical scenario where some degree of interactivity is desirable.
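The failure mode described in contribution 2) can be illustrated with a minimal sketch (not the paper's exact formalism): a resource-constrained world whose L attributes each yield diminishing-returns (log) utility, and a proxy objective that only references the first J < L attributes. All names and parameter values below are illustrative assumptions.

```python
# Hypothetical sketch of an incomplete proxy objective in a
# resource-constrained world. Each of the L attributes contributes
# log utility to the principal; the agent's proxy only "sees" the
# first J attributes. Parameter values are arbitrary.
import math

L, J, BUDGET = 4, 2, 10.0  # total attributes, proxy attributes, resource budget

def total_utility(s):
    """Principal's true utility: sum of log utilities over all L attributes."""
    return sum(math.log(x) for x in s)

def proxy_utility(s):
    """Agent's proxy objective: support only on the first J attributes."""
    return sum(math.log(x) for x in s[:J])

def allocation(eps):
    """Leave `eps` resources on each unreferenced attribute and split
    the remainder evenly across the J proxy attributes."""
    proxy_share = (BUDGET - (L - J) * eps) / J
    return [proxy_share] * J + [eps] * (L - J)

# As the agent optimizes the proxy harder, it starves the unreferenced
# attributes (eps -> 0): proxy utility keeps rising, while the
# principal's true utility falls without bound.
for eps in [1.0, 0.1, 0.01, 1e-6]:
    s = allocation(eps)
    print(f"eps={eps:<8g} proxy={proxy_utility(s):8.3f} total={total_utility(s):9.3f}")
```

With log utilities the unreferenced attributes make total utility arbitrarily negative as they are driven to zero, which is the sense in which indefinite optimization of an incomplete proxy yields arbitrarily low overall utility.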


