Representing Knowledge as Predictions (and State as Knowledge)
This paper shows how a single mechanism allows knowledge to be constructed layer by layer directly from an agent's raw sensorimotor stream. This mechanism, the General Value Function (GVF) or "forecast," captures high-level, abstract knowledge as a set of predictions about existing features and knowledge, grounded exclusively in the agent's low-level senses and actions. Forecasts thus provide a representation for organizing raw sensorimotor data into useful abstractions across an unlimited number of layers, a long-sought goal of AI and cognitive science. The heart of this paper is a detailed thought experiment: a concrete, step-by-step formal illustration of how an artificial agent can build true, useful, abstract knowledge from its raw sensorimotor experience alone. The knowledge is represented as a set of layered predictions (forecasts) about the observed consequences of the agent's actions. The illustration spans twelve separate layers: the lowest consists of raw pixels, touch and force sensors, and a small number of actions; the higher layers increase in abstraction, eventually yielding rich knowledge about the agent's world, corresponding roughly to doorways, walls, rooms, and floor plans. I then argue that this general mechanism may allow the representation of a broad spectrum of everyday human knowledge.
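The abstract describes forecasts (GVFs) as layered predictions grounded in raw sensorimotor signals, with each layer's forecasts serving as input features for the next. The sketch below illustrates that general idea only; it assumes a linear TD(0) learner, and the names (Forecast, continuation, step_size) and the two-layer setup are illustrative assumptions, not the paper's formulation.

```python
import numpy as np


class Forecast:
    """A minimal GVF-style forecast learned with linear TD(0).

    A forecast predicts the discounted sum of a 'cumulant' signal.
    Its prediction is a linear function of the current feature vector,
    so a forecast at one layer can serve as an input feature to
    forecasts at the next layer.  (Illustrative sketch, not the
    paper's exact formulation.)
    """

    def __init__(self, num_features, step_size=0.1, continuation=0.9):
        self.w = np.zeros(num_features)   # learned weights
        self.alpha = step_size            # TD step size
        self.gamma = continuation         # per-step continuation (discount)

    def predict(self, features):
        return float(np.dot(self.w, features))

    def update(self, features, cumulant, next_features):
        """One TD(0) step toward the forecast's bootstrapped target."""
        target = cumulant + self.gamma * self.predict(next_features)
        td_error = target - self.predict(features)
        self.w += self.alpha * td_error * features


# Layering: outputs of lower-level forecasts become the input features
# of a higher-level forecast (hypothetical two-layer setup with random
# data standing in for the sensorimotor stream).
rng = np.random.default_rng(0)
low = [Forecast(num_features=4) for _ in range(3)]
high = Forecast(num_features=3)

x = rng.random(4)                          # raw sensorimotor features
for _ in range(100):
    x_next = rng.random(4)                 # next raw observation
    for f in low:                          # layer 1: predict a raw signal
        f.update(x, cumulant=x_next[0], next_features=x_next)
    z = np.array([f.predict(x) for f in low])            # layer-2 inputs
    z_next = np.array([f.predict(x_next) for f in low])
    high.update(z, cumulant=z_next[0], next_features=z_next)  # layer 2
    x = x_next
```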