Solving the Rubik's Cube Without Human Knowledge

05/18/2018
by   Stephen McAleer, et al.

A generally intelligent agent must be able to teach itself how to solve problems in complex domains with minimal human supervision. Recently, deep reinforcement learning algorithms combined with self-play have achieved superhuman proficiency in Go, Chess, and Shogi without human data or domain knowledge. In those environments, a reward is always received at the end of the game. For many combinatorial optimization environments, however, rewards are sparse and episodes are not guaranteed to terminate. We introduce Autodidactic Iteration: a novel reinforcement learning algorithm that is able to teach itself how to solve the Rubik's Cube with no human assistance. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves -- less than or equal to solvers that employ human domain knowledge.
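The core idea behind Autodidactic Iteration can be sketched in miniature: generate training states by scrambling outward from the solved state, then regress a value estimate toward a one-step-lookahead target. The sketch below is an assumption-laden simplification, not the paper's implementation: it uses a toy 12-state cyclic puzzle in place of the Rubik's Cube, a tabular value function in place of the deep neural network, and greedy one-step lookahead in place of the paper's Monte Carlo tree search. All names (`adi_iteration`, `greedy_solve`, etc.) are illustrative.

```python
import random

N = 12               # toy cyclic puzzle: states 0..11, state 0 is "solved"
MOVES = (+1, -1)     # the two available moves rotate the state by one step

def apply_move(state, move):
    return (state + move) % N

def is_solved(state):
    return state == 0

# Tabular value function; the paper uses a deep neural network instead.
V = {s: 0.0 for s in range(N)}

def adi_iteration(scramble_depth=12, batches=200, lr=0.5):
    """One round of (simplified) Autodidactic Iteration:
    sample states by scrambling from the solved state, then move V(s)
    toward the best one-step-lookahead target over all moves."""
    for _ in range(batches):
        # Generate a training state via a random scramble from solved.
        s = 0
        for _ in range(random.randint(1, scramble_depth)):
            s = apply_move(s, random.choice(MOVES))
        # Target: max over moves of (reward + value of resulting state),
        # with reward +1 for reaching the solved state and -1 otherwise.
        target = max(
            (1.0 if is_solved(apply_move(s, m)) else -1.0) + V[apply_move(s, m)]
            for m in MOVES
        )
        V[s] += lr * (target - V[s])

def greedy_solve(state, max_steps=2 * N):
    """Greedy one-step-lookahead solver built on the learned values
    (the paper uses MCTS here). Returns the visited states, or None."""
    path = []
    for _ in range(max_steps):
        if is_solved(state):
            return path
        best = max(
            MOVES,
            key=lambda m: (1.0 if is_solved(apply_move(state, m)) else -1.0)
                          + V[apply_move(state, m)],
        )
        state = apply_move(state, best)
        path.append(state)
    return None
```

After a few rounds of `adi_iteration`, values propagate outward from the solved state, and `greedy_solve` follows them back; the same generate-from-solved trick is what lets the method sidestep the sparse-reward problem described in the abstract.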
