Meta Discovery: Learning to Discover Novel Classes given Very Limited Data
In learning to discover novel classes (L2DNC), we are given labeled data from seen classes and unlabeled data from unseen classes, and we need to train clustering models for the unseen classes. Since L2DNC is a new problem, its application scenario and implicit assumption are unclear. In this paper, we analyze and improve it by linking it to meta-learning: although there are no meta-training and meta-test phases, the underlying assumption is exactly the same, namely that high-level semantic features are shared between the seen and unseen classes. Under this assumption, L2DNC is not only theoretically solvable but can also be solved empirically by meta-learning algorithms slightly modified to fit our proposed framework. This methodology significantly reduces the amount of unlabeled data needed for training and makes L2DNC more practical, as demonstrated in experiments. The use of very limited data is also justified by the application scenario of L2DNC: since it is unnatural to label only seen-class data, L2DNC arises from causal sampling rather than labeling; the unseen-class data should be collected incidentally while collecting seen-class data, which is why they are novel and first need to be clustered.
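To make the setting concrete, here is a minimal sketch in Python of the L2DNC data regime under the shared-feature assumption: a representation fitted only on labeled seen-class data is reused to cluster a very small unlabeled sample from unseen classes. The PCA encoder, k-means clustering, and all dataset sizes are illustrative assumptions, not the paper's meta-learning method.

```python
# Illustrative sketch of the L2DNC setting (not the paper's algorithm):
# fit a representation on labeled seen-class data, then reuse it to cluster
# a very limited amount of unlabeled unseen-class data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
dim = 32

def sample_class(center, n, noise=0.5):
    """Draw n points around a class center in a shared feature space."""
    return center + noise * rng.standard_normal((n, dim))

seen_centers = rng.standard_normal((5, dim))    # 5 seen classes (labeled, plentiful)
unseen_centers = rng.standard_normal((3, dim))  # 3 unseen classes (unlabeled, scarce)

X_seen = np.vstack([sample_class(c, 200) for c in seen_centers])
X_unseen = np.vstack([sample_class(c, 20) for c in unseen_centers])
y_unseen_true = np.repeat(np.arange(3), 20)     # ground truth, used only for evaluation

# Shared-feature assumption: the representation is learned from seen data only.
encoder = PCA(n_components=8).fit(X_seen)

# Cluster the unseen-class data in that shared representation.
pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    encoder.transform(X_unseen)
)
print("ARI on unseen classes:", adjusted_rand_score(y_unseen_true, pred))
```

Because the unseen-class sample is tiny (20 points per class here), the quality of the shared representation, rather than the amount of unlabeled data, is what determines clustering performance, which is the regime the abstract argues for.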