Multi-view Information Bottleneck Without Variational Approximation

04/22/2022
by Qi Zhang, et al.

By "intelligently" fusing the complementary information across different views, multi-view learning is able to improve the performance of classification tasks. In this work, we extend the information bottleneck principle to a supervised multi-view learning scenario and use the recently proposed matrix-based Rényi's α-order entropy functional to optimize the resulting objective directly, without the necessity of variational approximation or adversarial training. Empirical results in both synthetic and real-world datasets suggest that our method enjoys improved robustness to noise and redundant information in each view, especially given limited training samples. Code is available at <https://github.com/archy666/MEIB>.
