EPIK: Eliminating multi-model Pipelines with Knowledge-distillation

11/27/2022
by Bhavesh Laddagiri, et al.

Real-world tasks are often solved with multi-model pipelines in which each model performs a sub-task in a larger chain and the output of one model is used as input to the next. For example, the MATra model performs crosslingual transliteration in two stages, using English as an intermediate transliteration target when transliterating between two Indic languages. We propose a novel distillation technique, EPIK, that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. Because the student is trained on the pipeline's final outputs, this method can create end-to-end models for tasks without needing a dedicated end-to-end dataset, solving the data scarcity problem. The EPIK model has been distilled from the MATra model using this knowledge distillation technique. The MATra model can perform crosslingual transliteration between five languages: English, Hindi, Tamil, Kannada and Bengali. The EPIK model performs transliteration without producing any intermediate English output while retaining the performance and accuracy of the MATra model. EPIK achieves an average CER score of 0.015 and an average phonetic accuracy of 92.1%, reduces execution time by 54.3%, and attains a similarity score of 97.5% with respect to the MATra model's outputs. Notably, the EPIK model (student model) can outperform the MATra model (teacher model) even though it was distilled from the MATra model.
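The core idea above, training a single student model to reproduce the final output of a frozen two-stage teacher pipeline, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the authors' EPIK implementation: the toy GRU seq2seq modules, the hard-label (argmax) distillation targets, and the hyperparameters are all placeholders, and the paper's actual architecture and loss may differ.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Toy character-level model standing in for one transliteration stage."""
    def __init__(self, vocab_size=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):                     # x: (batch, seq_len) token ids
        h, _ = self.rnn(self.embed(x))
        return self.head(h)                   # (batch, seq_len, vocab) logits

# Frozen teacher pipeline: source language -> English -> target language.
teacher_src_to_en = TinySeq2Seq().eval()
teacher_en_to_tgt = TinySeq2Seq().eval()

# Student: a single end-to-end model distilled to mimic the pipeline's output.
student = TinySeq2Seq()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def distill_step(src_batch):
    """One hard-label distillation step on an unlabeled source-language batch."""
    with torch.no_grad():
        en_ids = teacher_src_to_en(src_batch).argmax(dim=-1)   # intermediate English
        tgt_ids = teacher_en_to_tgt(en_ids).argmax(dim=-1)     # pipeline's final output
    logits = student(src_batch)                                # student skips the English stage
    loss = criterion(logits.reshape(-1, logits.size(-1)), tgt_ids.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: distill on a random batch of 8 sequences of length 20.
print(distill_step(torch.randint(0, 64, (8, 20))))
```

In this setup the student never needs a parallel source-to-target dataset; unlabeled source-language text plus the frozen pipeline is enough, which mirrors the abstract's claim about avoiding a dedicated end-to-end dataset.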
