Last Query Transformer RNN for knowledge tracing

02/10/2021
by SeungKee Jeon, et al.

This paper presents an efficient model for predicting a student's answer correctness given their past learning activities. The model combines a transformer encoder with an RNN to process the time-series input. Its novel point is that it uses only the last input as the query in the transformer encoder, rather than the entire sequence, which reduces the QK matrix multiplication in the encoder from O(L^2) to O(L) time complexity and therefore lets the model handle longer input sequences. Using this model, I achieved 1st place in the 'Riiid! Answer Correctness Prediction' competition hosted on Kaggle.
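To make the complexity argument concrete, here is a minimal PyTorch sketch of last-query attention (hypothetical names; not the author's exact implementation). Because the query is just the final timestep, the attention score matrix is 1 x L instead of L x L:

```python
import torch
import torch.nn as nn

class LastQueryAttention(nn.Module):
    """Attention where only the last position forms the query.

    The score matrix is (1 x L) instead of the usual (L x L),
    so the QK product costs O(L) rather than O(L^2).
    Illustrative sketch; not the author's exact implementation.
    """
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, L, d_model); query = last timestep only -> (batch, 1, d_model)
        q = x[:, -1:, :]
        # keys and values still span the full sequence
        out, _ = self.attn(q, x, x)
        return out  # (batch, 1, d_model)

# Usage: longer sequences become affordable because attention cost is linear in L.
attn = LastQueryAttention(d_model=128, n_heads=8)
x = torch.randn(4, 512, 128)  # batch of 4, sequence length 512
ctx = attn(x)                 # (4, 1, 128)
```

In the full model, a block like this would be combined with an RNN head (e.g., an LSTM) over the sequence, matching the transformer-encoder-plus-RNN combination described above.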

