This project implements an enhanced version of SAINT (Separated Self-Attentive Neural Knowledge Tracing), building upon the original SAINT model. The SAINT architecture improves knowledge tracing in educational settings by leveraging self-attention mechanisms.

Inspired by the paper “Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing,” our implementation incorporates additional features into the original SAINT model. These additions improve how the model computes queries, keys, and values, yielding better performance and prediction accuracy.

The implementation uses the PyTorch deep learning framework and is compatible with the dataset from the Riiid! Answer Correctness Prediction Kaggle competition. By applying the enhanced SAINT model to this dataset, we aim to achieve more accurate knowledge-tracing predictions.
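To make the separated query/key/value idea concrete, here is a minimal PyTorch sketch of a SAINT-style model: exercise embeddings feed the transformer encoder while response embeddings feed the decoder, so the attention queries, keys, and values on each side come from different input streams. All names and hyperparameters (`SAINTSketch`, `d_model=64`, etc.) are illustrative assumptions, not the project's actual code or the paper's settings.

```python
import torch
import torch.nn as nn


class SAINTSketch(nn.Module):
    """Illustrative SAINT-style model (hypothetical, not the project's code).

    Exercises go through the encoder; past responses go through the decoder,
    which attends to the encoder output to predict correctness per step.
    """

    def __init__(self, n_exercises=100, d_model=64, n_heads=4,
                 n_layers=2, max_len=50):
        super().__init__()
        self.exercise_emb = nn.Embedding(n_exercises, d_model)
        # 3 response tokens: incorrect (0), correct (1), start-of-sequence (2)
        self.response_emb = nn.Embedding(3, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=n_heads,
            num_encoder_layers=n_layers, num_decoder_layers=n_layers,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, 1)

    def forward(self, exercises, responses):
        # exercises, responses: (batch, seq_len) integer tensors
        seq_len = exercises.size(1)
        pos = torch.arange(seq_len, device=exercises.device)
        src = self.exercise_emb(exercises) + self.pos_emb(pos)
        tgt = self.response_emb(responses) + self.pos_emb(pos)
        # Causal mask: position t may only attend to positions <= t,
        # so the model never sees future interactions.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        h = self.transformer(src, tgt, src_mask=mask,
                             tgt_mask=mask, memory_mask=mask)
        # Probability the student answers each exercise correctly
        return torch.sigmoid(self.out(h)).squeeze(-1)


model = SAINTSketch()
exercises = torch.randint(0, 100, (2, 10))   # toy batch of 2 sequences
responses = torch.randint(0, 3, (2, 10))
probs = model(exercises, responses)          # shape (2, 10), values in (0, 1)
```

Keeping the two input streams separate, rather than concatenating exercise and response into one sequence, is the core architectural change the paper motivates.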

By exploring and implementing these techniques, the project aims to contribute to knowledge tracing research and to improve educational assessment methodologies.