A contextual analysis of multi-layer perceptron models in classifying hand-written digits and letters: limited resources

07/05/2021
by Tidor-Vlad Pricope, et al.

Classifying hand-written digits and letters has taken a big leap with the introduction of ConvNets. However, on very constrained hardware the time necessary to train such models would be high. Our main contribution is twofold. First, we extensively test an end-to-end vanilla neural network (MLP) approach in pure numpy, without any pre-processing or feature extraction done beforehand. Second, we show that basic data mining operations can significantly improve the performance of the models in terms of computational time, without sacrificing much accuracy. We illustrate our claims on a simpler variant of the Extended MNIST dataset, called the Balanced EMNIST dataset. Our experiments show that, without any data mining, we get increased generalization performance when using more hidden layers and regularization techniques, the best model achieving 84.83% accuracy on the test set. With the data mining operations applied, we were able to increase that figure to 85.08% while working in a compressed feature space, reducing the memory size needed by 64%. Further removing possibly harmful training samples, using criteria such as deviation from the mean, helped us to still achieve over 84% accuracy with only a fraction of the original memory size for the training set. This compares favorably to the majority of literature results obtained through similar architectures. Although this approach gets outshined by state-of-the-art models, it remains comparable to some of them (AlexNet, VGGNet) when they are trained on 50% of the data.
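
The abstract itself contains no code, but the two data mining steps it describes, compressing the feature space and discarding training samples that deviate strongly from the mean, can be sketched in pure numpy in the spirit of the paper's setup. The function names, the PCA-style projection, and the deviation threshold below are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def compress_features(X_train, X_test, n_components):
    """PCA-style projection onto the top principal directions of the training set.
    Illustrative only: the paper reports a compressed feature space, but the exact
    method and number of components are assumptions here."""
    mean = X_train.mean(axis=0)
    cov = np.cov(X_train - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    components = eigvecs[:, -n_components:]          # keep the strongest directions
    return (X_train - mean) @ components, (X_test - mean) @ components

def filter_by_mean_deviation(X, y, n_std=3.0):
    """Drop training samples that lie far from their class mean.
    One possible reading of 'deviation from the mean'; the threshold is an assumption."""
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        dists = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        keep[idx] = dists < dists.mean() + n_std * dists.std()
    return X[keep], y[keep]
```

Both steps would run once, before training the MLP, which is where the reported savings in memory and training time would come from: the network then sees fewer, lower-dimensional samples.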

