The MNIST database (Modified National Institute of Standards and Technology database) is a large database of handwritten digits that is commonly used for training various image processing systems. The database is also widely used for training and testing in the field of machine learning. I have posted about MNIST twice and have provided a solution to Kaggle’s Digit Recognizer, which is based on the MNIST dataset; the link can be found here:- Get started with Kaggle by entering their Digit Recognizer competition | AI In Plain English | Medium
Fashion MNIST is an alternative to MNIST, intended to serve as a direct drop-in replacement for the original MNIST dataset when benchmarking machine learning algorithms, as it shares the same image size and the same structure of training and testing splits. The reasons for swapping MNIST out for Fashion MNIST are that MNIST is too easy, it is overused, and it cannot represent modern computer vision tasks.
With this in mind, I took it upon myself to obtain a Fashion MNIST dataset and make predictions on it to determine whether it is in fact too easy.
I wrote the program in Google Colab, which is a free online Jupyter Notebook that has many libraries already installed in it. I therefore only needed to import the libraries I would need to execute the program, starting with numpy and pandas:-
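A minimal sketch of those imports, using the conventional aliases (the article’s own code is shown in screenshots, so the aliases are my assumption):

```python
# numpy for numerical arrays, pandas for tabular data handling
import numpy as np
import pandas as pd
```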
Because the train and test files were the same format, I decided to append the test file to the train file:-
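Assuming the two CSV files have already been read into DataFrames named train and test (the names are my assumption), appending one to the other can be done with pd.concat; the tiny frames below stand in for the real files:

```python
import pandas as pd

# Tiny stand-ins for the real train and test DataFrames
train = pd.DataFrame({"y": [0, 1], "pixel0": [10, 20]})
test = pd.DataFrame({"y": [2], "pixel0": [30]})

# Stack the test rows beneath the train rows and renumber the index
df = pd.concat([train, test], ignore_index=True)
print(df.shape)  # (3, 2)
```

pd.concat is the idiomatic way to append one frame to another in current pandas, as DataFrame.append has been removed.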
I then defined the X and y variables. The label, being y, is the column named “y” in the dataset. The X variable, being the data that would be used to form the prediction, is the train dataset with the “index” and “y” column dropped:-
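A sketch of that step; the column names “index” and “y” follow the article, and the small frame below stands in for the combined dataset:

```python
import pandas as pd

# Stand-in for the combined dataset with an "index" and a "y" column
df = pd.DataFrame({"index": [0, 1], "y": [3, 7],
                   "pixel0": [0, 255], "pixel1": [12, 34]})

y = df["y"]                           # the label column
X = df.drop(["index", "y"], axis=1)   # everything except index and label
print(X.columns.tolist())  # ['pixel0', 'pixel1']
```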
Once I had defined the X and y variables, I scaled the data by dividing it by 255. The number 255 is used because the original values in the dataset are grayscale pixel intensities in the range of 0 to 255. The data needs to be scaled into the 0 to 1 range in order for the model to make adequate predictions on it.
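Assuming X is a numeric array (or DataFrame) of pixel values, the scaling is a single division:

```python
import numpy as np

# Example pixel values covering the full 0-255 range
X = np.array([[0, 128, 255]], dtype=float)

# Scale into the 0-1 range expected by the classifier
X = X / 255.0
print(X.min(), X.max())  # 0.0 1.0
```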
I then split the X dataset up for training and validation, with the validation set being comprised of 10% of X.
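This can be done with sklearn’s train_test_split; random values stand in for the scaled pixels here, and test_size=0.1 produces the 10% validation set the article describes:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Random stand-ins for the scaled pixel data and the labels
rng = np.random.default_rng(0)
X = rng.random((100, 784))
y = rng.integers(0, 10, 100)

# Hold out 10% of the rows for validation
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, random_state=42)
print(X_train.shape[0], X_val.shape[0])  # 90 10
```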
I then selected the model, being sklearn’s neural network, MLPClassifier. When I fitted the model on X_train and y_train, it achieved 100% accuracy on the training data:-
I made predictions on the validation set, achieving an accuracy of 82.5%, which is a lower accuracy than the 96.74% accuracy I achieved on the MNIST validation set:-
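A sketch of the whole fit-and-validate step; sklearn’s small bundled 8x8 digits set stands in for Fashion MNIST here (so the accuracies will differ from the article’s figures), and the hyperparameters are my assumptions rather than the ones used in the post:

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# The small bundled digits set stands in for Fashion MNIST
X, y = load_digits(return_X_y=True)
X = X / 16.0  # pixels run 0-16 in this set, so divide by 16 rather than 255

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.1, random_state=42)

# Fit the neural network, then score it on the held-out 10%
model = MLPClassifier(max_iter=500, random_state=42)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
val_acc = accuracy_score(y_val, model.predict(X_val))
print(round(train_acc, 3), round(val_acc, 3))
```

As in the article, the training accuracy will typically sit above the validation accuracy, which is the gap that motivates the conclusion below.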
In conclusion, the Fashion MNIST dataset is more difficult to achieve a high accuracy on than the MNIST dataset. In order to achieve a higher accuracy, a more sophisticated model or further tuning would need to be adopted.
The code for this post can be found in its entirety in my personal GitHub account, the link being here:- MNIST/Fashion_MNIST_MLPC.ipynb at main · TracyRenee61/MNIST (github.com)