This is Part 2 of a 3-part series, The Complete Guide To Sentiment Analysis with Ludwig. Part 1 shows how to load the dataset and how to obtain baseline Ludwig models. This part shows how to work with Transformer encoders like BERT and how to compare all the models using Ludwig.
You can follow along with the code through the Colab notebook.
Authors: Kanishk Kalra, Michael Zhu, Elias Castro Hernandez, and Piero Molino
Thanks to Debbie Yuen for the Images
BERT is a state-of-the-art model introduced in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Devlin et al. (2019) and built on the Transformer architecture from Attention Is All You Need by Vaswani et al. (2017). BERT is a family of models made of a deep stack of self-attention-based Transformer layers pre-trained on a large corpus with a Masked Language Model objective: sentences are read from the corpus, some of their words are masked out, and the model is trained to predict the masked words. Fine-tuning the pre-trained BERT representations on specific datasets allows us to achieve close to state-of-the-art performance on several sentence-level and token-level tasks.
Building BERT Using Ludwig
There’s no major difference between the Ludwig configuration for BERT and the ones we defined in Part 1 for the Parallel CNN and Bi-LSTM. The only minor changes are in the input_features and the training parameters.
In the input_features, we change the encoder to BERT ('encoder': 'bert') and remove all the embedding-related parameters, which are not needed for this type of architecture. All the other parameters are kept the same for this section. When fine-tuning a BERT encoder, we suggest using a smaller learning rate ('learning_rate': 0.00002) and a smaller batch size ('batch_size': 16) to avoid out-of-memory issues caused by BERT's high memory consumption. Since BERT is larger than the previous models we trained and takes a long time per epoch (~1.25 hrs/epoch), we recommend setting the number of epochs to 2 ('epochs': 2) when working on Colab: two epochs are sufficient to achieve good results on SST-5 and reduce the risk of losing metadata if Colab disconnects.
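The full code is in the Colab notebook; as a rough sketch (assuming the SST-5 column names sentence and label, the train_df, validation_df and test_df DataFrames prepared in Part 1, and Ludwig 0.3-style configuration keys), the BERT configuration and training call could look like this:

```python
import logging

from ludwig.api import LudwigModel

# A minimal sketch of the BERT configuration. The 'sentence'/'label' column
# names and the train_df/validation_df/test_df DataFrames come from Part 1;
# the 'training' section uses Ludwig 0.3-style keys ('trainer' in >= 0.4).
config_bert = {
    'input_features': [
        {
            'name': 'sentence',
            'type': 'text',
            'encoder': 'bert'        # swap in the BERT encoder;
                                     # no embedding parameters needed
        }
    ],
    'output_features': [
        {'name': 'label', 'type': 'category'}
    ],
    'training': {
        'learning_rate': 0.00002,    # smaller learning rate for fine-tuning
        'batch_size': 16,            # smaller batches to avoid out-of-memory errors
        'epochs': 2                  # 2 epochs are enough on SST-5 in Colab
    }
}

model_bert = LudwigModel(config_bert, logging_level=logging.INFO)

# train returns the training statistics, the preprocessed data,
# and the output directory of the run.
train_stats_bert, _, _ = model_bert.train(
    training_set=train_df,
    validation_set=validation_df,
    test_set=test_df
)
```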
Visualizing and Interpreting BERT’s Results
Now that we’ve fine-tuned BERT on SST-5, let’s visualize the learning curves with the help of the training statistics returned in train_stats_bert.
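Assuming the output feature is named label as in Part 1, the curves can be plotted with Ludwig's learning_curves visualization:

```python
from ludwig.visualize import learning_curves

# Plot epochs vs. loss/accuracy from the training statistics collected above.
learning_curves(
    [train_stats_bert],            # list of training statistics, one per model
    output_feature_name='label',   # output feature name assumed from Part 1
    model_names=['BERT']
)
```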
A glimpse at the learning curves suggests that BERT requires very little fine-tuning to perform on par with, or even better than, the two models (Parallel CNN, Bi-LSTM) we trained previously: in the epochs vs. accuracy graph, the validation accuracy is already above 0.5 at the end of the first epoch.
We observe that the model achieved a maximum validation accuracy of 53.8% at epoch 2, higher than both the earlier models. To confirm the good results obtained on the validation set, let’s use some tools built into Ludwig to evaluate and compare BERT with the previous models using the held-out test set.
So far we have seen how the different models perform on the training and validation sets of the SST-5 dataset, but what we really care about is how well they generalize to the test set. Ludwig provides a very intuitive way to evaluate models on test data: the evaluate method, which returns the test statistics, the predictions of the evaluated model, and the output directory where the results are stored. The models' performance can then be compared head to head with the compare_performance visualization function, which plots the performance of each model as bar charts, and their predictions can be compared as donut plots with the compare_classifiers_predictions visualization function.
Let’s first evaluate the models on the test set. Note that Colab might remove the models from memory to free up space, but we can seamlessly load the trained models back from their respective directories with the load method that Ludwig provides.
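As a sketch (the result-directory paths below are placeholders; substitute the directories produced by your own training runs):

```python
from ludwig.api import LudwigModel

# Reload the trained models if Colab dropped them from memory. The directory
# paths are placeholders for the result directories of your own runs.
model_parallel_cnn = LudwigModel.load('results/parallel_cnn_run/model')
model_bilstm = LudwigModel.load('results/bilstm_run/model')
model_bert = LudwigModel.load('results/bert_run/model')

# evaluate returns the test statistics, the predictions,
# and the directory where the results are stored.
test_stats_cnn, preds_cnn, _ = model_parallel_cnn.evaluate(
    dataset=test_df, collect_predictions=True, collect_overall_stats=True
)
test_stats_bilstm, preds_bilstm, _ = model_bilstm.evaluate(
    dataset=test_df, collect_predictions=True, collect_overall_stats=True
)
test_stats_bert, preds_bert, _ = model_bert.evaluate(
    dataset=test_df, collect_predictions=True, collect_overall_stats=True
)
```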
Head to Head Performance Comparison on Test Set
Let’s now compare the performance of the models on the test set across the accuracy and Hits@3 metrics, shown as horizontal bar charts obtained with the compare_performance function.
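Assuming the test statistics collected above, the comparison can be produced like this:

```python
from ludwig.visualize import compare_performance

# Bar charts of the test-set metrics for the three models.
compare_performance(
    [test_stats_cnn, test_stats_bilstm, test_stats_bert],
    output_feature_name='label',   # output feature name assumed from Part 1
    model_names=['Parallel CNN', 'Bi-LSTM', 'BERT']
)
```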
We can see that BERT performs better on the sentiment analysis task than the other two models, with an accuracy of 54.48%.
Even though BERT performs better, there’s a trade-off between the accuracy of the predictions and the time required to compute them. To measure it, we created a function that obtains 5 shuffled copies of the test set and averages the elapsed prediction time, as sketched below.
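A minimal version of such a timing helper could look like this (the function in the notebook may differ in its details; predict is Ludwig's standard prediction method):

```python
import time

def average_prediction_time(model, test_df, n_runs=5):
    """Average per-example prediction time over n_runs shuffled copies of the test set."""
    elapsed = 0.0
    for _ in range(n_runs):
        # Shuffle the test set so no single ordering biases the measurement.
        shuffled = test_df.sample(frac=1).reset_index(drop=True)
        start = time.time()
        model.predict(dataset=shuffled)
        elapsed += time.time() - start
    return elapsed / (n_runs * len(test_df))
```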
We see that the Parallel CNN takes 1.25 ms per prediction on average, the Bi-LSTM takes 3.69 ms, and BERT takes 8.62 ms.
The comparison makes the trade-off clear: BERT is the most accurate model on the sentiment analysis task, with an accuracy of 54.48%, but predicting with it takes 8.62 ms per example, while predicting with the Bi-LSTM takes 3.69 ms. Depending on your deployment constraints, you can choose to run a faster but less accurate model, or a slower but more accurate one.
Comparing Test Set Predictions
Now, let’s visualize the test predictions for each of the models. Having compared the models' overall performance, we can use Ludwig’s visualization module to get detailed insight into the predictions by comparing them pairwise and seeing how much they align or differ, using the donut plots generated by the compare_classifiers_predictions visualization function.
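A sketch of the call, following the Ludwig 0.3-style signature (newer releases require additional arguments such as metadata and output_feature_name, so check the visualization API docs for your version; the label_predictions column name is also an assumption):

```python
from ludwig.visualize import compare_classifiers_predictions

# Ground-truth labels and the two models' predicted labels. The
# 'label_predictions' column name follows Ludwig's usual
# '<output_feature>_predictions' convention and is an assumption here.
ground_truth = test_df['label'].to_numpy()
preds_parallel_cnn = preds_cnn['label_predictions'].to_numpy()
preds_bi_lstm = preds_bilstm['label_predictions'].to_numpy()

# Argument list follows the Ludwig 0.3-style signature.
compare_classifiers_predictions(
    [preds_parallel_cnn, preds_bi_lstm],
    ground_truth,
    labels_limit=0,
    model_names=['Parallel CNN', 'Bi-LSTM']
)
```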
The inner donut gives a coarse summary of the two models’ predictions: both correct (36.8%), both incorrect (39.5%), only one of them correct (23.6%). The outer donut breaks these estimates down further: both correct (36.8%), both incorrect with the same prediction (30.4%) or with different predictions (9.1%), Parallel CNN correct and Bi-LSTM incorrect (11.1%), and vice versa (12.5%).
The plot shows that the models behave similarly, as indicated by the large green and red areas, and that the Bi-LSTM outperforms the Parallel CNN, although there are still about 11% of data points where the Parallel CNN's predictions are correct and the Bi-LSTM's are wrong.
Similarly, the donut plot above lets us compare the Bi-LSTM's and BERT's predictions. The coarse summary is: both correct (38.0%), both incorrect (34.2%), only one of them correct (27.8%). The detailed breakdown is: both correct (38.0%), both incorrect with the same prediction (26.8%) or with different predictions (7.4%), Bi-LSTM correct and BERT incorrect (11.4%), and vice versa (16.5%).
In this case, the predictions of the two models are more diverse, as shown by the larger yellow area, but despite BERT being more accurate, there are still about 11% of data points that the Bi-LSTM predicts correctly and BERT does not.
Let’s use the model we evaluated as the most accurate and most generalizable, BERT, to predict the sentiment of some movie reviews we made up.
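As a sketch (the review texts below are made-up examples, and the label_predictions column name is an assumption based on Ludwig's usual <feature>_predictions naming):

```python
import pandas as pd

# Two made-up reviews (hypothetical examples, not taken from SST-5).
made_up_reviews = pd.DataFrame({
    'sentence': [
        'An absolute delight: sharp writing and wonderful performances.',
        'A dull, lifeless film that wastes a talented cast.'
    ]
})

# predict returns the predictions DataFrame and the output directory;
# 'label_predictions' follows Ludwig's '<output_feature>_predictions' naming.
predictions, _ = model_bert.predict(dataset=made_up_reviews)
print(predictions['label_predictions'])
```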
Our BERT Ludwig model predicts the correct sentiment of those made-up reviews.
Want to know how you can easily optimize the hyperparameters of your Ludwig models? Check out Part 3 of the series, where we discuss hyperparameter optimization. Missed Part 1? Find it here.
We encourage you to check out the documentation to learn more and to get involved with the Ludwig open source community. We aim to make deep learning free and accessible to all. Also, follow Ludwig on Twitter to stay up to date with all news and developments. We hope you’ll join us.