Degree Name

Computer Science


Keywords

Adversarial examples, Data augmentation, Robustness, Attention mechanism, Learning models







Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.


Abstract

BERT, an NLP pre-training model built on the Transformer architecture and its attention mechanism, has proven highly effective in applications such as text classification, machine translation, and question-answering systems. Despite these advances, however, the generalizability of such models remains a challenging issue. In this thesis, we study the generalizability of prediction models in question-answering systems, particularly on unanswerable examples. To gain insight into where the models fail to generalize, we construct adversarial examples that are challenging for a model to predict correctly. The adversarial examples are obtained by pairing each question with a different context from the same dataset. When constructing an adversarial example, we ensure that the new context does not contain any answer to the question it is paired with. To challenge the prediction models maximally, among the many candidate contexts for a given question, we select the one with the highest text-similarity score to the question's original context. The proposed method is applied to SQuAD, a benchmark question-answering dataset, with three deep learning models: BERT, LSTM, and GRU. Our experiments show that the examples constructed by the proposed method drastically reduce model performance, from a range of 3.19-6.4% to a range of 0.03-0.18%, demonstrating the method's effectiveness. The experiments also show that the existing models can learn from the constructed examples, leading to improved performance.
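The construction step described in the abstract — pairing a question with the answer-free candidate context most similar to its original context — can be sketched as follows. This is a minimal, hypothetical Python sketch; the function names, whitespace tokenization, and bag-of-words cosine similarity are illustrative assumptions, since the abstract does not specify the exact similarity measure used in the thesis.

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity over whitespace tokens (an assumed measure)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca.keys() & cb.keys())
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def make_adversarial_pair(question, original_context, answer, candidate_contexts):
    """Pair `question` with the candidate context most similar to its original
    context, skipping any candidate that contains the answer (so the new
    question-context pair is unanswerable)."""
    best, best_score = None, -1.0
    for ctx in candidate_contexts:
        if answer.lower() in ctx.lower():
            continue  # new context must not contain the answer
        score = cosine_similarity(original_context, ctx)
        if score > best_score:
            best, best_score = ctx, score
    return question, best
```

For example, given a question about Hamlet's author, a candidate paragraph about Hamlet that never names the author would be preferred over an unrelated paragraph, yielding a hard unanswerable pair.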