Date of Award

Fall 2021

Publication Type

Thesis

Degree Name

M.Sc.

Department

Computer Science

Keywords

Aspect-based opinion mining, Multi-task learning, Neural network, Pooling strategies, Product aspect extraction, Product opinion extraction

Supervisor

S. Samet

Supervisor

C. Ezeife

Rights

info:eu-repo/semantics/openAccess

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Abstract

Aspect-Based Opinion Mining (ABOM) systems take users' reviews or posts from social media as input. The system aims to extract aspect terms (e.g., pizza), aspect categories (e.g., food), and their polarities, helping customers make informed decisions and companies identify product weaknesses. By addressing these weaknesses, companies can enhance customer satisfaction, increase sales, and boost revenue. Neural networks are widely used as classification algorithms for ABOM tasks, in both the training (learning) phase, which forms class labels from historical reviews, and the testing phase, which predicts labels for unseen data (new reviews). Neural network algorithms consist of artificial neurons (mathematical functions) that combine input weights (the model) with input data (e.g., a review) to predict network outputs (e.g., pizza), repeatedly adjusting the weights from backpropagated errors.
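As a brief illustration of this learning loop, the sketch below trains a single artificial neuron on one toy example; the data, learning rate, and variable names are hypothetical and only mirror the description above, not any model from this thesis.

import numpy as np

# Toy sketch of one artificial neuron: it combines input weights with
# input data and repeatedly adjusts the weights from the prediction error.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.3])   # input data (e.g., features of a review)
y = 1.0                          # true class label
w = np.zeros(3)                  # input weights (the "model")
b = 0.0
lr = 0.1                         # learning rate

for _ in range(100):
    y_hat = sigmoid(w @ x + b)   # neuron output: weighted sum + activation
    error = y_hat - y            # error that is propagated back
    w -= lr * error * x          # weight adjustment against the error
    b -= lr * error

print(round(sigmoid(w @ x + b), 3))  # prediction moves toward the label 1.0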

Previous approaches, such as BERT-PT (BERT Post-Training) and BAT (BERT Adversarial Training), perform ABOM on users' reviews by building a separate model for each ABOM subtask (e.g., aspect term extraction and aspect sentiment classification). Their methods can be summarized in four steps: obtain the users' reviews, split each sentence into words, convert the words into vectors, and apply a neural network that turns the vectors into class probabilities to perform sequence labeling and multi-class classification. The BERT-LSTM/Attention approach applies pooling strategies over all of BERT's intermediate layers to achieve better results, while BERT-PT relies on post-training and BAT on adversarial training. Their limitation is that they train a separate model for each ABOM subtask, which requires more training time, and they do not address the aspect category and coarse-grained ABOM tasks.
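For concreteness, here is a minimal sketch of what pooling over BERT's intermediate layers can look like, using the Hugging Face transformers library; the mean-pooling choice and the bert-base-uncased checkpoint are illustrative assumptions and do not reproduce the exact BERT-LSTM/Attention architecture.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)

inputs = tokenizer("The pizza was great but the service was slow.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: the embedding layer plus all 12 encoder
# layers, each of shape (batch, seq_len, hidden_size).
hidden_states = outputs.hidden_states

# One simple pooling strategy: average each token's vector across all
# intermediate layers instead of using only the final layer.
stacked = torch.stack(hidden_states[1:])   # (12, batch, seq, hidden)
pooled = stacked.mean(dim=0)               # (batch, seq, hidden)
print(pooled.shape)                        # fed to a classification layer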

This thesis proposes BERT-MTL, which uses a Multi-Task Learning approach to solve two or more tasks simultaneously, exploiting the similarities between the tasks to improve the model's accuracy while reducing training time. BERT-MTL follows a four-step process: obtain the sentences as input; split each review into word tokens with the BERT tokenizer module; feed the token embeddings into the BERT encoder layer, which produces a vector for each token; and finally pass these vectors to the different classification layers that perform the ABOM tasks. To evaluate the model's performance, we used the SemEval-14 restaurant dataset. With the help of Multi-Task Learning and various pooling strategies, our proposed model outperforms previous models on several ABOM tasks in terms of Macro-F1 and accuracy.
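The sketch below shows a shared-encoder, multi-head arrangement in the spirit of the four-step process described above; the label counts, head structure, and the use of the [CLS] vector for the category task are illustrative assumptions rather than the thesis's exact BERT-MTL configuration.

import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertMultiTask(nn.Module):
    def __init__(self, n_term_labels=3, n_categories=5):
        super().__init__()
        # Shared BERT encoder serving every ABOM subtask.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Token-level head: sequence labeling for aspect terms.
        self.term_head = nn.Linear(hidden, n_term_labels)
        # Sentence-level head: multi-class aspect-category prediction.
        self.category_head = nn.Linear(hidden, n_categories)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        token_vecs = out.last_hidden_state    # (batch, seq, hidden)
        sentence_vec = token_vecs[:, 0]       # [CLS] vector
        return self.term_head(token_vecs), self.category_head(sentence_vec)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["The pizza was great."], return_tensors="pt")
model = BertMultiTask()
term_logits, category_logits = model(batch["input_ids"],
                                     batch["attention_mask"])
print(term_logits.shape, category_logits.shape)

Because both heads share one encoder, a single forward pass serves both subtasks, which is where the training-time savings of multi-task learning come from.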
