Date of Award

2016

Publication Type

Master Thesis

Degree Name

M.Sc.

Department

Computer Science

Keywords

academic papers, feature selection, feature weight normalization, language models, text classification

Supervisor

Lu, Jianguo

Rights

info:eu-repo/semantics/openAccess

Abstract

The rapid growth of scholarly data has made it necessary to find efficient machine learning methods to categorize the data automatically. This thesis aims to build a classifier that can automatically categorize Computer Science (CS) papers based on their text content. To find the best method for CS papers, we collect and prepare two large labeled data sets, CiteSeerX and arXiv, and experiment with different classification approaches (including Naive Bayes and Logistic Regression), different feature selection schemes, different language models, and different feature weighting schemes. We found that, with a large training set, bi-gram modeling with normalized feature weights performs best on both data sets. Surprisingly, the arXiv data set can be classified with an F1 value of up to 0.95, while CiteSeerX reaches a lower F1 (0.764). This is probably because the labeling of CiteSeerX is not as accurate as that of the arXiv data set.
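The abstract's best-performing setup (bi-gram features with normalized weights feeding a Logistic Regression classifier) can be illustrated with a minimal sketch. The preprocessing, normalization scheme, hyperparameters, and the toy papers and labels below are assumptions for illustration only, not the thesis's actual configuration or data.

```python
# Minimal sketch: bi-gram text features with normalized weights + Logistic Regression.
# All data and parameter choices here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labeled paper texts and their CS category labels.
train_texts = [
    "support vector machines for text categorization",
    "scheduling algorithms for distributed operating systems",
]
train_labels = ["machine_learning", "operating_systems"]
test_texts = ["neural network models for document classification"]

pipeline = Pipeline([
    # Uni-gram and bi-gram features; L2 normalization keeps feature weights
    # comparable across documents of different lengths.
    ("features", TfidfVectorizer(ngram_range=(1, 2), norm="l2")),
    ("classifier", LogisticRegression(max_iter=1000)),
])

pipeline.fit(train_texts, train_labels)
print(pipeline.predict(test_texts))
```

With real data, performance would typically be reported with an F1 measure, as in the abstract.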
