Encoding sentences with neural models

Abstract

In this paper, we evaluate techniques for constructing sentence representations on the sentiment classification task. We show that word order matters; that Tree-LSTMs outperform their recurrent counterparts, in agreement with the findings of Tai et al. (2015); that sentiment classification becomes harder as sentence length increases; and that supervising sentiment at the node level reduces overfitting but does not improve performance. We present a method for framing the sentiment classification task as a regression problem, which, to the best of our knowledge, has not been done for this task before. Although this framing does not improve performance either, it offers a different and useful perspective on the problem. Interesting further work would be to analyze why the regression formulation does not work as well as expected, and to explore the benefits of both perspectives.
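To make the regression framing concrete, below is a minimal PyTorch sketch, not the paper's implementation: it assumes five ordinal sentiment labels (0 = very negative .. 4 = very positive), treats the class index as a continuous target trained with mean-squared error instead of cross-entropy over five logits, and rounds predictions back to discrete classes at evaluation time. All names, dimensions, and the label-to-score mapping are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SentimentRegressor(nn.Module):
    """Hypothetical regression head over a fixed-size sentence representation."""

    def __init__(self, encoder_dim: int = 300):
        super().__init__()
        # encoder_dim is the size of the sentence vector produced by some
        # encoder (e.g. an LSTM or Tree-LSTM); 300 is an arbitrary choice here.
        self.head = nn.Linear(encoder_dim, 1)

    def forward(self, sentence_repr: torch.Tensor) -> torch.Tensor:
        # Predict a single continuous sentiment score per sentence.
        return self.head(sentence_repr).squeeze(-1)

model = SentimentRegressor()
reprs = torch.randn(8, 300)          # a batch of 8 encoded sentences (dummy data)
labels = torch.randint(0, 5, (8,))   # discrete ordinal classes 0..4

# Regression framing: use the class index itself as a continuous target.
loss = nn.functional.mse_loss(model(reprs), labels.float())
loss.backward()

# At evaluation time, map the continuous score back to a discrete class.
with torch.no_grad():
    pred_class = model(reprs).round().clamp(0, 4).long()
```

One appeal of this framing is that it exposes the ordinal structure of sentiment labels to the loss: predicting "very positive" for a "positive" sentence is penalized less than predicting "very negative", whereas cross-entropy treats all misclassifications alike.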
