Multiplicative Representation for Unsupervised Semantic Role Induction

 

We propose a neural model to learn argument embeddings from the context by explicitly incorporating dependency relations as multiplicative factors.

August 7, 2016
The Annual Meeting of the Association for Computational Linguistics 2016

 

Authors

Yi Luan (Disney Research/University of Washington)

Yangfeng Ji (Georgia Institute of Technology)

Hannaneh Hajishirzi (University of Washington)

Boyang Li (Disney Research)

 


Abstract

In unsupervised semantic role labeling, identifying the role of an argument is usually informed by its dependency relation with the predicate. In this work, we propose a neural model that learns argument embeddings from context by explicitly incorporating dependency relations as multiplicative factors, which bias argument embeddings according to their dependency roles. Our model outperforms existing state-of-the-art embeddings in unsupervised semantic role induction on the CoNLL 2008 dataset. Qualitative results demonstrate that our model effectively biases argument embeddings according to their dependency roles.
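The core idea of a multiplicative representation can be sketched in a few lines: an argument's embedding is scaled element-wise by a factor vector tied to its dependency relation, so the same word receives different representations in different roles. The sketch below is illustrative only; the vocabulary, relation inventory, dimensionality, and parameter names (`word_emb`, `dep_factor`) are assumptions, not the paper's actual model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary and dependency-relation inventory.
vocab = {"board": 0, "company": 1, "director": 2}
dep_relations = {"SBJ": 0, "OBJ": 1}
dim = 4

# Randomly initialized parameters standing in for learned ones:
# one embedding per word, one multiplicative factor per relation.
word_emb = rng.normal(size=(len(vocab), dim))
dep_factor = rng.normal(size=(len(dep_relations), dim))

def argument_embedding(word: str, relation: str) -> np.ndarray:
    """Bias a word embedding by its dependency relation via
    element-wise (multiplicative) scaling."""
    return word_emb[vocab[word]] * dep_factor[dep_relations[relation]]

# The same argument word gets role-specific representations.
subj = argument_embedding("board", "SBJ")
obj = argument_embedding("board", "OBJ")
```

In an actual model the factor vectors would be trained jointly with the embeddings from context, so that arguments sharing a dependency role are pushed toward similar regions of the embedding space.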
