Translate and Label! An Encoder-Decoder Approach for Cross-lingual Semantic Role Labeling
| Main Authors: | Angel Daza, Anette Frank |
|---|---|
| Format: | Article (Journal) |
| Language: | English |
| Published: | 29 Aug 2019 |
| In: | arXiv |
| Online Access: | Publisher, full text: http://arxiv.org/abs/1908.11326 |
| Author Notes: | Angel Daza and Anette Frank |
| Summary: | We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data during inference time. Our approach can be applied in monolingual, multilingual and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and multilingual settings using well-known SRL datasets. We then train our model in a cross-lingual setting to generate new SRL labeled data. Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations. Our proposed architecture offers a flexible method for leveraging SRL data in multiple languages. |
| Item Description: | Viewed on 23 Jan 2020 |
| Physical Description: | Online Resource |
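The summary describes a decoder that emits target-language sentences with inline semantic role annotations rather than separate label sequences. As a minimal sketch of what consuming such output could look like, the snippet below parses a bracketed, role-labeled sequence back into (role, span) pairs. The bracket format and the function name are illustrative assumptions for this sketch, not the paper's actual output vocabulary.

```python
import re

def parse_labeled_output(seq):
    """Split a bracketed SRL-labeled decoder output into (role, text) spans.

    Assumes an illustrative format like "(ARG0 the cat) (V sleeps)",
    where each parenthesized group starts with a role label.
    """
    # Each match captures the role token (no whitespace) and the span text.
    return [(role, text.strip())
            for role, text in re.findall(r"\((\S+)\s+([^()]*)\)", seq)]

decoded = "(ARG0 the cat) (V sleeps) (ARGM-LOC on the mat)"
print(parse_labeled_output(decoded))
```

With output in this shape, the generated spans can be converted directly into span-based SRL training examples for the resource-poor target language, which is how the summary describes the data-augmentation step.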