Adversarial Training for Cross-Domain Universal Dependency Parsing

  • Authors: Motoki Sato, Hitoshi Manabe, Hiroshi Noji
  • Published in: Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, August 3-4, 2017, Vancouver, Canada / Jan Hajic (ed.), 2017, ISBN 978-1-945626-70-8, pp. 71-79
  • Language: English
  • Abstract
    • We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique. Our approach is an extension to the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model on bidirectional LSTMs. In our experiments, we find our baseline graph-based parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to the treebanks of the same language with different domains, we observe an additional gain in performance, in particular for the domains with less training data.
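The abstract describes adversarial training for domain adaptation layered on a graph-based BiLSTM dependency parser. The following is a minimal Python (PyTorch) sketch of that general idea, not the authors' implementation: a shared BiLSTM encoder feeds both a simplified head-dependent arc scorer (standing in for the graph-based parsing model) and a domain classifier attached through a gradient-reversal layer, so the encoder is pushed toward domain-invariant features. All class names, the arc-scorer design, and the lambda coefficient are illustrative assumptions.

# Sketch of adversarial domain adaptation for a BiLSTM-based graph parser.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AdversarialBiLSTMParser(nn.Module):
    def __init__(self, vocab_size, n_domains, emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared encoder across treebanks (domains) of the same language.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Simplified arc scorer: scores every head-dependent pair (graph-based view).
        self.arc_head = nn.Linear(2 * hidden_dim, hidden_dim)
        self.arc_dep = nn.Linear(2 * hidden_dim, hidden_dim)
        # Domain classifier trained adversarially via gradient reversal.
        self.domain_clf = nn.Linear(2 * hidden_dim, n_domains)

    def forward(self, tokens, lambd=0.1):
        h, _ = self.encoder(self.embed(tokens))          # (batch, seq, 2*hidden)
        # Pairwise arc scores between all token positions.
        arc_scores = torch.einsum("bih,bjh->bij",
                                  self.arc_head(h), self.arc_dep(h))
        # Sentence representation for the adversary; the reversed gradient
        # discourages the encoder from retaining domain-identifying features.
        sent = GradReverse.apply(h.mean(dim=1), lambd)
        domain_logits = self.domain_clf(sent)
        return arc_scores, domain_logits

In training, one would combine the parsing loss on the treebank data with the domain classifier's cross-entropy over the reversed sentence features (e.g. total_loss = arc_loss + domain_loss), so that minimizing the joint objective improves parsing while making the shared representations harder to attribute to a specific domain.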

