
Dialnet


Enriching Word Embeddings with Global Information and Testing on Highly Inflected Language

  • Authors: Lukáš Svoboda, Tomáš Brychcín
  • Published in: Computación y Sistemas (CyS), ISSN 1405-5546, ISSN-e 2007-9737, Vol. 23, No. 3, 2019, pp. 773-783
  • Language: English
  • Abstract
    • In this paper we evaluate our new approach, based on the Continuous Bag-of-Words and Skip-gram models enriched with global context information, on the highly inflected Czech language and compare it with results for English. As a source of information we use Wikipedia, where articles are organized in a hierarchy of categories. These categories provide useful topical information about each article. Both models are evaluated on standard word similarity and word analogy datasets. The proposed models outperform other word representation methods when a similar amount of training data is used, and they provide comparable performance even to methods trained on much larger datasets.
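The core idea in the abstract is to supplement a word's local context window with article-level "global" information, such as Wikipedia category labels. The sketch below illustrates one plausible way to realize this for Skip-gram training-pair generation; the pairing scheme, the `CAT:` token convention, and the example data are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch: enrich local Skip-gram (target, context) pairs
# with global, article-level category tokens. NOT the paper's exact
# algorithm; the CAT: prefix and windowing scheme are assumptions.

def skipgram_pairs(sentence, categories, window=2):
    """Yield (target, context) training pairs for one article sentence.

    Each target word is paired with its neighbours inside the local
    window (standard Skip-gram) and, additionally, with every category
    label of the article (the global-context enrichment).
    """
    for i, target in enumerate(sentence):
        # Local context: neighbouring words within the window.
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if j != i:
                yield target, sentence[j]
        # Global context: topical categories of the whole article.
        for cat in categories:
            yield target, cat

# Toy Wikipedia-style article with its category labels.
article = ["prague", "is", "the", "capital", "of", "the", "czech", "republic"]
cats = ["CAT:Cities", "CAT:Czech_Republic"]
pairs = list(skipgram_pairs(article, cats, window=2))
```

Each target word thus receives extra co-occurrence signal from the article's topic, which is the kind of global information the abstract credits for the improved results on Czech.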

The article metadata were obtained from SciELO México.
