To tackle a Machine Translation task with large vocabularies using the RECONTRA connectionist model, compact (distributed) codifications of those vocabularies are required. Such codifications can be extracted from Multilayer Perceptrons with output delays trained as encoders of the vocabularies. In this paper, we explore different mechanisms for tuning these codifications to the translation task.
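As a rough illustration of the encoder idea, the sketch below trains a plain Multilayer Perceptron autoencoder on one-hot word vectors and takes its hidden-layer activations as compact distributed codes. This is a minimal stand-in, not the paper's architecture: the vocabulary, layer sizes, and training loop are illustrative assumptions, and the output delays of the paper's encoders are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 8-3-8 encoder problem (an illustrative stand-in, not the paper's setup):
# each word of an 8-word vocabulary is a one-hot input, and the 3 hidden
# activations become its compact distributed codification.
vocab_size, code_size = 8, 3
X = np.eye(vocab_size)

W1 = rng.normal(scale=0.5, size=(vocab_size, code_size))
b1 = np.zeros(code_size)
W2 = rng.normal(scale=0.5, size=(code_size, vocab_size))
b2 = np.zeros(vocab_size)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)            # hidden layer: candidate word codes
    Y = sigmoid(H @ W2 + b2)            # reconstruction of the one-hot input
    losses.append(((Y - X) ** 2).sum())
    dY = (Y - X) * Y * (1 - Y)          # backprop of squared reconstruction error
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

codes = sigmoid(X @ W1 + b1)            # one compact code per vocabulary word
print(codes.shape)
```

Once extracted, such codes replace the one-hot inputs of the translation network; the mechanisms explored in the paper then concern how these codes are further tuned for the translation task itself.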