Luis Pellegrin, Octavio Loyola-González, José Ortiz Béjar, Miguel Ángel Medina Pérez, Andrés Eduardo Gutiérrez Rodríguez, Eric S. Tellez, Mario Graff Guerrero, Sabino Miranda, Daniela Alejandra Moctezuma Ochoa, Mauricio García-Limón, Alicia Morales Reyes, Carlos Alberto Reyes-García, Eduardo F. Morales, Hugo Jair Escalante
Abstract: This paper describes the design of the 2017 RedICA: Text-Image Matching (RICATIM) challenge, including the dataset generation, a complete analysis of results, and descriptions of the top-ranked methods developed by participants. The academic challenge explores the feasibility of a novel binary image classification scenario in which each instance corresponds to the concatenation of learned representations of an image and a word. Instances are labeled as positive if the word is relevant for describing the visual content of the image, and negative otherwise. This novel formulation of the image classification problem offers an alternative scenario in which any text-image pair can be represented in the same space, so that any word can be considered as a possible descriptor of an image. The proposed methods are diverse and competitive, showing considerable improvements over the proposed baselines.
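The following is a minimal illustrative sketch, not the organizers' code, of how one such instance could be assembled: a learned image representation and a learned word representation are concatenated into a single feature vector with a binary relevance label. The dimensionalities and variable names (image_features, word_embedding) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
image_features = rng.normal(size=1024)   # hypothetical learned image descriptor
word_embedding = rng.normal(size=300)    # hypothetical learned word vector

# Each instance is the concatenation of the image and word representations.
x = np.concatenate([image_features, word_embedding])  # shape: (1324,)

# Binary label: 1 if the word is relevant to the image content, 0 otherwise.
y = 1

print(x.shape, y)
```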