The incremental learning approach was first motivated by the human capability to incorporate knowledge from new experiences, a capability considered worth programming into artificial agents. However, there are now also practical (i.e. industrial) reasons that increase the interest in incremental algorithms.
Companies from a very wide range of activities store huge amounts of data every day. One-shot algorithms cannot easily process such a great amount of continuously incoming instances, and incorporate it into a knowledge base, within a reasonable amount of time and memory space.
We believe that, in this environment, incremental learning becomes particularly relevant, since this sort of algorithm is able to revise already existing models of the data without beginning from scratch and without re-processing past data.
We present two different and general heuristics for converting batch hill-climbing searchers into incremental ones. We believe that the heuristic we call Traversal Operators in Correct Order (TOCO) is the most novel and original contribution. Given a learned knowledge structure and the learning path used to obtain it, in which the traversal operators are ordered by decreasing contribution to quality, TOCO states that the structure should be revised only when new data changes the order of the traversal operators, and that in that case the structure should be rebuilt starting from the first unordered operator of the path. The benefit of the TOCO heuristic is thus twofold. First, the model is revised only when it is invalidated by new data; second, when it must be revised, the learning algorithm does not begin from scratch.
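As a minimal illustration of the TOCO check, consider the following Python sketch. All names here (quality, hill_climb, toco_check) are illustrative assumptions rather than definitions from this work, and the exact rebuild point may differ in the actual algorithm.

def toco_check(path, data, quality):
    """Return the index of the first operator whose stored order is
    invalidated by the new data, or None if the path is still ordered
    by decreasing quality contribution."""
    scores = [quality(op, data) for op in path]
    for i in range(1, len(scores)):
        if scores[i] > scores[i - 1]:   # order changed in light of new data
            return i                     # first unordered operator (assumed)
    return None

def toco_update(model, path, data, quality, hill_climb):
    """Revise the model only when TOCO detects an invalidated path,
    resuming the search from the still-ordered prefix of the path
    instead of starting from scratch."""
    k = toco_check(path, data, quality)
    if k is None:
        return model, path               # model still valid: keep it as-is
    return hill_climb(prefix=path[:k], data=data)  # hypothetical searcher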
The second heuristic of our work, which we call the Reduced Search Space (RSS) heuristic, uses the knowledge gathered in previous learning steps and states that structures that had very low quality in past learning steps will still have low quality with respect to the new data, and may therefore be discarded from the search space.
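A correspondingly minimal sketch of RSS-style pruning follows; the function name, the past_quality callable, and the margin parameter are illustrative assumptions, not values or interfaces from this work.

def rss_prune(candidates, past_quality, margin=10.0):
    """Discard candidate structures whose quality in the previous
    learning step fell far below the best one, on the assumption that
    they will remain poor on the new data."""
    best = max(past_quality(c) for c in candidates)
    return [c for c in candidates if past_quality(c) >= best - margin]

The reduced candidate set is then handed to the next hill-climbing step, so each incremental update searches a smaller space than a full batch run would.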