I argue here that sophisticated AI systems, with the exception of those aimed at the psychological modeling of human cognition, must be based on general philosophical theories of rationality, and, conversely, that philosophical theories of rationality should be tested by implementing them in AI systems. The philosophy and the AI thus go hand in hand. I compare human and generic rationality within a broad philosophy of AI, and conclude by suggesting that ultimately virtually all familiar philosophical problems will turn out to be at least indirectly relevant to the task of building an autonomous rational agent and, conversely, that the AI enterprise has the potential to throw light, at least indirectly, on most philosophical problems.