This article examines the critical ethical challenges posed by algorithmic bias in artificial intelligence (AI) systems, focusing on its implications for social justice and data equity. Through a systematic review of case studies and theoretical frameworks, we analyze how biased datasets and algorithmic designs perpetuate structural inequalities, particularly affecting marginalized communities. The study highlights key examples, such as gender and racial biases in facial recognition and hiring algorithms, while exploring mitigation strategies rooted in data justice principles. Additionally, we evaluate regulatory responses, including the European Union's AI Act, which proposes a risk-based governance framework. The findings underscore the urgent need for interdisciplinary approaches to develop fairer AI systems that align with ethical standards and human rights.
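To make the notion of algorithmic bias in hiring concrete, the sketch below shows one common way such disparities are quantified: the demographic parity gap, i.e., the difference in positive-decision rates across demographic groups. This is an illustrative example only, not a method described in the article; the library (pandas), column names, and toy data are all assumptions, and demographic parity is just one of several fairness criteria discussed in the literature.

```python
# Illustrative sketch (not from the article): measuring the demographic parity
# gap for a hypothetical hiring classifier. Column names and data are assumed.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "gender",
                           decision_col: str = "hired") -> float:
    """Largest absolute difference in positive-decision (hiring) rates
    between any two groups; 0.0 means equal selection rates."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Toy data: a screening model that selects male applicants more often.
applications = pd.DataFrame({
    "gender": ["female", "female", "female", "male", "male", "male"],
    "hired":  [0,        1,        0,        1,      1,      0],
})

gap = demographic_parity_gap(applications)
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 -> unequal selection rates
```

A gap near zero indicates similar selection rates across groups; large gaps flag the kind of structural disparity the article attributes to biased training data and algorithmic design choices.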