This thesis proposes a sociotechnical framework for the effective implementation of Trustworthy Artificial Intelligence (TAI), addressing the technical, human, and regulatory dimensions of AI-induced harms. Rather than treating TAI as a purely technical goal, we emphasize its interdisciplinary nature, aligning algorithmic development with societal needs and legal norms throughout the AI system lifecycle. Our approach centers on these harms and investigates how they can be mitigated through algorithmic design and through the development and implementation of effective regulation.
From a technical standpoint, we focus on harms derived from discrimination in algorithmic decisions. First, we introduce FairShap, a novel data valuation method that quantifies the contribution of individual training examples to group fairness metrics in decision-making. This enables a more complete diagnosis and mitigation of discrimination in high-risk decision-making systems, aligning with auditing obligations under the EU AI Act.
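FairShap's exact Shapley-value formulation is developed in the thesis; the sketch below is only a minimal leave-one-out approximation of the underlying idea, attributing a group fairness metric (demographic parity difference) on a validation set to individual training examples. The function names and toy data are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch (assumption: leave-one-out values as a cheap stand-in
# for FairShap's Shapley values). A point's value is the change in
# validation unfairness when that point is removed from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

def dp_difference(y_pred, groups):
    """Demographic parity difference: |P(y=1 | g=0) - P(y=1 | g=1)|."""
    return abs(y_pred[groups == 0].mean() - y_pred[groups == 1].mean())

def loo_fairness_values(X_tr, y_tr, X_val, g_val):
    """value[i] > 0 means removing training point i reduces unfairness."""
    fit = lambda X, y: LogisticRegression(max_iter=1000).fit(X, y)
    base = dp_difference(fit(X_tr, y_tr).predict(X_val), g_val)
    values = np.empty(len(X_tr))
    for i in range(len(X_tr)):
        mask = np.arange(len(X_tr)) != i
        pred = fit(X_tr[mask], y_tr[mask]).predict(X_val)
        values[i] = base - dp_difference(pred, g_val)
    return values

# Usage on synthetic data: the highest-valued points are candidates
# for pruning when mitigating discrimination.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = (X[:, 0] > 0).astype(int)
g = rng.integers(0, 2, size=120)
vals = loo_fairness_values(X[:80], y[:80], X[80:], g[80:])
print("Most unfairness-inducing points:", np.argsort(vals)[-5:])
```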
Moving beyond fairness definitions centered on individual decisions, we propose ERG, a graph-based approach to measuring and mitigating structural disparities in social capital within social networks. This approach addresses emerging regulatory demands, such as those in the EU Digital Services Act, which require online platforms to assess and reduce systemic risks.
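ERG's precise formulation is given in the thesis; as a hypothetical illustration of the general idea, the sketch below quantifies a group-level gap in a structural social-capital proxy, betweenness centrality, between two groups of nodes. The graph, grouping, and metric choice are assumptions for illustration only.

```python
# Hypothetical disparity measure: gap in mean betweenness centrality
# (a proxy for bridging social capital) between two node groups.
import networkx as nx

def capital_disparity(G, groups):
    """Absolute gap in mean betweenness centrality between two groups."""
    bc = nx.betweenness_centrality(G)
    mean = lambda nodes: sum(bc[n] for n in nodes) / len(nodes)
    return abs(mean(groups[0]) - mean(groups[1]))

# Usage: in a preferential-attachment network, early arrivals (group 0)
# accumulate disproportionate bridging capital over late arrivals.
G = nx.barabasi_albert_graph(50, 2, seed=0)
groups = {0: [n for n in G if n < 25], 1: [n for n in G if n >= 25]}
print(f"Disparity in bridging capital: {capital_disparity(G, groups):.4f}")
```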
In the context of algorithm use, we design and evaluate a human-AI complementarity framework for collaborative decision-making in high-stakes resource allocation tasks. By combining human and algorithmic matching decisions and optimizing the hand-off between them with bandit-based strategies, we explore how semi-automated systems can be designed to outperform either humans or algorithms alone. This approach adheres to the TAI principles of technical robustness, human oversight, and prevention of harm, as set out in the EU GDPR and the EU's Trustworthy AI guidelines.
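The specific bandit strategy is developed in the thesis; the following hypothetical epsilon-greedy sketch only illustrates the hand-off mechanism: a policy routes each case to the human or the algorithm and updates its value estimates from observed decision quality. The class, reward model, and parameters are assumptions.

```python
# Hypothetical epsilon-greedy hand-off between human and algorithm.
import random

class HandoffBandit:
    def __init__(self, arms=("human", "algorithm"), epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}   # pulls per decision-maker
        self.means = {a: 0.0 for a in arms}  # running mean outcome quality

    def choose(self):
        if random.random() < self.epsilon:          # explore occasionally
            return random.choice(list(self.means))
        return max(self.means, key=self.means.get)  # else exploit best arm

    def update(self, arm, reward):
        self.counts[arm] += 1  # incremental update of the running mean
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

# Usage: simulate 1,000 hand-offs where the algorithm is slightly better.
random.seed(0)
bandit = HandoffBandit()
quality = {"human": 0.70, "algorithm": 0.78}
for _ in range(1000):
    arm = bandit.choose()
    bandit.update(arm, float(random.random() < quality[arm]))
print(bandit.means)  # the policy converges toward routing to the better arm
```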
Finally, in the governance sphere, we examine the use of AI for worker management under Spanish labor law. We identify the legal frameworks applicable across the AI system lifecycle, analyze the alignment between TAI principles, the EU AI Act, and employers' obligations under labor law, and highlight tensions such as the gap between correlation-based models and the legal requirement of causal justification for certain decisions.
Taken together, these contributions demonstrate that ensuring trustworthiness requires more than algorithmic improvements alone. Instead, trustworthiness must be understood as a sociotechnical property, emerging from the interaction of data, algorithms, institutions, and regulatory constraints. This thesis provides practical insights for researchers, practitioners, and policymakers seeking to develop AI systems that are technically robust, socially aligned, and legally compliant.