The Algorithmic Bias in Recommendation Systems and Its Social Impact on User Behavior

Keywords

Social Inequalities
Ethical Principles
Algorithmic Bias
Recommendation Systems

How to Cite

Liu, L. (2024). The Algorithmic Bias in Recommendation Systems and Its Social Impact on User Behavior: Algorithmic Bias in Recommendation Systems. International Theory and Practice in Humanities and Social Sciences, 1(1), 290–303. https://doi.org/10.70693/itphss.v1i1.204
Received 2024-11-25
Accepted 2024-11-27
Published 2024-12-17

Abstract

Algorithmic bias in recommendation systems poses significant challenges, influencing user experiences and perpetuating societal inequalities. This study provides a comprehensive analysis of the origins, impacts, and mitigation strategies of algorithmic bias in recommendation systems. By categorizing bias into data bias, model bias, and feedback loops, this research highlights the multifaceted nature of algorithmic bias and its implications for user behavior, including the formation of filter bubbles, decision-making distortions, and the reinforcement of social inequalities. The study employs a mixed-methods approach, integrating both theoretical analysis and empirical case studies from popular platforms such as Netflix, YouTube, and Amazon. These case studies illustrate the real-world implications of algorithmic bias and demonstrate the effectiveness of various mitigation strategies, including diversity optimization, transparency enhancement, and fairness-aware learning. The findings underscore the importance of a balanced approach that incorporates technical, ethical, and policy-based interventions to promote socially responsible recommendation systems.
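Among the mitigation strategies the abstract names, "diversity optimization" is the most directly algorithmic. A common instance of that idea (a general technique, not a method the paper itself specifies) is maximal-marginal-relevance re-ranking, which trades off an item's relevance against its similarity to items already recommended, counteracting the homogeneity that drives filter bubbles. The sketch below is illustrative; the `relevance` scores, `similarity` function, and trade-off weight `lam` are all placeholder assumptions.

```python
# Illustrative sketch (not from the paper): maximal marginal relevance (MMR)
# re-ranking, one common form of diversity optimization for recommenders.

def mmr_rerank(candidates, relevance, similarity, k, lam=0.7):
    """Greedily select k items, balancing relevance against similarity
    to items already selected. lam=1.0 reduces to pure relevance ranking."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            # Penalize items too similar to anything already recommended.
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: items 0 and 1 are near-duplicates, so with lam=0.5 the
# re-ranker promotes the less relevant but dissimilar item 2.
rel = {0: 0.9, 1: 0.85, 2: 0.6}
sim = lambda a, b: 1.0 if {a, b} == {0, 1} else 0.0
print(mmr_rerank([0, 1, 2], rel, sim, k=2, lam=0.5))  # -> [0, 2]
```

Lowering `lam` increases diversity at the cost of raw relevance; tuning that balance is exactly the technical–ethical trade-off the study argues must also be informed by policy considerations.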



Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright (c) 2024 Lingyuan Liu (Author)
