NO BLACK BOX: PROMOTING INCLUSION AND DEMOCRACY IN THE AGE OF ARTIFICIAL INTELLIGENCE

Monica Di Domenico, Giuseppina Rita Mangione, Elsa Maria Bruni

Abstract


This work examines how the growing role of artificial intelligence (AI) and algorithms in adjudicating rights and citizenship status is adding new layers of complexity to the contemporary concept of citizenship. Out of digital interactions mediated by algorithms, a new kind of citizenship takes shape: "Algorithmic Citizenship". The analysis delves into the threat of algorithmic discrimination, a phenomenon in which biased or incomplete training data exacerbates historical inequalities and injustices. It further introduces the risk of "algorithmic historical revisionism", underscoring how algorithms can distort our shared understanding of the past by presenting skewed historical information. The work emphasizes the importance of transparency and explainability in algorithms, so that users can understand the rationale behind algorithmic decisions and the data on which they rest. It also examines the challenges posed by the opacity of certain AI algorithms, often referred to as "black boxes", pointing out the need for transparent decision-making processes, especially in domains that demand clear explanations.
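To make the mechanism of algorithmic discrimination concrete, the following minimal Python sketch (an illustration added here, not taken from the article; the data are synthetic and the variable names hypothetical) shows how a model trained on historically biased approval decisions reproduces the same disparity through a proxy feature, even when the protected attribute is withheld from its inputs.

# Minimal sketch with synthetic data (assumption: not the article's own material).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected group membership (0 or 1) and a merit score that is, by construction,
# identically distributed across the two groups.
group = rng.integers(0, 2, size=n)
merit = rng.normal(0, 1, size=n)

# Historical labels: past decision-makers approved members of group 1 less often
# at the same merit level, so the bias is encoded in the "ground truth" the model sees.
historical_approval = (merit - 0.8 * group + rng.normal(0, 0.5, size=n)) > 0

# A proxy feature correlated with group (e.g., a neighbourhood indicator) leaks the
# protected attribute even though `group` itself is excluded from the inputs.
proxy = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([merit, proxy])

model = LogisticRegression().fit(X, historical_approval)
pred = model.predict(X)

# Demographic-parity check: selection rate for each group.
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: selection rate = {rate:.2f}")

The gap between the two printed selection rates mirrors the bias embedded in the training labels, which is precisely the kind of inherited inequality the abstract describes.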


Keywords


Algorithmic Citizenship, Artificial Intelligence, Bias, Education, Critical Thinking






DOI: https://doi.org/10.32043/gsd.v8i2.1144



Copyright (c) 2024 ITALIAN JOURNAL OF HEALTH EDUCATION, SPORT AND INCLUSIVE DIDACTICS

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Italian Journal of Health Education, Sports and Inclusive Didactics 
ISSN printed: 2532-3296