Towards a Multidisciplinary Framework for Ethical AI in Education

In user experience design, one of the first steps is understanding users' needs in order to create desirable, feasible, and viable products. But as we design and develop new technologies, that checklist is no longer enough. Now, more than ever, we need to talk about the ethics of technology, especially artificial intelligence.

During my master's, I enrolled in a Media Law course, where I noticed that some of the most significant gaps in US law concern the regulation and scrutiny of AI systems. This article explores what a multidisciplinary AI ethics framework in education would mean, and why we need one to transform the future of technology.


THE NEED FOR AI ETHICS

Ethics is a set of principles for evaluating what is morally right and what is morally wrong. When creating and deploying AI in our communities, we have to ask which guidelines we should follow and who gets to decide them. Alongside its many benefits, rapid AI adoption has also caused real-world harm: in Detroit, for example, facial recognition software led to a Black man being falsely arrested for a crime he did not commit [Allyn]. Cases like this show how quickly technology can amplify bias when no clear safeguards exist.

This rise in the use of AI systems across industry, government, and academia demands frameworks for evaluating fairness, privacy, and safety. The question is: where do we start?


AI ETHICS IN EDUCATION

The foundational step toward a responsible AI future is to integrate AI ethics into education. We must revisit the instruction that future generations of developers and designers receive on AI-related topics [Borenstein and Howard 62]. I earned a Bachelor of Engineering in Computer Science, and in my four years at university there was never a discussion of the possible harms of technology, nor a single mention of AI ethics. This gap is not unique to my experience: too often, ethics is treated as optional, or as "someone else's problem," in STEM curricula [63].

Even when ethics modules are included, they often lack epistemological depth and cross-disciplinary engagement. We need a substantively collaborative, holistic, and ethically generative pedagogy in AI education [Raji et al. 515]. Rather than offering AI ethics as a standalone elective, these frameworks should be woven into every step of STEM education. Students need to recognize the strengths and limitations of their AI models by framing problems in terms of real-world impact and asking challenging questions: Who benefits from this system? Who might be harmed? What biases are built into the data or the design?


MULTIDISCIPLINARY APPROACH

Current approaches to ethics education burden individual faculty with designing courses from scratch [516]. Shifting from quick ethics fixes to holistic, collaborative approaches that blend STEM with the humanities and social sciences through participatory methods may yield better frameworks. Raji et al. propose a pedagogical reset that values transversal problem framing, shared terminology, mixed methods, and inclusive co-creation [523]. Students in STEM programs must also acknowledge personal responsibility and own the impact of their work, regardless of their intent.


ETHICAL AI IN THE GLOBAL CONTEXT

Another significant challenge facing AI ethics education is the absence of a global perspective. Researchers and curriculum designers should study not just curricula from around the world but also how ethics itself is defined across cultures. Who decides what is ethical? Who decides what is fair? Current analyses of AI in a global context are biased toward Anglo-European ideas and perspectives.

A literature review suggests that AI is likely to have markedly different social impacts depending on the geographic setting, and that perceptions and understandings of AI are profoundly shaped by local cultural and social context [Hagerty and Rubinov 11]. Bringing these perspectives into ethics education is necessary to build frameworks that are detailed, well researched, relevant, and useful across different cultural contexts.


CONCLUSION

Based on this secondary research, I suggest that AI ethics frameworks in the curriculum follow a multidisciplinary model built on four elements: foundational values (human rights and equity), diverse governance mechanisms, pedagogical strategies (co-creation, plus real data sets and scenarios for teaching privacy, bias mitigation, and fairness), and stakeholder engagement (designers, developers, educators, and policymakers). Such a framework can help ensure fairness, accountability, transparency, inclusivity, and cultural sensitivity.

Works Cited
  • Allyn, Bobby. "'The Computer Got It Wrong': How Facial Recognition Led to False Arrest of Black Man." NPR, 2020.
  • Hagerty, Alexa, and Igor Rubinov. "Global AI Ethics: A Review of the Social Impacts and Ethical Implications of Artificial Intelligence." arXiv, 2019.
  • Borenstein, Jason, and Ayanna Howard. "Emerging Challenges in AI and the Need for AI Ethics Education." AI and Ethics, vol. 1, 2021, pp. 61–65.
  • Raji, Inioluwa Deborah, Morgan Klaus Scheuerman, and Razvan Amironesei. "You Can't Sit With Us: Exclusionary Pedagogy in AI Ethics Education." Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), ACM, 2021, pp. 515–525.
  • Goffi, Emmanuel R., and Aco Momčilović. "Respecting Cultural Diversity in Ethics Applied to AI: A New Approach for a Multicultural Governance." Revista Misión Jurídica, vol. 15, no. 23, July–Dec. 2022, pp. 111–122.
  • Holmes, Wayne, et al. “Ethics of AI in Education: Towards a Community-Wide Framework.” International Journal of Artificial Intelligence in Education, vol. 32, 2022, pp. 504–526.