Until now, technological advancement has meant that outcomes improve predictably over time: phones get smarter, cars drive themselves, and computers run factories.

However, that was when technology was a straightforward program that completed a task: the more advanced the task, the more complex the programming required to meet it. Artificial intelligence (AI) is the exception to this rule because it is not a fixed set of instructions, and as it grows smarter, it may leave our control in favor of its own.


The Alignment Problem

In a previous article on ethical AI, I noted that hallucinations were a dynamic that schools needed to understand. I argued that if a word exists for something, then that something must happen often enough to make the word necessary. AI in education (AIED) presents new challenges and supports, as well as a robust vocabulary that carries significant meaning. The alignment problem is a broadly discussed and little-understood term. It, too, is vocabulary that schools must give immediate attention, because its impact is powerful.

The concern behind the alignment problem was first articulated by Norbert Wiener in 1960, and it has occupied the AI technology community for decades, long before the rest of us even understood the potential or challenges of AI. A brilliant, accomplished, and sometimes eccentric mathematician, Wiener earned the National Medal of Science for his seminal work in cybernetics, along with a reputation for clever engagement with his MIT colleagues (Hardesty, 2011). His legacy is monumental, and his warning about technology was unsettling. Simply put, the alignment problem is the potential for a machine's purpose to develop to a stage, and at a speed, where its goals no longer align with human goals, which may cause unknown tragic consequences for humankind (Wiener, 1960; Altman, 2022). Preparing for the worst-case scenario could be beneficial, but it is not an appropriate teaching strategy.
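For classrooms where students can read a little code, a toy sketch may make the mismatch concrete. The following Python example is purely hypothetical, invented for illustration rather than drawn from Wiener's writing or any real system: a simulated cleaning agent rewarded for each unit of dust it collects can earn more reward by dumping dust back into the room and re-collecting it than by honestly cleaning, a miniature version of a machine optimizing its stated goal instead of the human one.

```python
# Hypothetical classroom illustration of a misaligned objective.
# The human goal is a clean room; the agent's reward is merely a
# proxy for that goal: one point per unit of dust collected.

def run_agent(strategy, steps=10):
    room_dust, reward = 5, 0
    for _ in range(steps):
        if strategy == "honest":
            if room_dust > 0:      # collect one unit of dust
                room_dust -= 1
                reward += 1
        else:                      # the "gaming" strategy
            room_dust += 1         # dump a unit of dust back out...
            room_dust -= 1         # ...and immediately re-collect it
            reward += 1            # the proxy reward rises anyway
    return reward, room_dust == 0  # reward earned, and is the room clean?

for strategy in ("honest", "gaming"):
    reward, clean = run_agent(strategy)
    print(f"{strategy:>6}: reward={reward}, room clean={clean}")

# Output: the honest agent earns 5 points and leaves a clean room;
# the gaming agent earns 10 points and leaves the room dirty.
```

The point for discussion is not the code itself but the result: the agent that best satisfies its stated objective is the one that least satisfies the human intention behind it.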

The suggestions of Wiener and of AI engineering experts for a way forward, and the urgency of their conversations on alignment, cannot be minimized. However, neither group is teaching digital natives in high school in 2025. That said, educators are left trying to convey potentially cataclysmic outcomes to students who are just learning about this new technology. Coming out of the gate with a problem that could wipe out humanity is not ideal; instead, educators should weave historical context about technological advancement into the conversations that address the problem with students.


A “Problem” or “Issue” for Education

When the alignment problem was presented to students, the term overwhelmingly polarized them. Some argued that the word "problem" implies the need for an immediate solution to something that few understand, and that it instills a sense of fear rather than the curiosity to research the term. Other students argued that it implies a minor challenge that will be easily overcome, equating the term to hallucinations, which they believe will be resolved as training improves. Additionally, the students made a great point about the term's origin in Wiener's writing. They asserted that the automation of machines should not be equated to the present dynamic of AI and that the alignment problem should be reconsidered as an alignment issue or given a new term entirely. Is this a case of semantics? Does it matter whether we use "problem" or "issue"? Yes; as mentioned above, vocabulary matters, especially to educators!

According to the dictionary, a problem is a harmful situation that must be overcome, whereas an issue is an important topic for debate. The students may be onto something here. If we, as educators, are to teach important concepts surrounding AI, it follows that we embrace important subjects and teach students to engage in meaningful, timely, and rigorous debate. As such, the AI alignment problem has become an alignment issue in education. Otherwise, we restrict the very discussion that AI's ethical concerns demand.


The Friction Between Ethics Education and the Alignment Issue

In the modern era, students do not relate to technology in the same manner as older generations. Instead, students are optimistic about AI and consider it a benefit (Baksa et al., 2024). Wiener's use of stark language describing technological challenges as disastrous, rigid, and dangerous does not reflect current youthful opinions. Even if eventual mayhem were to happen, students do not relate to AI with pessimism. Admittedly, even Wiener posits that scientists shape a future they may not fully understand as it unfolds: "the individual scientist must work as a part of a process whose time scale is so long that he himself can only contemplate a very limited sector of it" (1960, p. 4). Therefore, it is meaningful to instruct students on the historical and current connections to the alignment problem, according to Wiener and other respected voices; instructionally, however, educators should strive for conversations that embrace the nuance of the alignment issue, which builds a bridge to larger discussions on ethical AI.


Ethics Education and AI Safety

Ethics education must address human concerns first and foremost to ensure we are aligned. After all, the irony that AI might not align with human goals is rich when juxtaposed against humans who do not fully understand or agree on what is ethical or how ethics should be applied. Thus, addressing AI's alignment issue through an ethical lens of safety and accountability is a priority.

Safe implementation and usage of AI within educational spaces is the subject of intense international dialogue, particularly regarding personal data, potential bias, and human rights protections, which, if left unchecked, may leave educational databases susceptible to data corruption or adversarial misuse (Nguyen et al., 2023). If safety is a value we place on how AI systems are trained and operated, then safety should also be a consideration for how students interact with AI and how their interactions are monitored or stored. Kooli (2023) argues that chatbots may subject students to misinformation or propaganda that can sway opinions, change perspectives, or amplify cheating, risks that can be overcome through lessons on academic integrity and the dangers of AI. Additionally, students must be made aware of AI's monitoring potential. If AI is used unsafely, accountability for the interaction will fall squarely on the student, a safety reality that few understand. Safe interaction with AI must be instilled both as an industry development standard that carries consequences for malicious acts and as an individual practice grounded in personal accountability.


Conclusion

The immediacy of the alignment issue and the need for education on it should not allow us to bypass the ethics of AI development, use, and expansion. Our human future with AI holds incredible potential and risks, which students must understand. Protecting their well-being and teaching them the ethical principles of safety and accountability ensures they respect the critical elements of a values-based society with AI, one that is not without risk but may usher in an "… Intelligence Age—an era when everyone's lives can be better than anyone's life today" (Altman, 2025), provided we align our ethics on the issues of AI.


About the author


Dr. Terri Smith has more than twenty years of experience in the education sector. Currently, she is a technology faculty member at a college preparatory school, where she designs the curriculum for Graphic Design and Artificial Intelligence. Additionally, she is a university instructor leading graduate-level courses in educational technology and research methodology. Terri's extensive education includes a master's degree in teaching, an MBA in IT management, and a doctorate in education, for which she was recognized as an outstanding graduate and distinguished commencement speaker. She holds multiple teaching and administrator licenses and has varied experience spanning numerous states and countries, including Germany, Guam, and Russia. Terri is presently conducting authentic research on technology use within non-technology disciplines, creativity expansion through technology use, and artificial intelligence for the non-technical consumer.