Unless otherwise noted, content on this page was adapted from the Canvas course Navigating the Future: Open Education with Generative AI, developed and offered under the auspices of College of the Canyons, serving as Technical Assistance Provider, for the California Community Colleges Chancellor’s Office Zero Textbook Cost Degree Grant Program (April 2024), CC BY 4.0
Ethics and the use of AI are inherently and necessarily connected, and AI practices in education should begin and end with ethics. While we can likely agree that we should do all we can to create and use new technologies ethically, there is no silver bullet, particularly with a fast-moving target like AI. Nevertheless, it is imperative that we as educators try. Doing so ensures the most effective and responsible use of, and teaching about, these technologies.
Leon Furze, who studies the implications of Generative Artificial Intelligence on writing instruction and education, outlines nine areas of ethical concern:
While all of these issues are worth our attention, on this page, we'll take a closer look at three that are especially relevant to educators: Bias, Truth, and Copyright.
Not unique to Generative AI, algorithmic bias is “discrimination against one group over another due to the recommendations or predictions of a computer program” (Wood). It stems from hidden, structural biases in the data used as inputs to a program – and in the humans selecting those inputs. Biases can include assumptions based on such identities and criteria as race, gender, sex, disability, privilege (e.g., access to prior learning such as AP courses), or poverty. For example, if you ask a generative AI tool to create a picture of an entrepreneur, you will likely see more pictures featuring men than women, unless you specify “female entrepreneur.” You can use this explorer to see these biases at work.
💡 See: How AI reduces the world to stereotypes, by Victoria Turk – a visually stunning, effective (and somewhat alarming) look at bias in AI.
Generative AI has also demonstrated political bias, which "can be harder to detect and eradicate than gender or racial bias" (Motoki et al., 2023). Recent research found that in 100 randomized test-retests, ChatGPT favored one political party over another in several countries (see More human than human: measuring ChatGPT political bias).
Instructors should make students aware of the possibility of discrimination being programmed into AI and teach them that humans must be a part of the process to develop inputs used (with the recognition that humans may themselves perpetuate discriminatory practices through the data). As Jake Silberg and James Manyika of the McKinsey Global Institute suggest, "AI has the potential to help humans make fairer decisions—but only if we carefully work toward fairness in AI systems as well."
💡 See: Maha Bali's piece, What I Mean When I Say Critical AI Literacy, and her list of recommended resources on inequality and oppression created, exacerbated, or reproduced by AI and algorithms.
Verification rule of thumb: Only use generative AI writing when you have enough time and expertise to check the outputs and verify the validity and accuracy.
OpenAI's own messaging to educators acknowledges that AI content "might sound right but be wrong," offering misleading or incorrect information – sometimes called an AI "hallucination" – and that "verifying AI recommendations often requires a high degree of expertise." While AI's outputs can sound legitimate and highly convincing, they are, in essence, the product of algorithms predicting which words would make sense in response. In addition, numerous tests of AI outputs have demonstrated biased, one-sided, and stereotypical worldviews and cultural views (because these systems are trained on our own biased, one-sided human outputs – or, possibly, on the outputs of other biased AI).
If we ask AI something and have no way to check its answer, that is a problem.
Sometimes generative AI will be wrong, sometimes it will be biased, and ALWAYS, it will lack real understanding or intention.
Image generated by DALL-E on April 17, 2024, using the prompt "create an image of a female detective studying a paper trail for clues."
The legal questions surrounding AI and copyright are still being decided, and guidance for users is evolving. Many cases are currently before U.S. courts (as of early 2024), and so far there is only one formal decision by the U.S. Copyright Office and no new law. An Executive Order has been issued, but it does not answer the questions that arise about generative AI and copyright (nor, likely, does the White House have the power to do so).
Things to keep in mind as you consider using generative AI for developing OER:
Image generated by DALL-E on April 18, 2024, using the prompt "create an image of the scales of justice with a copyright symbol on one side of the scale and a graduation cap on the opposite side of the scale."