
Artificial Intelligence Toolkit for Faculty

Ethical Considerations: An Overview

Ethics and the use of AI are inherently and necessarily connected, and AI practices in education should begin and end with ethics. But while we likely can agree that we should do all we can to create and use new technologies in an ethical way, there is no silver bullet, particularly with the fast-moving target of AI. Nevertheless, it is imperative for us as educators to try. Doing so ensures the most effective and responsible use of, and teaching about, these technologies.

Leon Furze, who studies the implications of Generative Artificial Intelligence on writing instruction and education, outlines nine areas of ethical concern:

  1. Bias - AI data can lead to biased, discriminatory output.
  2. Environment - AI technology impacts the environment through mining, energy consumption, and waste. For example, AI tools can have significant environmental impacts, ranging from increased carbon dioxide emissions to growing e-waste to the ceding of human decision-making in areas like food production.
  3. Truth - AI raises concerns about plagiarism, cheating, and fake news. For example, Generative AI makes it easier to create deepfake videos, which can be used to exploit people's images and voices for nefarious purposes, from pornographic to political videos and audio.
  4. Copyright - AI can breach copyright laws and infringe intellectual property rights. 
  5. Privacy - AI raises concerns about personal data collection and surveillance. 
  6. Datafication - AI turns every part of our lives into data, raising concerns about privacy and exploitation.
  7. Affect Recognition - AI emotion detection raises concerns about accuracy, privacy, and discrimination. 
  8. Human Labor - AI raises concerns about job displacement and labor exploitation.
  9. Power - AI reinforces global power imbalances and structural inequalities.

While all of these issues are worth our attention, on this page, we'll take a closer look at three that are especially relevant to educators: Bias, Truth, and Copyright.

Deeper Dive: Bias

Image: Midjourney outputs depicting two of three secretaries as female and all three surgeons as male.

Not unique to Generative AI, algorithmic bias is "discrimination against one group over another due to the recommendations or predictions of a computer program" (Wood). It stems from hidden, structural biases in the data used as the inputs for a program, and in the humans selecting those inputs. Biases can include assumptions based on identities and criteria such as race, gender, sex, disability, privilege (e.g., access to prior learning such as AP courses), or poverty. For example, if you ask a generative AI tool to create a picture of an entrepreneur, you will likely see more pictures featuring men than women unless you specify "female entrepreneur." You can use this explorer to see these biases at work.

💡 See: How AI reduces the world to stereotypes, by Victoria Turk – a visually stunning, effective (and somewhat alarming) look at bias in AI.

Generative AI has also demonstrated political bias, which "can be harder to detect and eradicate than gender or racial bias" (Motoki et al., 2023). Recent research found that in 100 randomized test-retests, ChatGPT favored one political party over another in several countries (see More human than human: measuring ChatGPT political bias).

Strategies to Combat Generative AI Bias

Instructors should make students aware that discrimination can be programmed into AI and teach them that humans must be part of the process of developing the inputs used (while recognizing that humans may themselves perpetuate discriminatory practices through the data). As Jake Silberg and James Manyika of the McKinsey Global Institute suggest, "AI has the potential to help humans make fairer decisions—but only if we carefully work toward fairness in AI systems as well."

💡 See: Maha Bali's piece, What I Mean When I Say Critical AI Literacy, and her list of recommended resources on inequality and oppression created, exacerbated, or reproduced by AI and algorithms.

Telling the Truth: Fact Check and Verify

Be Skeptical

Verification rule of thumb: Only use generative AI writing when you have enough time and expertise to check the outputs and verify their validity and accuracy.

Why Verify? 

OpenAI's own messaging to educators acknowledges that AI content "might sound right but be wrong," offering misleading or incorrect information (sometimes called an AI "hallucination"), and that "verifying AI recommendations often requires a high degree of expertise." While AI outputs can sound legitimate and highly convincing, they are, in essence, the product of algorithms predicting which words would make sense in response. In addition, numerous tests of AI outputs have demonstrated biased, one-sided, and stereotypical views of the world and of cultures, because the models are trained on our own biased, one-sided human outputs, or possibly on other AIs' biased outputs.

So if we are asking AI something and we do not have a way to check its answer, that is problematic.

Two Questions We Should Ask 

  1. Do you have the time to evaluate the output?
  2. If you do not have the expertise, do you have another way to independently verify the output? 

Be Critical

Sometimes generative AI will be wrong, sometimes it will be biased, and ALWAYS, it will lack real understanding or intention. 

  • Language models are designed primarily to produce plausible outputs, not true ones.
  • Biases from all the text they were trained on are baked in and are impossible to entirely remove.
  • Language models generate a statistical model of patterns in language; there is no intention or comprehension behind the outputs, even though it might seem like there is.
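The point in the bullets above, that language models predict plausible continuations rather than true ones, can be illustrated with a toy bigram model. This is a deliberate oversimplification (real LLMs are neural networks trained on vast corpora, and the tiny "corpus" here is invented for demonstration), but the core behavior is the same: the model completes text with whatever is statistically most likely, with no notion of whether the resulting claim is true.

```python
from collections import Counter, defaultdict

# Invented toy corpus: contains both a true claim ("made of rock")
# and a false one ("made of cheese"). The model does not know the difference.
corpus = (
    "the moon is bright . the moon is made of rock . "
    "the moon is made of cheese . the sky is blue ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_plausible_next(word):
    """Return the most frequent continuation: plausibility, not truth."""
    return follows[word].most_common(1)[0][0]

# "is" was followed by "made" more often than by anything else in the
# training text, so the model predicts "made" regardless of what is true.
print(most_plausible_next("is"))
```

Scaled up by many orders of magnitude, this is why an LLM can fluently assert something false: the output is simply the continuation its training data makes most probable.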

Image generated by DALL-E on April 17, 2024, using the prompt "create an image of a female detective studying a paper trail for clues."

Copyright & Intellectual Property

The legal questions surrounding AI and copyright are still being decided, and guidance for users is evolving. There are many cases in the U.S. courts at this moment (early 2024), and there is only one formal decision by the U.S. Copyright Office and no new law. An Executive Order has been issued, but it does not (and the White House likely lacks the authority to) answer the copyright questions raised by generative AI.

Things to keep in mind as you consider using generative AI for developing OER:

  1. Whether training generative AI is in violation of copyright law is very unclear, with good arguments on both sides.
  2. The status of the generative AI materials is, provisionally, clear in the U.S.: they are all born into the public domain, according to the U.S. Copyright Office and one quite specific court decision.
  3. If those conclusions stand, educators (in the U.S.) can use generative AI materials as they would any other public domain materials and/or OER.
  4. Since all of the above is still in flux, educators should exercise caution when using GenAI materials, for example by including complete citations and attributions that can be used later if the law changes.
  5. Educators should keep an eye on the outcomes of the many cases about these issues which are in the courts at this time.

Image generated by DALL-E on April 18, 2024, using the prompt "create an image of the scales of justice with a copyright symbol on one side of the scale and a graduation cap on the opposite side of the scale."

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.