
AI in Higher Education

Provides explainers and coverage of emerging trends in the rapidly developing landscape of AI, with a focus on Large Language Models (LLMs), AI tools, AI literacy, and the ethical considerations of using (or not using) these tools.

What is the current ETAMU policy on student use of AI in the classroom?


[East Texas A&M University] acknowledges that there are legitimate uses of Artificial Intelligence, ChatBots, or other software that has the capacity to generate text, or suggest replacements for text beyond individual words, as determined by the instructor of the course.

Any use of such software must be documented. Any undocumented use of such software constitutes an instance of academic dishonesty (plagiarism).

Individual instructors may disallow entirely the use of such software for individual assignments or for the entire course. Students should be aware of such requirements and follow their instructors’ guidelines. If no instructions are provided the student should assume that the use of such software is disallowed.

In any case, students are fully responsible for the content of any assignment they submit, regardless of whether they used an AI in any way. This specifically includes cases in which the AI plagiarized another text or misrepresented sources.

East Texas A&M University (ETAMU) syllabus language for AI use in courses (drafted on May 25, 2023)

Adapting College Writing for the Age of Large Language Models Such as ChatGPT: Some Next Steps for Educators


Large language models (LLMs) such as ChatGPT are sophisticated statistical models that

  • predict probable word sequences in response to a prompt, even though they do not “understand” language in any human-like sense (a brief illustration follows this list);
  • deliver passages of text that resemble writing authored by humans; and
  • produce synthetic text that is not directly “plagiarized” from some original and is usually grammatically and syntactically well-crafted.
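
To make “predicting probable word sequences” concrete, here is a minimal Python sketch that lists the next-word probabilities a small open-source model assigns after a prompt. It is illustrative only: it assumes the freely available Hugging Face transformers and torch packages and the small GPT-2 model, and it does not reflect the internals of any particular commercial tool.

    # Illustrative sketch: show the most probable next tokens an LLM assigns
    # after a prompt. Assumes the "transformers" and "torch" packages are
    # installed; GPT-2 is used only because it is small and openly available.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The main cause of the French Revolution was"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, prompt_length, vocab_size)

    # Turn the scores for the position after the prompt into probabilities
    # and print the five most likely next tokens.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")

Generating a full passage is simply this step repeated: the model appends a likely token and then predicts again, with no built-in check on whether the resulting claims are true.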

From an academic integrity perspective, this means that “AI”-generated writing  

1) is not easily identifiable as such to the unpracticed eye;  

2) does not conform to “plagiarism” as that term is typically understood by teachers and students; and  

3) encourages students to think of writing as task-specific labor disconnected from learning and the application of critical thinking. 

Many teachers who assign writing are, thus, understandably concerned that students will use ChatGPT or other text generators to skip the learning and thinking around which their writing assignments are designed. 

In the long run, teachers need to help students develop a critical awareness of generative machine models:

  • how they work;
  • why their content is often biased, false, or simplistic; and
  • what their social, intellectual, and environmental implications might be.

But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction. (We offer a few solid links below.) In the meantime, the following practices should help to protect academic integrity and student learning. At least some of these practices might also enrich your teaching.  

AI Detection Tools


Although many tools on the market claim to reliably detect AI-generated text, these tools are often less accurate than they claim and can be fooled by students taking low-effort steps to "humanize" AI-generated text. Furthermore, many AI detectors have been shown to be more likely to falsely classify human-written text as AI-generated when it was written by a student whose first language is not English, increasing the likelihood that those students face false accusations of plagiarism. For more information, see the following video:



Updating Common Practices


  • Encourage intrinsic motivation. Most educators already strive to make their assignments engaging, but it’s worth emphasizing that students who feel connected to their writing will be less interested in outsourcing their work to an automated process.  
  • Highlight how the writing process helps students learn. Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking. Writing, reading, and research are entwined activities that help people to communicate more clearly, develop original thinking, evaluate claims, and form judgments.
  • Update academic integrity policies to make them explicit about the use of automated writing tools. Academic integrity policies and honor codes should specify what, if any, use of automated writing assistance is appropriate.
  • Ask students to affirm that their submissions are their own work and not that of another person or of any automated system. This practice has long been used to deter plagiarism and can be adapted to include text generation. Some instructors ask students to add the following statement along with their initials when they turn in written work: “I certify that this assignment represents my own work. I have not used any unauthorized or unacknowledged assistance or sources in completing it, including free or commercial systems or services offered on the internet.”

Recommended Practices


  • Assign prompts that generative AI systems are not good at. Some tasks are either impossible for AI models to perform reliably, or require the student to supervise and edit in ways that entail significant expertise, rhetorical skills, and time. As such, students who are simply eager to earn a good grade with minimal effort will likely find such assignments difficult to automate. These requirements may also make assignments more robust in other ways. These tasks vary by discipline, and AI systems are always evolving, so exploring and collaborating with others in your field of study or experimenting on your own is good practice.
  • Require verifiable sources. Currently, many generative AI tools fabricate sources and quotations, which may appear alongside authentic sources in AI-generated bibliographies or works cited lists. Students who use an AI model for research projects would need to find and input sources and quotations themselves and verify ALL information produced by the model.
  • Analysis of specifics from images, audio, or videos. Students would need to describe these kinds of media in detail in order to generate automated outputs about them.
  • Analysis that draws on class discussion. Assigning this criterion requires the student to input notes from class discussion, involving time and effort.
  • Assignments that articulate nuanced relationships between ideas. Such assignments could entail comparing two passages that students themselves choose from two assigned texts. Students might be asked to explain
    • why they chose these particular passages
    • how the chosen passages illuminate the whole of the texts from which they were excerpted
    • how the two passages compare according to instructions that bear on the course themes or content. 

LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that longer text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks down a task of logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 

  • Assign in-class writing as a supplement to or launching point for take-home assignments. Students may be more likely to complete an assignment without automated assistance if they have gotten started through in-class writing. (Note: In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities.)