AI in Higher Education

Provides explainers and coverage of emerging trends in the rapidly developing landscape of AI, with a focus on Large Language Models (LLMs), AI tools, AI literacy, and the ethical considerations of using (or not using) these tools.

What Is AI?


What is artificial intelligence?

There are many definitions of AI. Generally, AI refers to technology that can complete tasks that we think of as being "human", like recognizing images, making decisions, and producing speech or text.

Most of us use AI every day when we use Siri, Alexa, Google Maps, or Waze, or even when we get recommendations on what to watch on Netflix or listen to on Spotify.

What is ChatGPT?

ChatGPT is an AI platform developed by OpenAI that uses large neural networks that have been "trained" on extremely large volumes of text to predict and generate information in natural language. These kinds of models are called Large Language Models (LLMs). Other LLMs include Google Bard, Anthropic's Claude, BLOOM, and LLaMA.
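The core idea of "predicting the next word from patterns in training data" can be illustrated with a drastically simplified sketch. Real LLMs use neural networks with billions of parameters; the toy bigram model below (a hypothetical example, not how any real product is implemented) only counts which word tends to follow which in a tiny corpus:

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the web-scale text real LLMs use
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training corpus."""
    if word not in following:
        return None  # the model can say nothing about unseen words
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Even this toy model shows why the bigger systems can sound fluent while "knowing" nothing: it simply echoes statistical patterns in its training data, with no understanding of cats or mats.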

There are other types of AI that produce voice, photos, sound, and video. These are also becoming increasingly adept at producing and reproducing media that can be mistaken for human-created, "real" items.


Can I trust the information I get from AI like ChatGPT?


As with any source, you need to critically evaluate the information you get from AI platforms.

  • Limited knowledge - These AI applications have been trained on large amounts of data, but not all of that data is accurate, current, or reliable.
  • Lack of "common sense" - LLMs are essentially only capable of predicting words based on their training data. They are NOT capable of understanding or applying common sense to a situation.
  • Lack of context - AI applications provide information based only on the input you provide. They cannot provide contextual understanding.
  • Lack of accountability - AI applications can and do provide incorrect information. In AI systems that operate using a "Black Box" model, where the link between specific items in a training data set and decisions made by the system is obscured, even the technologists who create these systems may not know how the AI produces all of its output. 
  • "Hallucinations" (also known as "Confabulation") - Sometimes an AI application produces incorrect or fictional information, or creates output that is not based in reality. Because the AI produces text that sounds authoritative, it can be hard for a non-expert to recognize hallucination/confabulation. 

Although many AI applications can seem as if they are intelligent, they are really just highly sophisticated algorithms that can produce text, photos, video, and sound similar to content produced by human beings. It is important to remember that these applications are tools built on algorithms; they do not possess what we think of as human intelligence. However, continued development of AI platforms has the potential to yield more sophisticated reasoning.

This page is adapted from Atlantic Technological University's Chat GPT & AI LibGuide.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License