Andrew L. Bouwhuis, S.J. Library, Canisius University

AI Resources for Faculty: What are LLM AIs?

This guide is designed to help faculty understand Artificial Intelligence and use it effectively in their courses.

Understanding Generative AI

Generative AI chatbots are remarkably capable of producing lucid, grammatically correct text in human languages, and represent a powerful new technology, quite unlike many previous attempts at machine intelligence. For an explanation of how these chatbots’ underlying engines work, and how they came about, watch Dr. Justin Del Vecchio’s discussion of AI chatbots and their uses.

Within our cultural context of science fiction, humans interacting with AI chatbots often develop the sense that they are conversing with a thinking machine akin to C-3PO, the Geth, or HAL 9000. However, generative AIs are quite different from these fictional characters: the Google Gemini, ChatGPT, and Microsoft Copilot chatbots are simulations of persons, rather than the sentient, self-aware artificial automata of sci-fi lore. In COLI, we have taken to describing an LLM AI as a machine that simulates a person who knows what they are talking about, responding to questions.

Jay Peters at the technology journalism site The Verge has an excellent, brief primer on AI in late 2024 that introduces the common terms used in journalism, marketing, and social media surrounding AI.

Sample of ChatGPT Output, June 2023

[Image: ChatGPT introduces itself, June 5th, 2023]

LLM AIs’ nature as simulations both limits and augments their capabilities. While ChatGPT may not be able to pilot a drone aircraft or operate a robotic kitchen, it can simulate many styles of writing.

Prompt Engineering

Generative AIs can perform a wide variety of writing and image-creating tasks, but even subtle differences in how users write prompts can produce an almost infinite variety of responses.

This has led some educators to suggest that prompt engineering should itself be a skill and learning objective in various disciplines. The prompting process will likely change as well, so it is not clear what that skillset might look like in the future. But having students learn how to use AIs, and what to expect from them today, can be a valuable information literacy lesson. If students are directed to use an AI for an assignment, the professor should include instructions on how to get the best out of the AI, or explicitly require students to experiment with crafting different prompts.

The chatbot AIs’ penchant for varying responses, together with subtleties in prompt engineering, means that determining whether an AI wrote a particular text is complicated. Even with an identical prompt, the same AI will regenerate similar, but not identical, texts. Beyond that, it can be difficult to reproduce a given text without knowing the initial prompt that generated it.
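
One reason identical prompts yield different texts is that chatbots typically sample their next word from a probability distribution rather than always picking the single most likely one. The sketch below is a minimal, self-contained illustration of that sampling step; the candidate words and scores are invented for illustration and do not come from any real model.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample one token index from raw model scores (logits).

    Higher temperature flattens the distribution, making unlikely
    tokens more probable; lower temperature makes output nearly
    deterministic. This mirrors, in miniature, why the same prompt
    can produce different responses on each run.
    """
    rng = rng or random.Random()
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-word candidates after the prompt "The sky is"
tokens = ["blue", "clear", "falling", "aquamarine"]
logits = [4.0, 3.0, 1.0, 0.5]

rng = random.Random()
picks = [tokens[sample_with_temperature(logits, temperature=1.2, rng=rng)]
         for _ in range(5)]
print(picks)  # varies from run to run unless the random seed is fixed
```

Real chatbots repeat this sampling step for every word they emit, so small probability differences compound into visibly different responses.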

Hallucination

A major limit to the three widely available LLM AI chatbots is their capacity for mistakes or falsehoods. The AI chatbots present simulations of humans, and do not possess human capacities to judge what is correct or not. If an LLM AI is prompted to answer a question for which it does not have training data, it may decline to answer, or it may provide a plausible but fictional answer. These are what AI developers refer to as “hallucinations.” Some examples of these fictions include:

  • descriptions of a book whose text, or detailed summaries of it, are not in the AI’s training data. The AI might produce a plausible but false interpretation or summary based on the book’s title, or on whatever other information it has about the book’s subject.
  • scientific or engineering explanations of complex phenomena, such as rocketry.
  • explanations of its own processes. The AIs may reliably describe what they do in general terms, but suppose you ask one specifically how it generated a particular text: “How did you arrive at that solution?” “How did you determine that x is the case?” The AI cannot necessarily respond to these questions accurately, because it doesn’t reason so much as generate (more) plausible text. “If you ask it to explain why it wrote something, it will give you a plausible answer that is completely made up,” explains Professor Ethan Mollick. “When you ask it for its thought process, it is not interrogating its own actions, it is just generating text that sounds like it is doing so.” Mollick warns that, because of this, it is unlikely AIs can successfully interrogate their own biases.

We may say that LLM AIs are “making stuff up” or “getting it wrong,” but they are not malfunctioning. They simulate human composition. This is why companies like OpenAI and Anthropic warn that AI output is not, by itself, to be trusted, and should be verified through research techniques such as lateral reading. Experts disagree on whether hallucination can be eliminated, and OpenAI’s founder, Sam Altman, has personally warned against trusting AI outputs for factual information.

Whatever we do in our courses regarding LLM AIs, conveying to students the basic truth that these tools can smoothly invent things should be part of it. But that does not preclude the use of AI chatbots altogether.

What Is Or Is Not An AI?

This tentative guide is concerned with Large Language Model AIs, which are (at the time of writing) the most powerful artificial text generation tools available. Other tools, whether autocorrect, software that manages manufacturing supply chains, or even the non-playable antagonists and their henchmen in video games, can also be considered artificial intelligence. Arvind Narayanan and Sayash Kapoor comment that if a technology can do tasks that "require creative effort or training for a human to perform," can learn behaviors from data, and can make judgments or decisions autonomously, it qualifies as AI.

However, in COLI and ABL we expect to see LLM AI-powered tools and features appear across the internet, in software, and on mobile devices over the next several months and years. We anticipate that, in the near term, these will be the tools most relevant to academics.