Generative AI chatbots are remarkably capable of producing lucid, grammatically correct text in human languages, and they represent a powerful new technology, quite unlike many previous attempts at machine intelligence. For an explanation of how these chatbots’ underlying engines work, and how they came about, watch Dr. Justin Del Vecchio’s discussion of AI chatbots and their uses.
Within our cultural context of science fiction, humans interacting with AI chatbots often develop the sense that they are conversing with a thinking machine akin to C-3PO, the Geth, or HAL 9000. However, generative AIs are quite different from these fictional characters: the Google Gemini, ChatGPT, and Microsoft Copilot chatbots are simulations of persons, rather than the sentient, self-aware artificial automata of sci-fi lore. In COLI, we have taken to describing an LLM AI as a machine that simulates a person who knows what they are talking about, responding to questions.
Jay Peters at the technology journalism site The Verge published an excellent, brief primer on AI in late 2024 that introduces the common terms used in journalism, marketing, and social media surrounding AI.
LLM AIs’ nature as simulations both limits and augments their capabilities. While ChatGPT may not be able to pilot a drone aircraft or operate a robotic kitchen, it can simulate many styles of writing.
Generative AIs can perform a wide variety of writing and image-creation tasks, but even subtle differences in how users write prompts can produce an almost infinite variety of responses.
This has led some educators to suggest that prompt engineering should itself be a skill and learning objective in various disciplines. The prompting process will likely change as well, so it is not clear what that skillset might look like in the future. But having students learn how to use AIs today, and what to expect from them, can be a valuable information literacy lesson. Students may be directed to use an AI for an assignment, but the professor must include instructions on how to get the best out of the AI, or explicitly require students to experiment with crafting different prompts.
The chatbot AIs’ penchant for varying responses, together with subtleties in prompt engineering, means that efforts to determine whether an AI wrote a particular text are complicated. Even given an identical prompt, the same AI will generate similar, but not identical, texts each time. Beyond that, it can be difficult to reproduce a given text without knowing the initial prompt that produced it.
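Why identical prompts yield different outputs can be sketched with a toy example: chatbots choose each next word by sampling from a probability distribution over candidates, often scaled by a "temperature" setting. The Python sketch below uses made-up candidate words and scores (none of these values come from any real model) to show how repeated sampling from the same distribution produces varied picks, while a very low temperature makes the choice nearly deterministic:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Pick the next token by sampling from a temperature-scaled softmax.

    `logits` maps candidate tokens to raw scores. Higher temperature
    flattens the distribution (more varied output); lower temperature
    sharpens it (more repeatable output).
    """
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

# Illustrative scores a model might assign to the word following a prompt.
toy_logits = {"Paris": 3.0, "London": 2.0, "Berlin": 1.0}

# Sampling repeatedly from the *same* distribution gives varied picks,
# which is why the same prompt rarely yields the exact same response.
samples = [sample_next_token(toy_logits, temperature=1.0) for _ in range(10)]
```

At a near-zero temperature the highest-scoring candidate wins almost every time, which is why some AI tools expose a "creativity" or temperature control to trade consistency for variety.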
A major limit of the three widely available LLM AI chatbots is their capacity for mistakes or falsehoods. The AI chatbots present simulations of humans, and do not possess human capacities to judge what is correct. If an LLM AI is prompted to answer a question for which it does not have training data, it may decline to answer, or it may provide a plausible but fictional answer. These are what AI developers refer to as “hallucinations.” Examples of these fictions include invented citations to sources that do not exist, fabricated quotations, or confident answers to questions that have no factual basis.
We may say that LLM AIs “make stuff up” or “get it wrong,” but they are not malfunctioning. They simulate human composition. This is why companies like OpenAI and Anthropic provide warnings that AI output is not, by itself, to be trusted, and should be verified through research techniques such as lateral reading. Experts disagree on whether hallucination can be eliminated, and OpenAI’s founder, Sam Altman, has personally warned against trusting AI outputs for factual information.
Whatever we do in our courses regarding LLM AIs, conveying to students the basic truth that these tools can smoothly invent things should be part of it. But that truth does not preclude use of AI chatbots altogether.
This tentative guide is concerned with Large Language Model AIs, which are (at the time of writing) the most powerful artificial text generation tools available. Other tools, be they autocorrect, software that manages manufacturing supply chains, or even the non-playable antagonists and their henchmen in video games, can be considered artificial intelligence. Arvind Narayanan and Sayash Kapoor comment that if a technology can do tasks that "require creative effort or training for a human to perform," is capable of learning behaviors from data, and can make judgments or decisions autonomously, it qualifies as AI.
However, in COLI and ABL, we expect LLM AI-powered tools and features to appear across the internet, in software, and on mobile devices over the next several months and years. We anticipate that these will be the tools most relevant to academics in the near term.