AI Resources for Faculty: AI and Academics (Faculty)

This guide is designed to help faculty understand and effectively use Artificial Intelligence in their courses.

In the tabs below, faculty will find notes on the relationship between LLM AIs and college-level pedagogy. Our recommendations are necessarily tentative and subject to change, as researchers in various fields come to better understand how AIs operate, and as AIs frequently develop or improve capabilities.

We present some course-level considerations and starting points for thinking about AI and student use within classes. We describe possibilities for mitigating student misuse of AI, and how to detect or prevent cheating. We also suggest ways of having students use AI for certain learning tasks and even assignments. University education should be not just knowledge transfer, but also guidance for students as they determine who they want to become and cultivate learning skills and habits of mind. It is likely that they will use AI along that path; if they do not learn to use AI safely, ethically, and responsibly from academic intellectuals and practitioners, they will learn to use it elsewhere.

Each discipline, and each faculty member, will determine the extent to which LLM AIs compel adaptation or alteration of their curriculum and teaching methods. Here we offer generalizations that may help faculty consider AI's relationship to their courses, and how to adapt curriculum and pedagogy to the generative AI era.
Start by asking:

  1. Which tasks, methods, or ways of thinking that constitute our learning goals and objectives can AIs do well, and which can they not?
  2. Do we still need to assess our students’ abilities to perform tasks, methods, or ways of thinking that AIs can do reasonably well? If so, how can we do that without AI interference?
  3. Where might we incorporate AI into curriculum and assessment, in ways that reflect probable real-world use of AI? How might our students be required to use AI in their professional lives, and how might we prepare them for those tasks?

Ignoring AI is no longer a responsible option for faculty. It is widely available, increasingly part of professional practice in a variety of fields, and becoming more capable each day. In addition to our list of possible AI capabilities, watch as Jason Tangen at the University of Queensland summarizes AI’s abilities with respect to common undergraduate assignments:

You should add an AI clause to your syllabus. This indicates what role AI may or may not play in your course. Canisius University’s Academic Integrity Code establishes two things regarding AI in course work:

  1. In the absence of specific instructions from the course instructor, use of AI is “unauthorized assistance” and therefore prohibited.
  2. Exceptions to the above occur when instructors permit or direct students to use generative AI. Instructors choose when, where, and how much students may use AI for any specific task, assignment, activity, or procedure in a course.

Academic freedom resides in the second point: you determine the role of AI in your course. Your overarching course policy will in large part be determined by how you approach AI within lessons and assignments. For example, you may state generally that AI use is permissible with Assignment Type A, but not Type B. Or you may say that in assignments where AI use is permitted or encouraged, students should always be transparent, describing how they used AI and perhaps even linking to the chat as a citation.

This guide supplies a series of guidelines you may copy, modify, and otherwise use. If they do not amount to a complete policy, some may inspire a policy crafted specifically for your course.

No AI

The first level of the Furze et al. AIAS comprises assignments where AI use by students is prohibited or impractical. Examples may include:

  • In-class or proctored exams and quizzes.
  • In-class graded activities, such as group discussions producing notes or a document.
  • In-class laboratory experiments or role-playing exercises that are the basis for subsequent written reflections.
  • Video or audio projects that include the student, or content specific to the student’s life, location or situation.
  • Low-stakes assignments reviewing specific course content.

In some traditional assignments, it may not be practical to prohibit AI use by students: AI detection is not sufficiently comprehensive, or is too burdensome for faculty. The examples above generally require less complicated methods (such as proctoring) of preventing student AI use, or make AI largely irrelevant to the assignment.

Why No AI?

You may prohibit AI use in your course or on specific assignments for a variety of reasons.

If you can describe why AI threatens to undermine necessary processes for your students’ learning, be transparent about it to your students. Providing explicit rationale for assignments and how they work potentially motivates students to respect them.

If you are struggling to articulate reasons for prohibiting AI, revisit the relationship between your assignments and learning objectives. Do your assignments really ensure students are learning what they should learn? Might it be time to experiment with AI?

You may also discuss with students ethical considerations surrounding AI, including criticisms. In the video below, Professor Jon Ippolito of the University of Maine outlines various ethical, political, and social problems presented by generative AI. (See also the Learning With AI toolkit, developed by Ippolito and partners in the New Media program and the Center for Innovation in Teaching and Learning at the University of Maine.)

The problems described by Ippolito can be part of an explanation for why you don’t want AI used in a course, but they could also be the basis of a classroom demonstration, or even an assignment that has students experiment with AI’s negative potential.

If you elect to create assignments where AI is prohibited, you are assigning work where students are either prevented from using AI, or where AI is impractical or unhelpful for completing the assignment. In either case, you will need to monitor whether these assignments appropriately assess your curriculum and remain practically AI-proof.

In Class

You may create activities or student work assessed in the classroom, where students have less opportunity to use AI when it is prohibited.

Many disciplines continue to use in-class exams, where students write in pen or pencil on paper. Faculty have students prepare and present lectures or demonstrations, as individuals or groups, which assess disciplinary content and methods as well as presentation skills. In most respects AI does not present a problem for these assessments, although students can pay to access some AI tools that generate presentation scripts and slides. Newer variations on these methods could be innovative pedagogy. For example, professors could allow students to work through exam problems they got wrong, for partial additional credit, alone or in groups in the classroom.

In-class writing exercises can be an excellent assessment option. Simply having students write short reflective pieces and discuss them in small groups (“Think-pair-share” is only one form) can oblige students to consider lecture content or readings. Students may work in groups to develop a presentation or talking points that they then share with the rest of the class in a later phase. Students may perform research, composition, or calculation, while their professor walks around addressing questions and reviewing student efforts. The drafts students write in-class may later be polished and published in Google Docs or Microsoft Word. Quite apart from AI considerations, these active-learning methods are “flipped class” strategies, and alternatives to lectures.

Games and role-playing are interactive pedagogy, and have been part of some disciplines’ teaching for generations. In the liberal arts, the Reacting to the Past series is an example that engages students in a dynamic, unpredictable, but educational exercise. Other in-class games can demonstrate mathematical or statistical concepts, or strategic concepts in a resource-limited environment. (Philip Sabin’s Gotcha is an example.) These may be played or enacted during class, while requiring students to prepare beforehand. Or students may play games outside of class, video-record their activity, or prepare reports describing their experience. With appropriate instructions, such assignments may make AI use impractical.

As with any assessment method, faculty should consider the strengths and weaknesses of in-class activities. Do they assess all course learning goals, or all priorities useful for professional life (and not just school)? What skills or habits are exams, in-class collaboration, or presentations not effective for assessing? Will wearable technologies soon make proctoring exams more challenging or impractical?

On the other hand, in-class assessments might work in tandem with out-of-class work. Shorter assignments needn’t be AI-proof if the professor reminds students that the assignments are the best preparation for an upcoming in-class exam. If a student simply uses AI to prepare a brief for role-playing, they may be embarrassed among peers when the AI supplies bad information or a stilted script. So a course design may still feature many assessments where AI is accessible, but students realize it is impractical for the purpose.

Writing Assignments

Bowen and Watson, echoing faculty who have experimented with AI, note that AI chatbots perform at a B- or perhaps C level with respect to undergraduate writing. However, it can be challenging to determine whether, and to what extent, a student has used AI to complete an assignment. Here we point out a few techniques that make simple copy-and-paste use of AI text in assignments impractical:

  • Turnitin’s AI detector can often, though not always, detect AI-generated text within student submissions. Make clear to students that it is part of the course, demonstrate what it shows you, and plan to use other methods of AI detection or mitigation alongside it.
  • Require that students cite scholarly or specialized sources in a standard professional style, such as MLA or APA.
    • AI chatbots will frequently generate both accurate and bogus citations, and the latter often look suspicious to faculty at a glance. However, instructors should budget time to check citations.
    • AI chatbots may be more accurate in citing sources on the open internet, but these tend to be less scholarly or professional than those found, for example, in academic databases.
    • Books and most scholarly journals, even in digital form, retain traditional pagination, so citations should require page numbers.
    • Students might generate their own sources, such as a recorded interview, photographs, or other creations that can be submitted along with the writing assignment.
    • Make source types and citations threshold requirements: if a student neglects to provide them, the assignment is not accepted or graded, with late penalties for revision and an automatic zero after a set period.
  • Demonstrate to students how AI cannot do the work. Show examples in class of hallucinations, or just less-than-suitable responses. This can encourage students to use AI responsibly, or opt out of AI where it is inappropriate. In the process, students may learn broader insights into the capabilities and limits of generative AI.

Conclusion

There are means and reasons to prevent AI use. However, improvements to AI engines appear monthly, so elements of the above advice may become obsolete at any point. It is worth asking, too, whether preventing student AI use is the best use of our resources. If our disciplines or professions are, in several respects, adopting generative AI for certain productive methods, should we not teach students to use AI properly in those cases? If we don’t, are we leaving it to students to discover how to use AI, from sources good or bad?

Why Use AI?

The Furze et al. AI Assessment Scale (AIAS) provides a starting framework for thinking about AI use in classes and assignments. There are compelling reasons why, despite legitimate concerns about AI’s social, political, and environmental implications, we might have students incorporate AI chatbot text into assignment work.

In general, we might foster in our students (and ourselves) what Maha Bali calls “Critical AI Literacy,” or “understanding of how machine learning works, how generative AI tools are trained, and how to judge the quality of their outputs, in order to assess whether or not they are appropriate to use in a certain context.” Bali provides a practical metaphor for the AIAS: procuring cake. Must we bake every cake from scratch? Or at times do we use a cake mix, which speeds the process while leaving us the chance to personalize the cake with decoration? Perhaps we leave the work to the professionals, and hire a good bakery to bake a fine cake (since maybe our goal is to plan a special event, rather than simply bake the cake!) Bali wryly analogizes misuse and overreliance on AI: what we want least, perhaps, is our students purchasing poor-quality cakes!

How to use AI

Before crafting assignments that might employ AI, it pays to learn fundamentals for using AI chatbots, so that you know what to expect, and can provide your students with advice. Good sources to start with are:

  • COLI’s AI Prompting Guide
  • The documentation at the Chatbot websites.
  • Webinars, articles, and guides for education and specifically for your discipline. Some of the resources linked on the Sources page for this guide are a great place to start.

Disciplinary or Professional AI Use

Key to any use of AI in your course is understanding how your discipline or profession may use AI. Consult your academic journals, trade press, listservs or social media to discover how colleagues at other universities and in other sectors use AI.

By early 2023, a common view among those following LLM AI development was that prompting skills, or prompt engineering, would be an important learning goal for students: what to say to chatbots to get the best work from them. Although AIs have since become more capable of interpreting requests, this remains basically true, though it varies by discipline. If you determine how professionals in your field use AI, you can attempt to replicate that through assignments. On the other hand, you may discover that AIs are not in use for important tasks because they lack access to certain kinds of data (such as in databases) or require procedures AIs simply cannot (safely) perform.

AI in Writing Assignments

AI is likely to have the greatest impact in writing assignments, a broad category including everything from short reaction essays, to traditional term papers, to lab reports. Elsewhere we offer tips for making it impractical for students to simply copy, paste, and submit AI-generated text as assignment submissions. But it is likely that most students will experiment with AI in various phases of completing an assignment, whether their professors tell them to or not.

Jérémie Rostan calls classic research papers “AI-Blind Assessments,” where an AI might provide students help but it is difficult for an instructor to know, in the absence of students reporting it, how the students used AI. If students delegate most or all of the effort of completing traditional assignments to AI, it may leave clues, such as detectable AI text or bad citations. But preventing, detecting, and mitigating unauthorized AI use might mean more work for instructors.

It is worth reviewing, and possibly updating, why we assign writing in the AI era. What do our writing assignments accomplish? Do our assignments closely simulate a real-world product students should expect to produce later in their careers? If so, it may pay to investigate how AI might already be incorporated into professional practice, and try to build that into class assignments. Less vocationally, do we ask students to write so we may assess their understanding, grasp of process, or reflective capabilities? If so, we are probably using writing as a thinking process, and we might introduce AI into that process in some respects while obliging our students to do the harder work of creation and composition.

Following the Furze AI Assessment Scale (AIAS), we can consider how much or how little students employ AI in our assignments. AI can be helpful early on, without interfering with students’ obligation and ability to do the hard work of research, organizing, or drafting. AI might assist students in early brainstorming or in organizing their thoughts, through a dialog. In this early step the AI does not tell them what to write so much as suggest possibilities for honing a topic or organizing their efforts. This puts AI in the place of a peer with good advice on next steps. “The goal is not to have AI do the thinking,” Bowen and Watson comment, “but to have a dialogue that helps you think.”

For example, an undergraduate student may explain to ChatGPT or Claude that, assigned a paper in a U.S. history course, she is interested in researching a public health topic. The chatbot may offer suggestions on historical public health issues, events, or even debates among historians (historiography) concerning public health. The student might further discuss this with the chatbot to narrow a topic, for example from “vaccines” to “the polio vaccine.”

As a next step, the student might prompt ChatGPT for help getting started researching early polio vaccines in the United States. Depending on the topic, the student may need to ask a series of prompts to get the sources she particularly wants, such as books instead of just internet summary articles, or scholarly work instead of stories on Buzzfeed. ChatGPT might reply with a list of possible sources, ranging from excellent to irrelevant, and may even fabricate sources that sound plausible but do not exist. But if the student follows up on the suggestions, she will read some great sources while discarding unhelpful suggestions well before they could harm or impede her efforts. In this way, ChatGPT is likely better than a Google search.

An AI chatbot might even beat an academic database, since databases tend to be more useful for researchers who already have a good command of their professional literature. Students may consult a database such as JSTOR after having been given an article suggestion by the chatbot. (Occasionally ChatGPT explicitly recommends scholarly databases!)

If the student’s project includes archival research, virtual or local, the AI chatbot might suggest types of sources not obvious to the student, such as county or state public health bureau records, oral histories in historical society archives, transcripts of legislative hearings, or theses in University archives.

A chatbot AI cannot replace a real librarian, and frequently an AI will suggest a student consult their university library staff. But students can arrive armed with better questions, making a librarian consultation more productive.

With any of the above possibilities included in a scaffolded project, the professor might ask students to hand in a transcript of their chatbot conversation. This helps the instructor understand how students used chatbots, and could be the basis for continually improving the assignment.

Evaluating AI Work

A popular assignment type emerging across academia is having students assess AI work. Evaluating others’ work tends to sit higher on learning taxonomies, but the possibilities here can vary in challenge and complexity. The benefits of such assignments are that students learn about AIs as a component of information literacy, and learn to work within a discipline to evaluate information. Examples include:

  • A professor may present students with AI-composed text and ask students (as individuals or in groups, in class or as an outside-class writing assignment) to assess the AI work along several criteria.
  • An assignment may have students work with an AI to generate a response, perhaps through a chain or series of prompts, and then assess how well the AI is able to compose text or image content according to criteria.
  • The above two examples could involve a business or marketing plan, a military warning order, or an essay providing a critical reading of a Nathaniel Hawthorne novel.
  • Instead of merely providing evaluation, students may be asked to improve the AI work, perhaps simply to optimal or A-level quality, or even in gradations (C+, B, B+, A, etc.) to understand the important differences that characterize quality work.
  • Students might prompt the AI to impersonate a famous historical figure, and then assess the result through their own research into primary and secondary sources.

In all cases, students might employ analysis skills central to information literacy in our respective disciplines. Students will practice some form of lateral reading if they are fact-checking AI output or looking for its sources. Can students spot arguments or descriptions that have political implications, and so are not as “objective” as the AI’s tone might suggest? Can students spot errors or falsehoods? Can students employ web literacy and fact-checking skills to assess veracity, or cultural nuances within an AI’s version of a story? Are students able to spot writing styles that reflect class, race, ethnic, or other social identities in the chatbot’s default mode? A more complex question is how well different language styles and dialects fare in communicating with chatbots via prompts, reflecting perhaps the cultural and social limits of the underlying technology. Canisius Writing Center Director Dr. Graham Stowe comments that “Hegemonic and dominant linguistic systems are bound to be embedded in the systems that make the bots function,” and this can provide assignments that foster critical literacy among our students.

Hypothes.is has suggested prompts for students to analyze ChatGPT-composed essays. These prompts might work for, or inspire, similar assignments based around other AI chatbots as well.

One concern with AI evaluation assignments may be that a chatbot AI might produce biased or even racist or sexist composition. Particularly when impersonating historical figures, AIs might voice opinions and even use words that are considered harmful or at the very least inappropriate. Students should be warned that, although companies such as OpenAI and Anthropic are continually working to prevent AIs from presenting bias, it is not impossible for AIs to do this. As when students are assigned historical primary sources that may contain bias, AI bias might be a circumstance for classroom discussion, or student inquiry: what is the public discussion about AI bias? What sorts of bias, implicit or otherwise, inform AIs through their training data? When, or under what circumstances is it ever appropriate for an AI to present biased language?

Programming or Spreadsheet Chores

LLM AIs have some success in writing computer code, Excel formulae, and other digital coding for calculation or functions. Dr. Justin Del Vecchio, in Canisius University’s Cybersecurity program, conducted a series of tests and found that ChatGPT provided tremendous time efficiency in code-crafting. A professor may instruct students to experiment with AIs to move through processes more quickly, in support of activities that directly serve or assess other learning objectives. The point of the assignment may not be to develop a particular module in Python, or an array formula in Excel, but these may be necessary steps in completing that assignment. Why not have an AI do that step, while teaching students to incorporate the AI’s more or less generic (or flawed) suggestion into their own work with prudent evaluation or necessary modifications?

Students may also be asked to reverse engineer AI-generated code, both to understand coding by way of demonstration and to debug or assess problems with the code. AIs may generate code that simply fails to execute. Or AIs may generate executable code that nevertheless incorporates bad practices or inelegant processes. Students can become stronger in their coding skills by evaluating AI code, akin to peer review.
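As a minimal, hypothetical sketch of such a debugging exercise (the function names and the bug here are invented for illustration, not drawn from Dr. Del Vecchio's tests), a professor might hand students a short "AI-generated" routine that works in the common case but fails on an edge case, and ask them to diagnose and repair it:

```python
# Hypothetical exercise: students receive this "AI-generated" function
# and must explain why it fails before fixing it.

def average_scores(scores):
    """Intended to return the mean of a list of exam scores."""
    total = 0
    for s in scores:
        total += s
    return total / len(scores)  # Bug: ZeroDivisionError when scores is empty


# A corrected version a student might produce, making the edge case explicit:
def average_scores_fixed(scores):
    """Return the mean of a list of exam scores, or None if the list is empty."""
    if not scores:
        return None
    return sum(scores) / len(scores)


print(average_scores_fixed([80, 90, 100]))  # 90.0
print(average_scores_fixed([]))             # None
```

Beyond the fix itself, students can be asked to write up why the original failed and how they would prompt the AI differently to avoid the defect, practicing the peer-review habits described above.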

As Dr. Del Vecchio points out, these ideas mean that computer science students “will learn a new skill; how to create a proper set of instructions or requirements that AI agents, like ChatGPT, use to autogenerate code.” Programmers will combine this skill, and the AIs, with their own higher-order software development skills to create new kinds of software.

Homework Help, or, a Second Opinion

It’s worth considering that students likely use AI as web search and Wikipedia are commonly used, and as paper dictionaries and encyclopedias once were: to find a quick understanding of a word, phrase, concept, or process. Students may even realize (as savvy web-search and Wikipedia users do) that answers may be wrong, incomplete, or misleading. But if ChatGPT’s explanation of “commodity fetish” unlocks a student’s understanding of a text, the student will likely return to ChatGPT.

How can we capitalize on this? Can we develop safe ways to encourage students to use AI to further their understanding, while remaining properly critical about AI’s outcomes?

Assess Student-AI interaction

In all cases where students may use AI, a professor can require students to provide a transcript of their chat with the AI chatbot. Mike Kentz calls this the “grade-the-chat” method; whether or not the professor assigns a specific grade for the AI interaction, having students hand in their chat transcript offers various possibilities:

  • Providing evidence of how they used AI can complicate student attempts to use it inappropriately.
  • A professor can assess how students prompt the AI. This can help a professor teach students how to use AI in a discipline or practice.
  • Student prompts for the AI might reveal the student’s level of mastery of course content, concepts, and processes. In many complex jobs, a user cannot adequately prompt the AI if the user doesn’t understand what or how to ask.
  • If a professor has students reflect and comment on the chat, they may oblige students to consider how and why AI was potentially useful to them, or not.

Making chat transcripts part of the assignment can be built into scaffolding, or breaking a composition project down into steps or parts, such as brainstorming, outlining, research and annotated bibliography, note-taking, drafting, and so on. This shifts teaching writing from product to process, which can illustrate to students how to use AI safely, or reject it as not particularly effective or helpful. A simple example is Jason Guyla’s Portfolio Process.

The Golden Griffin Advantage

As a final consideration, Canisius might be in a better place to have students critically engage with generative artificial intelligence. Our smaller class sizes and liberal arts foundation mean that we teach skills, in student-faculty interactions, that are particularly suited to getting the most from AI while identifying and avoiding its drawbacks. “The more we can prepare students to question assumptions, analyze problems more deeply, tolerate the discomfort of ambiguity, find subproblems, and clearly reframe problems (all part of a classic liberal arts education),” Bowen and Watson point out, “the better prepared they will be not only for the first wave of AI-inspired jobs, but for the subsequent waves that are still unknown.”

Detecting and Mitigating AI Use by Students

In many cases, we may wish that students refrain from consulting chatbot AIs when attempting our assignments. In those circumstances, we should design assignment prompts, and support students in every way to encourage them to do the right thing. But it may be necessary to determine if students are attempting to complete assignments themselves, or are simply submitting AI-generated text as their own.

Was It Written by AI?

LLM AIs are designed to simulate people’s writing, but there are often signs that a text was written by AI:

  • Generative AIs will occasionally invent events, people, or other details as needed to plausibly simulate a person writing about a topic.
  • This can even include sources cited within the text. AIs might misattribute real quotes to the wrong author or speaker. Require direct citations of copyrighted scholarly content, including permalinks in databases such as JSTOR or the ProQuest series, and not just open-internet sources. Check these sources for accuracy.
  • AIs may become vague or evasive if they do not have access to sources sufficient to respond properly to a prompt. This tends to be a feature of weaker student writing too, but then the grade result is the same.
  • AIs may struggle to understand a prompt, rather more than most real people, and so may in effect answer the wrong question, in whole or in part. Again, this is not uncommon in student work.
  • AI chatbots can produce text unique in substance and style if the user invests time and effort into prompting. However, AIs might provide substantially similar answers to the same prompt, absent further prompting to modify, even weeks apart.

Turnitin and Other AI/Plagiarism Checkers

The popular plagiarism prevention and detection service Turnitin has a toolset for detecting AI-composed writing within student submissions. Turnitin’s tool (like many other AI and plagiarism checkers’) is itself powered by AI. A professor may activate Turnitin within their course dropboxes, and Turnitin’s AI detector will thereafter attempt to determine whether student work submitted to the dropbox is AI-generated.

Just as Turnitin itself does, we strongly recommend faculty follow up on any suspected unauthorized AI use with further steps. Turnitin’s detector is not foolproof, and AI detection technology should be only one of several methods instructors use to identify inappropriate AI use. Assignment design and discussion with students are strong preventative measures, as is considering how appropriate AI use might become part of assignments.

As an example, you can request from Helpdesk a faux student account and a D2L Sandbox course. Then, create a Dropbox and activate Turnitin. Copy and paste the output from ChatGPT located here:

https://chatgpt.com/share/67336d7d-2aac-8004-a863-4b7becbe1b8f

to a Word doc. Then upload it to the appropriate Dropbox as your faux student. You may be surprised at the results.

Version History in Google Docs

By collecting assignments in Google Docs and Drive, faculty can access a document’s Version History. This would indicate whether a student pasted in text as one or a few large blocks, rather than typing it themselves. This alone might discourage use of AI for writing assignments.

Follow Up: A Conversation

If you suspect a student has simply submitted AI-generated text where inappropriate in an assignment, a common first step is to meet with the student. In conversation, you can likely tell how familiar they are with what they submitted, its sources, evidence for claims, or even the basis for reflections.

Sources

Bali, Maha. “A Compassionate Approach to AI in Education.” Knowledge Maze. April 29, 2024. Accessed August 29, 2024.

Bowen, José Antonio and Edward Watson. Teaching with AI: A Practical Guide to a New Era of Human Learning. Johns Hopkins University Press, 2024.

Ellis, Cathy and Jason Lodge. “Stop Looking For Evidence of Cheating With AI and Start Looking For Evidence of Learning.” Linkedin. July 8, 2024. Accessed August 29, 2024.

Furze, Leon. “Updating the AI Assessment Scale,” Leonfurze.com. August 28, 2024. Accessed August 29, 2024.

Guyla, Jason. “Goodbye, Papers. Hello, AI-Powered Portfolios!” The AI Adventure. June 26, 2024. Accessed August 29, 2024.

Kentz, Mike. “Grading the Chats: The Good, The Bad, and The Ugly of Student AI Use.” AI EduPathways. July 21, 2024. Accessed August 19, 2024.

Learning With AI Toolset. University of Maine.

Peters, Jay. “AI is Confusing – Here’s Your Cheat Sheet.” The Verge. July 22, 2024. Accessed August 30, 2024.