Intentional Pedagogy with AI Technology
Educator responses seem to range from excitement to exasperation to exhaustion. As one might expect, there is a similarly wide range of integration at Brown and elsewhere. Some describe this as making a decision about “No AI, OK AI, [or] Go AI” (Report of the Yale Task Force on Artificial Intelligence, 2024), while Perkins et al. (2024) describe instructor choice on a five-point scale:
- No AI: AI must not be used at any point of the assignment.
- AI-Assisted Idea Generation and Structuring: AI can be used for brainstorming and generating ideas but AI content is not allowed in the final assignment submission.
- AI-Assisted Editing: AI can be used for improvements to clarity. Original work should be included in an appendix.
- AI Task Completion, Human Evaluation: Students are asked to use AI to complete elements of the task, often by evaluating or commenting on AI-generated content. AI-generated content is attributed or cited.
- Full AI: AI is fully utilized for the assignment and the instructor specifically notes that attribution is not required.
The good news embedded in all of this choice is that good teaching practices (such as those that support critical thinking, knowledge acquisition, and skill proficiency) already position us to productively engage with AI. Additionally, Brown’s student population is well-positioned for this conversation: as students who have chosen an Open Curriculum, they are intentional participants in their intellectual and personal growth.
Professor Steven Lubar (American Studies, History, and History of Art and Architecture) illustrates the process of developing an intentional pedagogy for use of AI:
In my Methods in Public Humanities (PHUM2020) course, I am, with some trepidation, encouraging students to use ChatGPT and similar tools. These new tools will be useful to them in their work after Brown, and we should help them learn to use them wisely.
Because of quickly changing norms and the range of teaching practices around AI, there are real and practical considerations to take into account as we move forward. This resource addresses three areas we encourage you to consider as you think through your approach with students this term.
Designing writing assignments
Students’ submission of AI-generated texts is among educators’ top concerns. Like the calculator and Wikipedia, AI-assisted writing seems likely to impact teaching practices for years to come. We ask students to write in our classes because we believe there is something of value in that exercise that cannot be found in multiple choice or other types of assessments. Designing writing prompts and grading the resulting work is hard, so it is helpful to first spend time reflecting on why you assign writing.
Thinking about that “why” of writing will help you develop and potentially scaffold better writing assignments, create effective rubrics, and set productive expectations for students, even in large classes. You might also reflect on authentic assignment options best suited to the learning outcomes, because student motivation is negatively associated with ChatGPT use in writing (Playfoot, Quigley, & Thomas, 2024).
We encourage you to communicate your pedagogical rationale to students, including the importance of writing in your own education, profession, and life. You might do this in a class conversation, syllabus statement, or extra reading. It does not need to be long, but it does need to help students understand that writing in your class actively contributes to their learning and development in our community. Without such explicit framing, students can easily forget that writing is more than just a task when it is actually a way of learning, of thinking, and of doing.
Turnitin’s DraftCoach can also be used as a learning tool to help students develop awareness of plagiarism norms and skills in areas like citation and paraphrasing. This can also open up space for conversations around disciplinary norms in citations and literature reviews.
For graduate writers, Georgia Tech’s “Effective and Responsible Use of AI in Research” offers helpful guidance, framed by questions that see AI as a generative and brainstorming tool, such as “How can students use AI effectively as a tool to help generate research ideas and approaches?” However, it also addresses data sharing cautions, asking, “Is a student giving away valuable ideas or research results to an open platform (like ChatGPT) before the topic is peer reviewed and published? Will you lose your intellectual property rights, such as patents?” Students should also note that thoughts and perspectives on the use of generative AI can vary significantly across disciplines. Students should consult with their advisor(s) and Director of Graduate Studies to discuss discipline-specific expectations.
Academic integrity
AI-generated text has not historically been a part of how educators have thought about plagiarism. Because of changing norms and the wide variety of instructional practice, it is essential for instructors to be explicit to students about their own expectations.
For both undergraduates and graduate students, Brown’s academic code states,
A student’s name on any exercise (e.g., a theme, report, notebook, performance, computer program, course paper, quiz, or examination) is regarded as assurance that the exercise is the result of the student’s own thoughts and study, stated in his or her own words, and produced without assistance, except as quotation marks, references, and footnotes acknowledge the use of printed sources or other outside help.
Within this framing, consider how you might update your classroom guidelines or develop a new AI policy for students that details how they should, might, or cannot engage with it. While we certainly hope all students have read the academic code, they may not understand how the use of generative AI might complicate the assurance of one’s own work and/or require disclosure as outside help.
For example, Professor Monica Linden (Neuroscience) created a new syllabus statement about the ethical and effective use of ChatGPT in her course, NEUR 1930L: Neurobiology of Love. (See Prof. Linden's full original statement in this blog post.) Recently, she updated the statement to add:
I think we will have the opportunity to creatively use ChatGPT and other generative AI products in this class. However, it is important that we use them responsibly. You should not be using ChatGPT to do the assignments for you. Every assignment is intentionally developing skills aligned with the course learning objectives. If you use ChatGPT instead of doing the work yourself, you are cheating yourself out of this opportunity to learn. If you aren’t here to learn, this class isn’t for you.
(Also, this class is about love and attachment. Are you really going to let an AI-bot try to understand that for you??)
Professor Tara Nummedal’s (History and Center for Digital Scholarship) approach also places bounds on the use of AI:
We are all learning to understand how, when, and whether to use generative artificial intelligence (AI) in our teaching, learning, and research. I recommend that you read this brief introduction from the Library to better understand how tools like ChatGPT work (and don't work), especially the page about the difference between chatbots and search engines. In particular, please be aware of the phenomenon of "hallucinations," that is, AI's surprisingly common habit of generating plausible but false information.
I have tried out ChatGPT with a range of queries related to this class, and concluded that our course material is too niche to make AI an effective research tool. Interestingly, it does tend to surface (that is, produce!) some common misperceptions about natural knowledge in early modern Europe as well as some of the deep master narratives that this course is designed to challenge. This is fascinating in its own right, especially since knowledge-making is a core theme of our work. In this class, therefore, we occasionally may experiment with ChatGPT together.
However, because of my reservations and because learning how "to craft, develop, articulate, and sustain an argument in a written paper, supporting claims with either original research or examples from lectures and assigned readings" is one of our learning outcomes, students will not be permitted to use ChatGPT or similar tools in the graded writing assignments (including the midterm) submitted for this class. AI-generated submissions, even if properly cited, will be treated as plagiarism and a violation of the Academic Code. If you want to propose a project incorporating AI, however, please come talk with me about permission to obtain an exception to this rule.
In contrast, a model syllabus statement for Brown’s Master in Technology Leadership Program allows instructors class-level discretion on use of AI. However, if the tools are used, they should be cited through the following conventions:
- Clearly identify the use of AI-based tools in the work. Any work that utilizes AI-based tools must be clearly marked as such, including the specific tool(s) used. For example, if you use ChatGPT-3, you must cite it as: ChatGPT-3. (YYYY, Month DD of query). "Text of your query." Generated using OpenAI. https://chat.openai.com/
- Be transparent in how the AI-based tool was used, including what work is the student’s original contribution.
- Ensure the use of AI-based tools does not violate any copyright or intellectual property laws.
(Curricular statement for the Master in Technology Leadership Program)
Additional options for citation may include asking students to attach a log of prompts used with the AI tool. Bowen & Watson (2024) offer a number of ways to request AI acknowledgements, such as, “I used AI to do an outline/first draft which I then edited. Describe the nature of your contribution:” or “I used AI/friends/tutor to help me generate ideas. Describe that process:”.
Other examples of syllabus statements can be found on the Sheridan Center's Creating a Brown University Syllabus page (see "Syllabus Statements Addressing Emergent Issues"). Even though it is past the first day of class, you can still publish an official update of your syllabus with revised language to reflect these changes, and ongoing classroom discussions around specific assignments are also useful. A Sheridan Center newsletter on Inclusive Practices for Addressing Academic Integrity offers additional ideas. Please reach out to Sheridan_Center@brown.edu if we can facilitate the development of this statement.
Ethics of use and cost
In addition to academic integrity, discussions with students about AI might include ethical issues such as authorship, data acquisition, labor issues, student privacy, and performance stability. While popular chatbot tools offer free versions, users who can afford the paid versions gain a competitive edge through enhanced features like faster response times, advanced data analysis, unlimited queries, and better graphic content creation. The option of enhanced services by fee threatens to exacerbate existing inequities.
Consider how you might address issues of use, cost, authorship, and ethics with your students. For example, Professor Shriram Krishnamurthi (Computer Science) encourages use of GPT in CSCI 1710 (Logic for Systems) but requires students to work through a series of guided questions first, some of which address ethical and privacy-related issues. (Professor Krishnamurthi notes that others are welcome to use and adapt this prose, with attribution.) Such prompted reflection can productively address these issues and potential implications for students as individuals and as members of a learning community dedicated to advancing diversity, equity, and inclusion.
While generative AI offers new and exciting possibilities across campus and disciplines, instructor attitudes toward its integration into student work vary considerably, even within shared departments or disciplines. Clear, articulated guidelines will support your students’ learning most effectively.
Other resources to support Brown instructors and students include:
- Consultation: Please contact Sheridan_Center@brown.edu if you would like a confidential consultation on implications for your own teaching.
- Roundtable Recording: In March 2023, the Sheridan Center hosted a forum for instructors to discuss views and practices on AI in the classroom. Members of the Brown community may view a recording of this session by completing this brief request form.
- Canvas Commons Module for Students: Sheridan Center for Teaching and Learning and Brown University Library staff have designed a Canvas Commons resource on artificial intelligence ready for import into Canvas course sites. This learning resource helps students to develop an understanding of generative artificial intelligence technology, the social impact of this emerging technology, and the impact it may have on the writing process. Instructors who wish to use this Canvas Commons module can do so by using the "Commons" on the left menu of their Canvas page and then looking for "Writing and Citing Critically: An AI Guide for Informed Students." While the module designers would encourage you to use all six sections (Introduction, Generative AI, Social Impact, AI and Research, AI Impact on Writing, and Conclusion), faculty are able to select only specific modules for import.
- Coming Soon! A Canvas module for faculty on a range of approaches to AI in assignment design.
References
Bowen, J., & Watson, E. (2024). Teaching with AI: A Practical Guide to Human Learning. Baltimore, MD: Johns Hopkins University Press.
Georgia Tech Office of Graduate and Postdoctoral Education. (2024, July 10). Effective and Responsible Use of AI in Research: Guidance for Performing Graduate Research and in Writing Dissertations, Theses, and Manuscripts for Publications. Available: https://grad.gatech.edu/ai-research-and-teaching
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(6).
Playfoot, D., Quigley, M., & Thomas, A. G. (2024). Hey ChatGPT, give me a title for a paper about degree apathy and student use of AI for assignment writing. The Internet and Higher Education, 62. https://doi.org/10.1016/j.iheduc.2024.100950
Report of the Yale Task Force on Artificial Intelligence. (2024, June 18). Available: https://provost.yale.edu/news/report-yale-task-force-artificial-intelligence
This resource was originally authored in January 2023 by Dr. Jenna Morton-Aiken, Senior Associate Director for Writing and English Language Support; and Lecturer, Department of English. The resource was updated in September 2024 by Mary Wright. Thank you to Kristi Kaeppel, Anne Kerkian, and Christine Baumgarthuber for feedback on drafts and discussions of the topic.
Drafted February 2023; updated September 2024