Whether or not your program has integrated artificial intelligence into learning, your students have access to generative AI tools, and many already use them. For faculty and administrators, the rapid evolution of these tools may spark excitement about their potential to enhance learning. At the same time, it raises concerns about academic integrity and critical thinking. Both responses are valid, and both point to the importance of an open dialogue with students about the place of AI in their education.
While many educators sense how urgent these conversations have become, they are often unsure how to begin. If you or your faculty members are in this position, explore these strategies for talking with students about AI use.
The AI talk should happen sooner rather than later, even if the picture of AI’s place in your program is still coming into focus. There are three reasons for this urgency:
Once your faculty members are on the same page about the need to discuss AI use in academics, you can implement a strategy for talking about AI with students. Keep these 10 tips in mind for a productive conversation on student AI use.
Administrators and faculty should experiment with and learn about AI tools firsthand to bring an informed perspective to the conversation. That said, there’s no pressure to become an expert. It’s sufficient to know the main tools available, some principles for generating helpful prompts, how other institutions have incorporated AI into academics, and some background information about how AI models are trained. Modeling responsible curiosity in learning about AI can be valuable, so emphasize to students that faculty members can offer guidance and explain institutional policies, but are also on the learning journey alongside them.
Approach the AI talk with empathy and open-mindedness. Some students may find AI intimidating and confusing. Others are eager adopters but may worry about disciplinary action if they believe your institution opposes all generative AI use. To meet students where they are, distribute anonymous surveys with questions like:
Student responses to these questions can help you understand the experience students already have and the questions they bring. These insights can help structure the conversation and highlight the resources students need to navigate AI. To get the most from these questionnaires, consider using student survey software, which can increase response rates and visualize the data.
Whether faculty approve of a given tool or not, it’s best to assume students know about or will discover it. This makes it crucial to acknowledge current and emerging AI tools they may encounter, while pointing out the strengths, weaknesses, and risks of each. Students are more likely to avoid a tool your institution disapproves of if they know about it, understand why they should steer clear of it, and can access approved alternatives.
Depending on a program’s learning outcomes, faculty and students can discuss various use cases for AI to enhance the learning experience. When students understand these and how they relate to generative AI’s strengths and limitations, they are more likely to avoid improper uses. AI use cases to consider for your program include:
Help students recognize improper AI use cases, as well as the shortcomings AI tools have even with appropriate use. A sober perspective on AI should acknowledge:
Give students opportunities to learn about AI through guided experimentation. One type of experiment involves asking students to critique AI responses to in-class tasks, similar to constructive peer-to-peer feedback. For example, a math class could ask AI to solve a problem and show its work. Students would then need to check whether the steps are sound and whether the conclusion is accurate. If not, the teacher can ask students to explain where the tool went wrong. This is also an opportunity to discuss how large language models work and what causes their computational shortcomings.
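To make the exercise concrete, here is a hypothetical exchange, invented for illustration rather than drawn from any particular tool. Suppose students ask an AI tool to solve 3(x − 2) = 12 and it responds: 3x − 2 = 12, so 3x = 14, so x = 14/3. Checking each step exposes a distribution error in the first move: 3(x − 2) expands to 3x − 6, not 3x − 2, so the correct chain is 3x − 6 = 12, then 3x = 18, then x = 6. Substituting the tool’s answer back into the original equation (3(14/3 − 2) = 8, not 12) confirms the mistake, and explaining why it happened is exactly the kind of reasoning the exercise is meant to build.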
An English class could apply the same principle by, for example, asking AI to write sonnets in the styles of Shakespeare and Petrarch. Students could then discuss whether the AI followed the conventions of each form and captured each writer’s voice, challenging themselves to improve on the AI-generated sonnets.
Leverage multiple, diverse perspectives to define AI’s place in a program. This can be a dynamic, ongoing process of stakeholder engagement, including input from faculty members, students, and industry partners. One way to do this is through a collaborative, outcome-oriented approach to learning.
For example, let students know the intended learning outcomes for an upcoming assignment. One such outcome could be developing their research skills for finding and analyzing sources. The lecturer could distribute a questionnaire or facilitate a class discussion about how AI tools could work for or against this outcome. Some students may propose using AI as a search tool to create a list of potential sources. This would be an opportunity for the lecturer to discuss the risk of AI hallucinations and how this use case could undermine the intended outcome.
Other students might suggest finding their own sources and then asking an AI tool to summarize the research methodologies and group similar studies for comparison. This may be a more viable use case. Either way, this kind of engagement promotes creative and critical thinking about AI’s potential uses.
Technological innovation and ongoing stakeholder engagement mean any detailed code of ethics for AI use must be a living document. That said, defining ethical boundaries is vital for guiding students in responsible AI use and holding them accountable. An AI ethics code should address:
Once your institution has a preliminary code for AI ethics, best practices, and disciplinary procedures, gradual integration can begin. This could start with responsible AI workshops and the incorporation of AI into in-class exercises.
As AI integration proceeds, invite lecturers and students to share their reflections through surveys and midterm course evaluations. Engaging with diverse perspectives on how AI integration can improve helps each program adapt to meet student needs. This may mean introducing new use cases, creating helpful resources, or updating disciplinary policies.
Navigating the AI conversation with students requires balancing openness to technological innovations with an abiding commitment to student success. As your institution considers how to integrate AI into learning, the Watermark Educational Impact Suite can provide a solid software foundation.
EIS is a centralized system of software solutions for higher education. It integrates with your preferred learning management system (LMS) and other Watermark products to provide valuable insights for assessment, accreditation, curriculum management, student success, surveys, and more.
As you explore what your students know about AI, try using Watermark Course Evaluations & Surveys within the EIS to make questionnaires accessible and improve response rates by 70 percent or more. If you want to model positive AI use, lean on Watermark Student Success & Engagement’s AI-powered predictive analytics to support the students who need help most. These are just two of the tools the Watermark EIS offers you for harnessing data-driven insights as you consider AI’s potential place in your programs.
Request a free demo of the Watermark EIS to explore how it can support your institution’s success.