While many of our conversations have focused on what generative AI means for student assignments and learning outcomes, there’s another question faculty are asking—often individually and quietly: How can we leverage AI in our own academic and administrative work? And more importantly, should we?
The answer, I believe, lies in using AI to help clear space for the work only we can do—the collaboration, connection, and critical guidance that makes education transformative.
That doesn’t mean simply using AI as a crutch for answering emails or summarizing meetings. In fact, I believe the true promise of AI comes from using it, in Ethan Mollick’s words, as a “genuine intellectual partner,” one that can enhance classroom discussions, assist with creating engaging instructional materials, and even help develop sophisticated problem sets or simulations that previously required extensive preparation time. As Mollick puts it, “the focus needs to move from task automation to capability augmentation.”
AI offers many potential applications for faculty work. While we should continue to prioritize human connection, empathy, and support in our teaching practice, we also need to consider other ways AI can augment our work. One promising area is course design: the assignments and activities that chart student progress across content and outcomes. But rather than asking AI to write prompts or notes for us, we can use it as a tool to test and strengthen the work we design ourselves, sometimes in surprising ways.
Works in Theory, Wobbles in Practice
We’ve all fallen in love with that one key discussion question or written assignment prompt, only to watch it fizzle in the classroom. Despite our best intentions, we may not provide enough information, or we may fail to anticipate a blind spot that leads students down fruitless paths. One of the challenges of course design is that all our work can seem perfectly clear and effective when we are knee-deep in the design process, but everything somehow falls apart when deployed in the wild. From simple misunderstandings to complex misconceptions, these issues typically don’t reveal themselves until we see actual student work—often when it’s too late to prevent frustration.
Bridging this gap requires iterative refinement—recognizing that what works in theory or in controlled conditions needs real-world testing, adaptation, and continuous improvement. It’s not just about designing something that works in the lab but ensuring our designs are resilient, adaptable, and responsive enough to thrive in the wild.
While there’s no substitute for real-world testing, I began wondering if AI could help with this iterative refinement. I didn’t want AI to refine or tweak my prompts. I wanted to see if I could task AI with modeling hundreds of student responses to my prompts, in the hope that this process might yield the kind of insight I was too close to see.
The Process: AI-Assisted Assignment Stress Testing
After experimenting with systems like Claude and ChatGPT, I’ve discovered they can effectively analyze and refine writing prompts through the creation of simulated student responses. The basic approach works like this: first, provide the AI with information about your course and the key characteristics of your student population; then, share the assignment prompt. The AI internally generates multiple simulated student responses across different skill levels and finally provides a comprehensive analysis identifying potential issues and opportunities.
You might specify that the analysis include common misinterpretations students might make, or any structural or organizational challenges in the prompt. The AI can also identify content development patterns and population-specific concerns based on your student demographics, and it can even suggest refinements to the prompt.
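If you’re comfortable with a bit of scripting, the same conversation can also be run programmatically. The sketch below is a minimal illustration using the Anthropic Python SDK; the course description, model ID, and wording of the request are placeholder assumptions of mine, and the same pattern works with ChatGPT’s API or any comparable system.

```python
# Illustrative sketch only: one way to script the stress test with the Anthropic
# Python SDK. The course details, model ID, and analysis request below are
# placeholder assumptions -- adapt them to your own assignment and platform.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

course_context = (
    "Course: first-year writing, fully online. "
    "Students: a mix of traditional first-year students and adult career-changers, "
    "many returning to school after years in the workforce."
)

assignment_prompt = (
    "Write a personal narrative that connects your life experiences "
    "to your academic goals and choice of major."
)

stress_test_request = f"""You are helping a faculty member stress-test an assignment before students see it.

Course and student population:
{course_context}

Assignment prompt:
{assignment_prompt}

Internally generate multiple simulated student responses across different skill
levels. Then report only your analysis: (1) common misinterpretations students
might make, (2) structural or organizational challenges in the prompt,
(3) population-specific concerns based on these demographics, and
(4) suggested refinements to the prompt."""

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model ID; substitute whatever is current
    max_tokens=2000,
    messages=[{"role": "user", "content": stress_test_request}],
)

print(message.content[0].text)
```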
Seeing What You’re Not Seeing
To test this approach, I uploaded a personal narrative prompt that asks students to connect their life experiences to their academic goals—a common assignment in first-year writing courses.
The AI analysis revealed several blind spots in my prompt design. For instance, I hadn’t considered how non-traditional students might struggle with “choice of major” language, since many are career-changers. The AI-modeled responses also revealed that students might have difficulty transitioning between the personal narrative and academic analysis sections. Most valuable was seeing how different student populations might interpret the same instructions: career-changers might focus too heavily on work experiences, while others might struggle with how much personal information to share. These insights allowed me to add clarifying language and support materials before any real students encountered these challenges.
The entire process took about 30 minutes but potentially saved hours of student confusion and faculty clarification emails. Of course, AI responses aren’t identical to human student responses, and we should be cautious about viewing AI as an infallible expert or source of absolute truth. But used as an additional lens when developing assignments, this approach can grant course designers a different perspective, one that triggers valuable insights and potentially reduces workload.
If you’d like to try this approach yourself, here’s a template prompt you can use with AI systems.
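Drawing on the steps above, a version of that template might read:

I teach [course name and level] to [key characteristics of your student population]. Below is an assignment prompt I plan to use: [paste the assignment prompt here]. Internally generate multiple simulated student responses across different skill levels. Then give me a comprehensive analysis that identifies common misinterpretations students might make, structural or organizational challenges in the prompt, content development patterns and potential issues, population-specific concerns based on my student demographics, and suggested refinements to the prompt.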
Course Design Multiplier
This process allowed me to develop targeted support materials for predicted problem areas before students ever struggled, building proactive scaffolding into course design from the beginning. And by sharing insights gained through AI analysis, departments could collectively improve assignment design practices—particularly valuable for multi-section courses where consistency matters. Over time, we could build a practical library of “what works” that faculty could draw from, including analyses explaining why certain assignments succeed with particular student populations and learning objectives.
AI-assisted assignment analysis offers a promising tool that respects our expertise while expanding our ability to anticipate student needs. While the technology isn’t perfect and will never replace insights gained from direct student interaction, it provides a valuable perspective that helps identify blind spots before students encounter them. This represents just one way thoughtfully implemented AI can help us do more of what matters: creating meaningful learning experiences. By using AI for the predictive work of assignment design, we free more time and energy for the deeply human work of guiding and connecting with our students—the work that only we can do.
Dr. Nathan Pritts is a leader in higher education, specializing in faculty development, instructional innovation, and the integration of emerging technologies in teaching and learning. As Professor and Program Chair for First Year Writing at the University of Arizona Global Campus, he has spearheaded initiatives in the strategic implementation of online learning technologies, comprehensive faculty training programs, and the creation of scalable interventions to support both faculty and students in online environments. As an author and researcher, Dr. Pritts has published widely on topics including digital pedagogy, AI-enhanced curriculum design, assessment strategies, and the future of higher education.