Abstract
Leadership education scholarship increasingly calls for direct assessment of leadership behaviors rather than reliance on proxy measures (Banks et al., 2023). This poster reports a validation study situated in a project-based leadership course at a midsized university. While teams meet regularly throughout the semester-long project, three meetings are digitally recorded, transcribed, and analyzed to provide students with objective feedback based on the content of their communication. Team meetings are coded using an a priori framework grounded in emotional intelligence (Salovey & Mayer, 1990; Lee & Wong, 2017), with codes reflecting self-awareness, self-regulation, self-motivation, social awareness, and relational management as enacted in talk and interaction. Because manual coding is slow and limits timely formative feedback, we use a large language model (LLM) for semantic, context-sensitive coding aligned to the codebook. Using the turn of talk as the unit of analysis, human criterion labels were established through double-coding (assessed with Cohen's kappa) and three-coder consensus. The LLM receives code definitions, positive and negative exemplars, and a deterministic prompt template, and outputs code assignments, brief rationales, and an uncertainty flag that triggers human-in-the-loop review. We compare LLM outputs to human consensus labels using agreement metrics. Findings will inform a replicable workflow for scalable, behavior-based leadership assessment and feedback.
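The agreement metric named in the abstract, Cohen's kappa, corrects observed rater agreement for agreement expected by chance. A minimal sketch of the computation follows; the turn labels and code names below are purely illustrative placeholders, not data or codes from the study, and the study's actual analysis pipeline is not described in the abstract.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters labeling the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected if the raters labeled independently.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence: sum over codes of
    # (rater A's marginal proportion) * (rater B's marginal proportion).
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters used one code throughout
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels for ten turns of talk (illustrative only).
human = ["self-awareness", "self-regulation", "social-awareness",
         "self-awareness", "relational-management", "self-regulation",
         "self-awareness", "social-awareness", "self-motivation",
         "self-regulation"]
llm = ["self-awareness", "self-regulation", "social-awareness",
       "self-regulation", "relational-management", "self-regulation",
       "self-awareness", "social-awareness", "self-motivation",
       "self-awareness"]

print(round(cohens_kappa(human, llm), 3))  # prints 0.737
```

Here observed agreement is 0.8 (8 of 10 turns match) and chance agreement is 0.24, giving kappa = (0.8 - 0.24) / 0.76 ≈ 0.737. In practice a validation study would also report kappa per code and route low-agreement or flagged turns to human review.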
Faculty Advisor
Brent Goertzen
Department/Program
Leadership
Submission Type
in-person poster
Date
4-13-2026
Rights
Copyright the Author(s)
Recommended Citation
Hesford, Jace; Moy, Magdalene; Will, Ryan; and Goertzen, Brent (2026) "Validating LLM-Assisted Coding of Student Leadership Teams for Formative Feedback," SACAD: Scholarly Activities: Vol. 2026, Article 88.
Available at: https://scholars.fhsu.edu/sacad/vol2026/iss2026/88
Included in
Educational Technology Commons, Leadership Studies Commons, Scholarship of Teaching and Learning Commons