For the last few months we've been working on a simple interface for teachers or questionnaire administrators to label or categorise student responses to questions. Perhaps unsurprisingly, the first real-world application has been applying the technique to free-text responses from student evaluation of teaching (SET) questionnaires.
We've written up the results of our initial explorations in our paper: McDonald, J., Moskal, A., Goodchild, A., Stein, S. & Terry, S. (2019). Advancing text-analysis to tap into the student voice: a proof-of-concept study. Assessment & Evaluation in Higher Education.
The full article is available at: https://www.tandfonline.com/eprint/azwTbS9CkSFmxZnJXcWy/full?target=10.1080/02602938.2019.1614524
We reproduce the article abstract below:
Student evaluations of teaching and courses (SETs) are part of the fabric of tertiary education and quantitative ratings derived from SETs are highly valued by tertiary institutions. However, many staff do not engage meaningfully with SETs, especially if the process of analysing student feedback is cumbersome or time-consuming. To address this issue, we describe a proof-of-concept study to automate aspects of analysing student free text responses to questions. Using Quantext text analysis software, we summarise and categorise student free text responses to two questions posed as part of a larger research project which explored student perceptions of SETs. We compare human analysis of student responses with automated methods and identify some key reasons why students do not complete SETs. We conclude that the text analytic tools in Quantext have an important role in assisting teaching staff with the rigorous analysis and interpretation of SETs and that keeping teachers and students at the centre of the evaluation process is key.