Beyond the Mean: FAU Researchers Offer New Way to Read Course Evaluations
Thursday, May 14, 2026
Course evaluations are among the most common forms of data collected in higher education, yet the way those data are analyzed often falls short. A new study by researchers in the Department of Educational Leadership and Research Methodology contends that traditional summaries—means, response counts and percentages—can obscure important information about how students actually respond to Likert-type items in course evaluations and other surveys.
In a paper presented at the 2026 meeting of the American Educational Research Association, Professor Emeritus John D. Morris, Ph.D., Associate Professor Mary G. Lieberman, Ed.D., and Associate Professor Maria D. Vásquez-Colina, Ph.D., introduce an objective method for examining polarity and disagreement in Likert-type response distributions, demonstrating that these characteristics can be tested directly using response percentages alone.
Likert-type items are frequently used to gauge whether respondents hold positive or negative views about a given statement. However, this binary framing overlooks whether responses lean significantly in one direction and whether respondents meaningfully disagree with one another.
“Interpreting Likert responses has traditionally relied upon subjective assumptions about response levels,” Morris said. “An objective method makes survey results, including course evaluations, more accurate and informative.”
These analyses are not available in any commercial software. To address this shortfall, the team developed a first-of-its-kind Excel-based program that calculates exact or multinomial-equivalent probabilities, along with effect size estimates. The spreadsheet enables testing of individual items and overall response patterns across multiple items, providing an analytical depth not available in routine evaluation reports.
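The authors' Excel program is not reproduced here, but the general idea of testing polarity directly from response counts can be illustrated with a minimal Python sketch. This example assumes a five-point Likert item and uses a simple exact sign test (neutral responses dropped, negative vs. positive responses compared against a 50/50 null); it is an illustration of the kind of exact test the article describes, not the authors' actual procedure, and the function name and details are hypothetical.

```python
from math import comb

def exact_polarity_test(counts):
    """Exact sign test for polarity on a 5-point Likert item.

    counts: [SD, D, N, A, SA] response counts. Neutral (N) responses
    are dropped; under the null of no polarity, a non-neutral response
    is equally likely to be negative or positive (p = 0.5).
    Returns a two-sided exact binomial p-value.
    (Illustrative only -- not the published spreadsheet's method.)
    """
    neg = counts[0] + counts[1]   # strongly disagree + disagree
    pos = counts[3] + counts[4]   # agree + strongly agree
    n = neg + pos
    if n == 0:
        return 1.0                # all neutral: no evidence of polarity
    k = max(neg, pos)
    # exact upper-tail probability P(X >= k) under Binomial(n, 0.5),
    # doubled for a two-sided test and capped at 1
    upper = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * upper)

# A seemingly favorable item: most students agree, few disagree
print(exact_polarity_test([2, 3, 5, 10, 15]))   # small p: responses lean positive
# A polarized item with the same mean can look very different
print(exact_polarity_test([10, 0, 0, 0, 10]))   # p = 1.0: no net lean, but strong disagreement
```

The second call shows why polarity alone is not enough: the split 10/10 item has no directional lean, yet its responses pile up at the extremes, the kind of disagreement the study argues routine mean-based reports miss.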
Using instructor course evaluations as an example, the study shows how items with similar means can yield very different interpretations once asymmetry and disagreement are examined. In some cases, items that appear favorable on average reveal substantial disagreement among students, an insight that may be lost in standard reporting.
While course evaluations are used to illustrate the method, the researchers emphasize that the technique applies to any Likert-type item in educational, psychological or social science research. By moving beyond surface-level summaries, the approach encourages more careful, transparent interpretation of survey data, particularly when evaluation results inform high-stakes decisions.