Using Indirect vs. Direct Measures in the Summative Assessment of Student Learning in Higher Education

Christine Luce
Jean P. Kirnan

Abstract

Contradictory results have been reported regarding the accuracy of various methods used to assess student learning in higher education. The current study examined student learning outcomes across a multi-section, multi-instructor psychology research course using both indirect and direct assessments in a sample of 67 undergraduate students. The indirect method measured students' perceived knowledge of and ability with course topics, while the direct method measured actual knowledge, with students answering test questions or solving problems that reflected course content. Both measures independently demonstrated gains from pretest to posttest; however, the indirect measure did not correlate with final course grades. Results also showed that respondents scoring lower on the direct measure were overconfident in their perceived knowledge and ability (as measured by the indirect score), consistent with the Dunning-Kruger effect. Based on our findings, we concluded that the indirect method was not an accurate measure of student learning, although it may have value as an instructional tool.

Article Details

How to Cite
Luce, C., & Kirnan, J. P. (2016). Using Indirect vs. Direct Measures in the Summative Assessment of Student Learning in Higher Education. Journal of the Scholarship of Teaching and Learning, 16(4), 75–91. https://doi.org/10.14434/josotl.v16i4.19371

References

Banta, T. W. (2004). Developing assessment methods at classroom, unit, and university-wide levels. Paper prepared for the Scottish Higher Education Agency. Retrieved October 30, 2012, from http://www.bmcc.cuny.edu/iresearch/upload/Banta.pdf

Bell, P., and Volckmann, D. (2011). Knowledge surveys in general chemistry: Confidence, overconfidence, and performance. Journal of Chemical Education, 88(11), 1469-1476.

Bowers, N., Brandon, M., and Hill, C. (2005). The use of a knowledge survey as an indicator of student learning in an introductory biology course. CBE Life Sciences Education, 4(4), 311-322. doi:10.1187/cbe.04-11-0056.

Clauss, J., and Geedey, K. (2010). Knowledge surveys: Students’ ability to self-assess. Journal of the Scholarship of Teaching and Learning, 10(2), 14-24.

Cross, K. P., and Angelo, T. A. (1988). Classroom assessment techniques: A handbook for faculty. Ann Arbor, MI: National Center for Research to Improve Postsecondary Teaching and Learning, University of Michigan.

Eder, D. (2010, October). Closing the loop: No money, no time, and I’m expected to assess, too? Presented at the Assessment Institute, Indianapolis, IN.

Fort, A. O. (2011). Learning about learning. Liberal Education, (Winter), 56-60.

Hacker, D.J., Bol, L., Horgan, D. D., and Rakow, A. R. (2000). Test prediction and performance in a classroom context. Journal of Educational Psychology, 92(1), 160-170.

Huffman, L., Adamapoulos, A., Merdock, G., Cole, A., and McDermid, R. (2011). Strategies to motivate students for program assessment. Educational Assessment, 16, 90-103.

Kruger, J., and Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.

Kuh, G. D., Jankowski, N., Ikenberry, S. O., and Kinzie, J. (2014). Knowing what students know and can do: The current state of student learning outcomes assessment in US colleges and universities. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA).

Liu, O. L., Bridgeman, B., and Adler, R. M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352-362.

Mabe, P. A., and West, S. G. (1982). Validity of self-evaluation of ability: A review and meta-analysis. Journal of Applied Psychology, 67(3), 280-296. doi: 10.1037/0021-9010.67.3.280

Martell, K. (2007). Assessing student learning: Are business schools making the grade? Journal of Education for Business, 82(4), 189-195.

Nuhfer, E., and Knipp, D. (2003). The knowledge survey: A tool for all reasons. Retrieved October 31, 2015, from http://pachyderm.cdl.edu/elixr-stories/resource-documents/knowledgesurvey/KS_a_too_for_all_reasons.pdf

Office of the Provost. (2015). Outcomes assessment. University of Wisconsin-Madison. Retrieved October 31, 2015, from http://www.provost.wisc.edu/assessment/manual/manual2.html#a3

Pedersen, D. E., and White, F. (2011). Using a value-added approach to assess the sociology major. Teaching Sociology, 39(2), 138-149.

Price, B. A., and Randall, C. H. (2008). Assessing learning outcomes in quantitative courses: Using embedded questions for direct assessment. Journal of Education for Business, 83(5), 288-294.

Rampell, C. (2011, July 14). The history of college grade inflation. The New York Times. Retrieved October 31, 2015, from http://economix.blogs.nytimes.com/2011/07/14/the-history-of-college-grade-inflation/

The College of New Jersey. (2015). Psychology major & specializations. Retrieved October 31, 2015, from http://psychology.pages.tcnj.edu/academic-programs/psychology-major-specializations/

Wirth, K. R., and Perkins, D. (2005). Knowledge surveys: An indispensable course design and assessment tool. Innovations in the Scholarship of Teaching and Learning. Retrieved from www.macalester.edu/geology/wirth/WirthPerkinsKS.pdf