ASSESSMENT OF RUBRIC-BASED EVALUATION BY NONPARAMETRIC MULTIPLE COMPARISONS IN FIRST-YEAR EDUCATION IN A JAPANESE UNIVERSITY

Authors

  • Yasuo Nakata, Faculty of Health Sciences, Kobe Tokiwa University, Kobe, Japan
  • Yasuhiro Kozaki, Faculty of Education, Osaka Kyoiku University, Osaka, Japan; The Center for Early Childhood Development, Education, and Policy Research, The University of Tokyo, Tokyo, Japan
  • Taion Kunisaki, Faculty of Education, Kobe Tokiwa University, Kobe, Japan
  • Tetsuhiro Gozu, Faculty of Education, Kobe Tokiwa University, Kobe, Japan
  • Kenya Bannaka, Department of Oral Health, Kobe Tokiwa College, Kobe, Japan
  • Kunihiko Takamatsu, Faculty of Health Sciences, Kobe Tokiwa University, Kobe, Japan; Center for the Promotion of Excellence in Research and Development of Higher Education, Kobe Tokiwa University, Kobe, Japan; Life Science Center, Kobe Tokiwa University, Kobe, Japan

DOI:

https://doi.org/10.20319/pijss.2018.41.631641

Keywords:

Normalizing Rubric Evaluation, First-Year Education, Nonparametric Multiple Comparison, Steel-Dwass Estimation

Abstract

Rubrics have become a widely referenced and utilized form of assessment on campuses internationally. A rubric can be an asset in any classroom and at any education level, but it needs to be implemented correctly. Our research question in this study is whether students were evaluated consistently and equally from teacher to teacher under a rubric. To answer this question, we performed statistical estimation using nonparametric multiple comparisons. This article reports on a normalizing rubric evaluation by nonparametric multiple comparisons in a first-year course called “Manaburu I” offered at Kobe Tokiwa University. “Manaburu” is a word we coined from the Japanese “manabu” (‘to learn’) and the English suffix “-able”; thus, “Manaburu I” refers to Self-Directed Learning I. In the course, about 20 teachers teach about 350 students (16–17 students per teacher), and students are organized into groups of about six. It is naturally difficult for 20 teachers to evaluate their students consistently among themselves, which makes this course an appropriate site for the analysis. We constructed a rubric for the course, under which teachers were meant to evaluate students, and presented it to both teachers and students. We then tested whether teachers evaluated students consistently and equally using the Steel–Dwass method, a strict statistical estimation method for nonparametric multiple comparisons. The results show that the teachers did not evaluate students equally. Suggestions for future research include more attention to validity and reliability, a closer focus on learning, and further research on rubric use in higher education.
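
The abstract does not reproduce the analysis itself, but the kind of all-pairs nonparametric comparison it describes can be sketched in a few lines of Python. The sketch below uses pairwise Mann–Whitney U tests with a Holm correction as a stand-in for the Steel–Dwass procedure; the teacher groups and rubric scores are synthetic placeholders, not the study data.

    import itertools
    import numpy as np
    from scipy.stats import mannwhitneyu

    # Hypothetical rubric scores for three of the roughly 20 teachers' groups.
    # In the study, each teacher scored their own 16-17 students on the rubric.
    rng = np.random.default_rng(0)
    scores = {
        "teacher_A": rng.integers(2, 6, size=17),  # rubric levels 2-5
        "teacher_B": rng.integers(1, 5, size=16),
        "teacher_C": rng.integers(3, 6, size=17),
    }

    # All-pairs nonparametric comparison: a Mann-Whitney U test for each pair
    # of teachers, followed by a Holm step-down correction to control the
    # family-wise error rate across the comparisons.
    pairs = list(itertools.combinations(scores, 2))
    raw_p = []
    for g1, g2 in pairs:
        _, p = mannwhitneyu(scores[g1], scores[g2], alternative="two-sided")
        raw_p.append(p)

    # Holm adjustment: multiply the k-th smallest p-value by (m - k + 1),
    # enforce monotonicity, and cap at 1.
    order = np.argsort(raw_p)
    m = len(raw_p)
    adj_p = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * raw_p[idx])
        adj_p[idx] = min(1.0, running_max)

    for (g1, g2), p in zip(pairs, adj_p):
        verdict = "differs" if p < 0.05 else "no evidence of difference"
        print(f"{g1} vs {g2}: adjusted p = {p:.3f} ({verdict})")

Dedicated implementations of the Steel–Dwass (Dwass–Steel–Critchlow–Fligner) test, which ranks each pair of groups separately and refers the statistic to the studentized range distribution, are available in common statistical packages; the Holm-corrected sketch above only approximates that family-wise error control.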

Published

2018-05-22

How to Cite

Nakata, Y., Kozaki, Y., Kunisaki, T., Gozu, T., Bannaka, K., & Takamatsu, K. (2018). ASSESSMENT OF RUBRIC-BASED EVALUATION BY NONPARAMETRIC MULTIPLE COMPARISONS IN FIRST-YEAR EDUCATION IN A JAPANESE UNIVERSITY. PEOPLE: International Journal of Social Sciences, 4(1), 631–641. https://doi.org/10.20319/pijss.2018.41.631641