Comparison of Web-based and Face-to-face Standard Setting Using the Angoff Method

Affiliations

  • Educational Testing Service, MS 16-R, 660 Rosedale Road Princeton, NJ 08541, United States

Abstract


Web-based standard setting holds promise for reducing the travel and logistical inconveniences of traditional, face-to-face standard setting meetings. However, because there are few published reports of setting standards via remote meeting technology, little is known about the practical potential of the approach, including the technical feasibility of implementing common standard setting methods and whether such an approach presents threats to validity and reliability. In previous work, we demonstrated the feasibility of implementing a modified Angoff methodology in a virtual environment (Katz, Tannenbaum, & Kannan, 2009). This paper presents results from two studies in which we compare cutscores set through face-to-face meetings and through the web-based approach on two operational tests, one of digital literacy and one of French.

Keywords

Angoff Methodology, Distance-based meetings, Standard Setting, Virtual meetings


References


  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  • Buckendahl, C. W., Smith, R. W., Impara, J. C., & Plake, B. S. (2002). A comparison of Angoff and Bookmark standard setting methods. Journal of Educational Measurement, 39, 253–263.
  • Cizek, G. J., Bunch, M. B., & Koons, H. (2004). Setting performance standards: Contemporary methods. Educational Measurement: Issues and Practice, 23, 31–50.
  • Clauser, B. E., Mee, J., Baldwin, S. G., Margolis, M. J., & Dillon, G. F. (2009). Judges’ use of examinee performance data in an Angoff standard‐setting exercise for a medical licensing examination: An experimental study. Journal of Educational Measurement, 46, 390–407.
  • Davis, S. L., Buckendahl, C. W., Chin, T. Y., & Gerrow, J. (2008, March). Comparing the Angoff and Bookmark methods for an international licensure examination. Paper presented at the National Council on Measurement in Education, New York.
  • Espinosa, J. A., Slaughter, S. A., Kraut, R. E., & Herbsleb, J. D. (2007). Familiarity, complexity, and team performance in geographically distributed software development. Organization Science, 18, 613–630.
  • Harvey, A. L., & Way, W. D. (1999, April). A comparison of web-based standard setting and monitored standard setting. Paper presented at the annual conference of the National Council on Measurement in Education, Montreal, Canada.
  • Harvey, A. L. (2000, April). Comparing onsite and online standard setting methods for multiple levels of standards. Paper presented at the annual conference of the National Council on Measurement in Education, New Orleans, LA.
  • Kane, M. (1994). Validating the performance standards associated with passing scores. Review of Educational Research, 64, 425–461.
  • Katz, I. R., Tannenbaum, R. J., & Kannan, P. (2009). Virtual standard setting. CLEAR Exam Review, 20(2), 19–27.
  • Lorié, W. (2011, June). Setting standards remotely: Conditions for success. Paper presented at the CCSSO National Conference on Student Assessment, Orlando, FL.
  • Nichols, P., Twing, J., Mueller, C. D., & O’Malley, K. (2010). Standard-setting methods as measurement processes. Educational Measurement: Issues and Practice, 29, 14–24.
  • Olsen, J. B., & Smith, R. (2008, March). Cross validating modified Angoff and Bookmark standard setting for a home inspection certification. Paper presented at the annual meeting of the National Council on Measurement in Education, New York.
  • Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15, 139–178.
  • Reckase, M. D. (2006). A conceptual framework for a psychometric theory for standard setting with examples of its use for evaluating the functioning of two standard setting methods. Educational Measurement: Issues and Practice, 25, 4–18.
  • Schnipke, D. L., & Becker, K. A. (2007). Making the test development process more efficient using web-based virtual meetings. CLEAR Exam Review, 18, 13–17.
  • Tannenbaum, R. J., & Katz, I. R. (2008). Setting standards on the core and advanced iSkills™ assessments (ETS Research Memorandum No. RM-08-04). Princeton, NJ: Educational Testing Service.
  • Tannenbaum, R. J. (2011). Alignment between the TOEIC® test and the Canadian Language Benchmarks. Final report. Princeton, NJ: ETS.
  • Tannenbaum, R. J., & Kannan, P. (in press). Consistency of Angoff-based standard-setting judgments: Are item judgments and passing scores replicable across different panels of experts? Educational Assessment.
  • Tannenbaum, R. J., & Katz, I. R. (2013). Standard setting. In K. F. Geisinger (Ed.), APA handbook of testing and assessment in psychology: Vol. 3. Testing and assessment in school psychology and education (pp. 455–477). Washington, DC: American Psychological Association.
  • Wilson, T. D., & Schooler, J. W. (1991). Thinking too much: Introspection can reduce the quality of preferences and decisions. Journal of Personality and Social Psychology, 60, 181–192.
  • Zieky, M. J. (2001). So much has changed: How the setting of cutscores has evolved since the 1980s. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 19–51). Mahwah, NJ: Lawrence Erlbaum.
  • Zieky, M. J., Perie, M., & Livingston, S. A. (2008). Cutscores: A manual for setting standards of performance on educational and occupational tests. Princeton, NJ: Educational Testing Service.
