A Proposed Framework of Test Administration Methods

Affiliations

  • Assessment Systems Corporation 2233 University Ave., Ste 200 Saint Paul, MN 55114, United States

Abstract


The widespread application of personal computers to educational and psychological testing has substantially increased the number of test administration methodologies available to testing programs. Many of these methods are referred to by acronyms, such as CAT, CBT, CCT, and LOFT. The similarities between the acronyms, and between the methods themselves, can be a source of confusion for testing professionals. The purpose of this paper is to propose a hierarchical classification system for test administration methods, clarify what each entails, and briefly discuss the similarities and differences among them.

