A Method for Generating Educational Test Items that are Aligned to the Common Core State Standards
The demand for test items far outstrips the current supply. This growing demand can be attributed, in part, to the transition to computerized testing, but it is also linked to dramatic changes in how 21st-century educational assessments are designed and administered. One way to address this demand is with automatic item generation: the process of using item models to generate test items with the aid of computer technology. The purpose of this study is to describe and illustrate a methodology that permits the generation of large numbers of diverse test items that are closely aligned to the Common Core State Standards in Mathematics.
Keywords: Automatic Item Generation, Test Development, Testing and Technology
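The model-based generation process the abstract describes can be illustrated with a minimal sketch. Here an item model is a stem template with variable slots plus constraints that keep every generated item well-formed; the template, variable ranges, and constraint below are invented for illustration and are not taken from the study.

```python
import itertools

# Hypothetical item-model template for a linear-equation item
# (an assumption for illustration, not the study's actual model).
TEMPLATE = "If {a}x + {b} = {c}, what is the value of x?"

def generate_items(a_range, b_range, c_range):
    """Return (stem, answer) pairs for every variable combination
    that satisfies the item model's constraints."""
    items = []
    for a, b, c in itertools.product(a_range, b_range, c_range):
        # Constraint: keep only items whose solution is an integer.
        if a != 0 and (c - b) % a == 0:
            stem = TEMPLATE.format(a=a, b=b, c=c)
            items.append((stem, (c - b) // a))
    return items

items = generate_items(range(2, 5), range(1, 4), range(5, 9))
print(len(items))   # number of distinct items produced
print(items[0])     # first generated (stem, answer) pair
```

Even this toy model yields a dozen distinct items from three small variable ranges, which is the mechanism behind the scale the abstract claims: richer item models with more slots and cognitively grounded constraints multiply the yield accordingly.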