Assessing Higher-Order Cognitive Constructs by Using an Information-Processing Framework

Affiliations

  • National Council of State Boards of Nursing, 111 East Upper Wacker Drive, Suite 2900, Chicago, IL 60601-4277, United States
  • Pearson VUE

Abstract


Designing a theory-based assessment with sound psychometric qualities to measure a higher-order cognitive construct is a highly desired yet challenging task for many practitioners. This paper proposes a framework for designing a theory-based assessment to measure such a construct. The framework yields a modularized yet unified assessment development system whose elements span construct conceptualization through model validation. The paper illustrates how to implement the framework using the construct of nursing clinical judgment. With this framework, many difficult design decisions can be made with strong theoretical rationales, and the framework is flexible enough to accommodate the modifications and extensions the assessment will require as new knowledge about the construct is generated over time. The goal of the framework is to provide practitioners with a practical, accessible methodology for assessing sophisticated constructs grounded in cognitive theories of the construct, especially through technology-enhanced items.

Keywords

Assessment Design, Higher-Order Cognitive Construct, Information-Processing Framework, Nursing Clinical Judgment, Technology-Enhanced Item.
