
Identifying Less Accurately Measured Students

Affiliations

  • University of Minnesota, Twin Cities

Abstract


Some students are less accurately measured by typical reading tests than others. By asking teachers to identify students whose performance on state reading tests would likely underestimate their reading skills, this study sought to learn about the characteristics of less accurately measured students while also evaluating how well teachers can make such judgments. Twenty students identified by eight teachers participated in structured interviews and completed brief assessments matched to the characteristics their teachers said impeded the students' test performance. Evidence provided by teachers, teacher and student interviews, and student assessments confirmed the teachers' judgments for some students but failed to confirm, or contradicted, those judgments for others. Along with observations about student characteristics that affect assessment accuracy, recommendations from the study include suggestions for working with teachers who are asked to make judgments about test accuracy and procedures for confirming teacher judgments.


References


  • Ægisdóttir, S., White, M. J., Spengler, P. M., Maugherman, A. S., Anderson, L. A., Cook, R. S., Nichols, C. N., Lampropoulos, G. K., Walker, B. S., Cohen, G., & Rush, J. D. (2006). The meta-analysis of clinical judgment project: Fifty-six years of accumulated research on clinical versus statistical prediction. The Counseling Psychologist, 34, 341-382.
  • Abedi, J. (2006). Language issues in item-development. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 377-398). Mahwah, NJ: Erlbaum.
  • Abrams, L. M., Pedulla, J. J., & Madaus, G. F. (2003). Views from the classroom: Teachers’ opinions of statewide testing programs. Theory Into Practice, 42(1), 18-29.
  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  • American Psychological Association. (2008). Assessment centers help companies identify future managers. Psychology matters. Retrieved August 29, 2008, from http://psychologymatters.apa.org/
  • Bailey, A., & Drummond, K. (2006). Who is at risk and why? Teachers’ reasons for concern and their understanding and assessment of early literacy. Educational Assessment, 11(3 & 4), 149-178.
  • Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5 (1), 7–74.
  • Bradley, D. F., & Calvin, M. B. (1998). Grading modified assignments: Equity or compromise? Teaching Exceptional Children, 31(2), 24-29.
  • Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and uses. Educational Measurement: Issues and Practice, 22(4), 5-12.
  • Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
  • Cizek, G. J. (2001). More unintended consequences of high-stakes testing. Educational Measurement: Issues and Practice, 20(4), 19-27.
  • Cizek, G. J., Fitzgerald, S. M., & Rachor, R. A. (1995). Teachers’ assessment practices: preparation, isolation, and kitchen sink. Educational Assessment, 3(2), 159-179.
  • Cleveland, L. (2007). Surviving the reading assessment paradox. Teacher Librarian, 35(2), 23-27.
  • Coladarci, T. (1986). Accuracy of teacher judgments of student responses to standardized test items. Journal of Educational Psychology, 78, 141-146.
  • Demaray, M. K., & Elliott, S. N. (1998). Teachers’ judgments of students’ academic functioning: A comparison of actual and predicted performances. School Psychology Quarterly, 13(1), 8-24.
  • DeStefano, L., Shriner, J. G., & Lloyd, C. A. (2001). Teacher decision making in participation of students with disabilities in large-scale assessments. Exceptional Children, 68, 7-22.
  • Dolan, R. P., & Hall, T. E. (2001). Universal design for learning: Implications for large-scale assessment. IDA Perspectives, 27(4), 22-25.
  • Eckert, T. L., Dunn, E. K., Codding, R. S., Begeny, J. C., & Kleinmann, A. E. (2006). Assessment of mathematics and reading performance: An examination of the correspondence between direct assessment of student performance and teacher report. Psychology in the Schools, 43, 247-265.
  • Feinberg, A. B., & Shapiro, E. S. (2003). Accuracy of teacher judgments in predicting oral reading fluency. School Psychology Quarterly, 18, 52-65.
  • Fuchs, L. S., Fuchs, D., Eaton, S. B., Hamlett, C., Binkley, E., & Crouch, R. (2000). Using objective data sources to enhance teacher judgments about test accommodations. Exceptional Children, 67, 67-81.
  • Fuchs, L. S., Fuchs, D., & Capizzi, A. M. (2005). Identifying appropriate test accommodations for students with learning disabilities. Focus on Exceptional Children, 37, 1-8.
  • Fuchs, L. S., & Fuchs, D. (2001). Helping teachers formulate sound test accommodation decisions for students with learning disabilities. Learning Disabilities Research & Practice, 16, 174-181.
  • Gresham, F. M., MacMillan, D. L., & Bocian, K. M. (1997). Teachers as "tests": Differential validity of teacher judgments in identifying students at-risk for learning difficulties. The School Psychology Review, 26(1), 47-60.
  • Haladyna, T. M., & Downing, S. M. (2004). Construct-irrelevant variance in high-stakes testing. Educational Measurement: Issues and Practice, 23(1), 17–27.
  • Helwig, R., & Tindal, G. (2003). An experimental analysis of accommodation decisions on large-scale mathematics tests. Exceptional Children, 69, 211-225.
  • Hoge, R. D., & Coladarci, T. (1989). Teacher-based judgments of academic achievement: A review of literature. Review of Educational Research, 59, 297-313.
  • Hollenbeck, K., Tindal, G., & Almond, P. (1998). Teachers’ knowledge of accommodations as a validity issue in high-stakes testing. The Journal of Special Education, 32, 175-183.
  • Howe, K. B., & Shinn, M. M. (2002). Standard reading assessment passages (RAPs) for use in general outcome measurement: A manual describing development and technical features. Accessed November 1, 2008 at: http://www.aimsweb.com.
  • Johnstone, C. J., Thompson, S. J., Bottsford-Miller, N. A., & Thurlow, M. L. (2008). Universal design and multi-method approaches to item review. Educational Measurement: Issues and Practice, 27(1), 25-36.
  • McDonnell, L. M., & McLaughlin, M. J. (1997). Educating one & all: Students with disabilities and standards-based reform. Washington, DC: National Academy of Sciences, National Research Council.
  • Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press.
  • Minnesota Department of Education (2008). Minnesota manual of accommodations for students with disabilities in instruction and assessment – A guide to selecting, administering, and evaluating the use of accommodations: Training guide. Roseville: Author.
  • Moen, R. E., Liu, K. K., Thurlow, M. L., Lekwa, A., Scullin, S., & Hausmann, K. (in process). Studying less accurately measured students. Minneapolis, MN: University of Minnesota, Partnership for Accessible Reading Assessment.
  • Moss, P. A. (2003). Reconceptualizing validity for classroom assessment. Educational Measurement: Issues and Practice, 22(4), 13-25.
  • New England Compact (2007). Reaching students in the gaps: A study of assessment gaps, students, and alternatives (Grant CFDA #84.368 of the U.S. Department of Education, Office of Elementary and Secondary Education, awarded to the Rhode Island Department of Education). Newton, MA: Education Development Center, Inc.
  • National Research Council. (2003). Assessment in support of instruction and learning: Bridging the gap between large-scale and classroom assessment. Workshop report. Committee on Assessment in Support of Instruction and Learning, Board on Testing and Assessment, Committee on Science Education K-12, Mathematical Sciences Education Board, Center for Education, Division of Behavioral and Social Sciences and Education. Washington, DC: The National Academies Press.
  • Noble, J., & Sawyer, R. (2002). Predicting different levels of academic success in college using high school GPA and ACT composite score. ACT Research Report Series, 2002-4.
  • Ornstein, A. C. (1994). Grading practices and policies: An overview and some suggestions. NASSP Bulletin, 78(561), 55-64.
  • Perry, N. E., & Meisels, S. J. (1996). How accurate are teacher judgments of students' academic performance? National Center for Education Statistics Working Paper Series [No. 96-08]. U.S. Department of Education, Office of Educational Research and Improvement.
  • Pike, G. R., & Saupe, J. L. (2002). Does high school matter? An analysis of three methods of predicting first year grades. Research in Higher Education, 43(2), 187-207.
  • Popham, W. J. (2007). Instructional insensitivity of tests: Accountability's dire drawback. Phi Delta Kappan, 89(2), 146-150.
  • Price, F. W., & Kim, S. H. (1976). The association of college performance with high school grades and college entrance test scores. Educational and Psychological Measurement, 36, 965-970.
  • President’s Commission on Excellence in Special Education. (2002). A new era: Revitalizing special education for children and their families. Washington, DC: U.S. Department of Education, Office of Special Education and Rehabilitative Services.
  • Prime Numbers. (2006). Teacher Magazine, 17(5), 5-10.
  • Shepard, L.A. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.
  • Shinn, M. R., & Shinn, M. M. (2002). AIMSweb training workbook. Eden Prairie, MN: Edformation, Inc.
  • Sireci, S. G., Scarpati, S. E., & Li, S. (2005). Test accommodations for students with disabilities: An analysis of the interaction hypothesis. Review of Educational Research, 75 (4), 457-490.
  • Thompson, S. J., Thurlow, M. L., & Malouf, D. (2004). Creating better tests for everyone through universally designed assessments. Journal of Applied Testing Technology, 6 (1). Accessed November 1, 2008 at: http://www.testpublishers.org/jattmain.htm.
  • Thurlow, M. L., Johnstone, C., & Ketterlin Geller, L. (2008). Universal design of assessment. In S. Burgstahler & R. Cory (Eds.), Universal design in post-secondary education: From principles to practice (pp. 73-81). Cambridge, MA: Harvard Education Press.
  • Thurlow, M. L., Moen, R. E., Liu, K. K., Scullin, S., Hausmann, K., & Shyyan, V. (2009). Disabilities and reading: Understanding the effects of disabilities and their relationship to reading. Minneapolis, MN: University of Minnesota, Partnership for Accessible Reading Assessment.
  • Thurlow, M. L., Thompson, S. J., & Lazarus, S. S. (2006). Considerations for the administration of tests to special needs students: Accommodations, modifications, and more. In S. M. Downing & T. M. Haladyna (Eds.), Handbook of test development (pp. 653-673). Mahwah, NJ: Lawrence Erlbaum.
  • Thurlow, M. L., Ysseldyke, J. E., & Silverstein, B. (1995). Testing accommodations for students with disabilities. Remedial and Special Education, 16 (5), 260-270.
  • Valencia, S. W., & Buly, M. R. (2004). Behind test scores: What struggling readers really need. The Reading Teacher, 57 (6), 520-531.
  • Washington Office of Superintendent of Public Instruction. (2008). Washington state’s accommodations guidelines for students with disabilities. Olympia, WA: Author.
  • Willingham, W. W., & Breland, H. M. (1982). Personal qualities and college admissions. New York: College Entrance Examination Board.
  • Willingham, W. W. (1985). Success in college: The role of personal qualities and academic ability. New York: College Entrance Examination Board.
  • Wood, G. H., Darling-Hammond, L., Neill, M., & Roschewski, P. (2007). Refocusing accountability: Using local performance assessments to enhance teaching and learning for higher order skills. Briefing paper prepared for members of the Congress of the United States. Accessed November 1, 2008 at: http://www.forumforeducation.org.
