Minding the Gate: Data-Driven Decisions about the Literacy Preparation of Elementary Teachers

Published: Mar. 27, 2009

Source: Journal of Teacher Education, Volume 60, Number 2, March/April 2009, pp. 131-141.
(Reviewed by the Portal Team)

The paper examines data from nine statewide administrations of the Idaho Comprehensive Literacy Assessment (ICLA) over three years. The ICLA measures pre-service teachers' knowledge of research-based content and pedagogy related to reading instruction and assessment.

Description of the ICLA—Format and Content

The ICLA has three standards; each standard is administered separately, untimed, and typically on a separate day. Standard I, Language Learning and Literacy Development, addresses emergent literacy, phonological and phonemic awareness, phonics and structural analysis, sight vocabulary, morphemic analysis, and research-based instructional practices for developing accurate and automatic decoding. Standard II, Reading Comprehension Research and Best Practices, focuses on fluency, vocabulary development, comprehension instruction, and text genres. Standard III, Literacy Assessment and Intervention, deals with common assessment procedures, interpretation of assessment results, and instructional activities for struggling readers. Each standard has three sections.

The first purpose of this paper was to examine pre-service candidates' performance across areas of literacy knowledge.

Candidates

The elementary and special education undergraduate candidates in this project attended one of seven Idaho teacher preparation institutions: four public and three private. Between April 2004 and December 2006, 2,593 candidates took Standard I of the ICLA; 2,182 took Standard II; and 2,077 took Standard III. These numbers include candidates who retook one or more forms. The ICLA committee set statewide test administration windows of 3 to 4 weeks during the final 5 weeks of the fall and spring semesters and from late July into early August. Each institution arranged for the ICLA to be proctored and for the completed tests to be returned to a central location for scoring and recording.

Candidates scored highest when matching literacy terms to definitions; they were somewhat less successful at matching terms to descriptions of research-based instructional activities, moderately less successful when asked to identify words containing specified phonic patterns in a passage, and least successful when responding to essay-format scenario questions. Idaho literacy instructors have used this information to gauge the effectiveness of their own teaching. A second purpose of this paper was to highlight the challenges and benefits for faculty and programs interested in adopting a similar testing model. The article also points out the organizational and political constraints that can delay adoption and use.
