The first research question would be characterized as a Stage 2 educational intervention research question: “Can scientifically based mathematical intervention strategies, delivered through the IPDM and supported by technology, produce better achievement at the classroom level, as measured by the Iowa Test of Basic Skills (ITBS), when compared with appropriate, randomly selected comparison groups?” Classrooms were the units of analysis for these efforts. The second research question at this stage was, “Did the intervention strategies work in closing the achievement gap between proficient and non-proficient students at the experimental school?” The third research question would be characterized as a Stage 1 educational intervention research question: “Can we demonstrate the overall impact of the Iowa Professional Development Model, use of technology, and technology integration on student mathematics achievement over a three-year period, as measured by the appropriate sections of the ITBS?” Linked to this question is whether frequency of access to professional development activities (educational interventions) predicts the frequency with which the classroom teacher will implement these strategies. The target behavior is student achievement in reading and mathematics, expressed as national standard scores from the ITBS.
The following report is organized into four major sections reflecting the scaling up of four initiatives in the following areas: 1) elementary school mathematics; 2) elementary school reading; 3) middle school mathematics; and 4) middle school reading. The elementary school initiatives focused on fourth grade math or reading achievement and the middle school initiatives focused on eighth grade math or reading achievement.
There were two elementary math consortia, three elementary reading consortia, three middle school math consortia, and three middle school reading consortia. The number of participating schools varied across consortia. Each consortium was established and organized during the first year of the three-year funding cycle. During the first year, students’ achievement was measured in the spring (2003-2004 academic year). Achievement was assessed during the spring for each of the remaining two years of the funding cycle (2004-2005 and 2005-2006 academic years).
The key to scaling up initiatives is the planning and organizational effort that begins prior to the funding cycle. By roughly six months before the start of the fall semester of the first funding year, the planning and organizational efforts were 95% complete. Also, when signing on to a consortium initiative, all participating schools had to: 1) agree to receive common professional development from the professional team located in the Area Educational Agency; 2) agree to implement consortium educational interventions and activities; and 3) encourage individual teachers to report monthly on the implementation rate of consortium educational interventions, on professional development activities, and on the use of technology to support professional development within consortia. Once schools joined a consortium, they could not change course in the middle of the three-year cycle by using the ESEPT/E2T2 money to support other initiatives. The only option was for a school to discontinue participation, in which case that school’s funding went back to the consortium for redistribution to continuing participants.
In the detailed data summaries that are provided for each consortium, the focus is on improvement in student math and reading achievement at either the fourth or eighth grade. In all cases, these analyses are longitudinal in that only students for whom both a pretest score and a posttest score are available are included in the analyses. This ensures that participating children were in the classroom to receive the impact of the teacher-implemented educational interventions that the teacher received through the professional development system. Thus, the focus was on change in student achievement in reading and/or math during the fourth or eighth grade for two consecutive academic years, 2004-2005 and 2005-2006 (years two and three of the funding cycle). This provided the opportunity for a replication of initial results, which may strengthen confidence in initial findings when the design is quasi-experimental.
Individual consortium data collection and analyses were developed as quasi-experimental designs. The independent variable was teacher implementation of educational intervention strategies and activities. The dependent variable was either the total math score or the total reading score from the Iowa Test of Basic Skills (ITBS), a commercial standardized norm-referenced achievement test. Data were analyzed at two levels: the school level, where consortium buildings were compared to a state-wide comparison group, and the student level, where growth in achievement was compared for proficient and non-proficient students from the participating consortium schools. Since the state-wide comparison group was made up of schools that did not participate in the ESEPT/E2T2 project, it was not only selected to be representative of the state as a whole but also served as a no-treatment control for the consortium-specific educational interventions and activities. Thus, throughout the following document we use the terms control group and comparison group interchangeably.
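The matched longitudinal analysis described above can be illustrated with a short sketch. The student identifiers, score values, and function names below are hypothetical and chosen only for illustration; the report does not specify its data structures. The key design feature is that a student with no posttest score is dropped before gains are computed.

```python
# Minimal sketch, assuming hypothetical data: retain only students with
# BOTH a pretest and a posttest ITBS score, then compute gain scores.

def matched_gains(pretest, posttest):
    """Return {student_id: posttest - pretest} for students with both scores."""
    return {sid: posttest[sid] - pretest[sid]
            for sid in pretest if sid in posttest}

# Hypothetical national standard scores (not taken from the report).
pretest  = {"s1": 180, "s2": 195, "s3": 210, "s4": 160}
posttest = {"s1": 200, "s2": 205, "s3": 215}   # s4 left before the posttest

gains = matched_gains(pretest, posttest)       # s4 is excluded automatically
print(gains)                                   # {'s1': 20, 's2': 10, 's3': 5}
```

The same filtered gain scores feed both levels of analysis: aggregated to building means for the school-level comparison, and split by initial proficiency status for the student-level comparison.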
The systematic approach to change taken by the ESEPT/E2T2 project contains several interrelated elements: 1) professional development, 2) technology integration, 3) teacher implementation, and 4) change in student achievement. This involves combining student and teacher data in order to determine the paths of impact within the model. Consequently, a single pilot study is included at the end of this report that tests the goodness of fit of the model we refer to as the Iowa Professional Development Model (IPDM).
School Level of Analysis
At the school level of analysis there were mixed findings. In some cases, by the second or third year, experimental schools were performing at a level similar to the state-wide comparison group. In other cases, differences were still rather large. This simply reflects the fact that when experimental schools identify the subject area (math or reading) in greatest need of improvement, it will take two or three years for noticeable change to occur in student achievement scores. Further, in some cases second-year progress may not be replicated during the third year with a new set of students in the same schools. Also affecting the replicability of the data is the introduction of new participating schools during the third year of the funding cycle.
Student Level of Analysis
This is where the data are most impressive. When the progress of students identified as non-proficient at the beginning of the school year is compared with that of proficient students (from the same schools and grades), the data consistently demonstrate a “closing of the gap” between the two groups. This is a rather consistent finding across consortia. Further, it is a highly replicable finding within a consortium during the second and third years of the funding cycle. In other words, growth in achievement is occurring for non-proficient reading and math students in an impressive manner that may be missed by an analysis of school-level performance alone.
As will become evident by reading further, neither the rate of improvement nor the effect of the educational interventions was the same across the elementary school math, elementary school reading, middle school math, and middle school reading initiatives. For example, the average effect size (Cohen’s d) estimating the impact of the educational interventions on non-proficient students’ progress during the academic year was: 1) elementary math, d = .71 (large impact); 2) elementary reading, d = .77 (large impact); 3) middle school math, d = .76 (large impact); and 4) middle school reading, d = .25 (small impact). Generally speaking, the math initiatives were more effective in promoting non-proficient student progress than the reading initiatives. However, the beginning reading initiatives at the elementary level appear to be much more effective than the middle school reading initiatives in this regard.
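Effect sizes of the kind reported above are computed from group means and standard deviations. The sketch below shows one common formulation of Cohen’s d (the pooled-standard-deviation variant); the report does not specify which variant was used, and the gain scores in the example are hypothetical, not the project’s data.

```python
# Hedged sketch: Cohen's d with a pooled standard deviation, one common
# way to estimate the standardized difference between two groups.
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d = (mean1 - mean2) / pooled sample standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

# Hypothetical gain scores for illustration only.
nonproficient_gains = [12, 15, 9, 14, 11, 13]
proficient_gains    = [8, 10, 7, 9, 11, 6]
d = cohens_d(nonproficient_gains, proficient_gains)
```

Under Cohen’s conventional benchmarks, values near .2, .5, and .8 are read as small, medium, and large effects, which is consistent with the labels attached to the consortium results above.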
Last but certainly not least is the question of the effectiveness of the Iowa Professional Development Model (IPDM), which integrates teacher training on educational interventions with technology to deliver and support student change at the classroom level. Is this an effective system that can link professional development activities to teacher implementation of educational interventions that facilitate change in student achievement?
A single consortium was evaluated in order to test the model. A Structural Equation Modeling (SEM) approach was taken to address the systems question. The data must be considered pilot data at this time and require replication within the consortium as well as across consortia. Regardless, the path analysis model does provide a good fit to the data and does suggest that the integration of technology and teacher training can be successfully implemented if the effort is systematic.
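The flavor of such a path analysis can be conveyed with a minimal sketch. A simple recursive path model can be estimated as a chain of ordinary least-squares regressions; the variable names, causal ordering, and simulated data below are assumptions made for illustration and are not the actual pilot-study model or data.

```python
# Minimal path-analysis sketch: each endogenous variable is regressed on
# its assumed causes, and the regression slopes serve as path coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of teachers/classrooms

# Simulated standardized measures (fabricated for illustration only):
pd_freq = rng.normal(size=n)                               # prof. development
tech = 0.5 * pd_freq + rng.normal(scale=0.8, size=n)       # tech integration
impl = 0.6 * pd_freq + 0.3 * tech + rng.normal(scale=0.7, size=n)
achieve = 0.7 * impl + rng.normal(scale=0.6, size=n)       # achievement gain

def path_coefs(y, *xs):
    """OLS slopes of y on predictors xs (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_impl = path_coefs(impl, pd_freq, tech)   # paths into implementation
b_ach = path_coefs(achieve, impl)          # path into achievement
```

A full SEM additionally estimates the model simultaneously and yields global fit statistics, which is what the pilot study’s “goodness of fit” conclusion rests on; this sketch only shows the path-coefficient idea.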
Gary D. Phye
Iowa State University
John O’Connell
Iowa Department of Education