The goal of incremental change is to “improve efficiency and effectiveness in existing structures of schooling, including teaching” (Cuban, 1996, p. 76). The underlying assumption of incremental change is that existing structures do not need to be dramatically changed, whereas fundamental change assumes that the structures and processes need to be completely “overhauled” (p. 76). Although changes may begin as fundamental, they commonly become incremental. Change and reform are not the same, according to Cuban, who asserted that the tendency to equate the two has created confusion.
This study is primarily a feasibility study that empirically tests the scaling-up of implementation; however, it also involves elements of replication. During the second year of requested funding, replication and scaling-up of the first-year impact studies involving middle school reading and mathematics interventions will be conducted. In the third year, best practices will be fully disseminated to all schools needing and desiring this information.
The impetus for addressing this problem is threefold: the need to meet the NCLB requirements within the context of the federal government’s educational entitlement programs; the need to evaluate the effectiveness of technology in meeting the NCLB requirements and elements of the Iowa Professional Development Model; and the need to develop and evaluate a communication plan for disseminating effective strategies and best practices to other schools both within and outside the State of Iowa. Iowa receives approximately $114 million a year through these programs, which reflects, on average, 4% of the state’s educational budget. Thus, maintaining this source of funding and using these resources effectively is a high priority of the Iowa Department of Education.
The professional development model being scaled up is the Iowa Professional Development Model (IPDM). The development of credible data about best practices and their impact on student achievement has traditionally been called program evaluation. Our two-factor research design model, however, was developed by integrating recent advances in research design with classical experimental logic to produce credible data for driving educational decisions and policy. This model provides credible accountability and impact data that can drive decision making and policy at all educational levels and demonstrates both internal (Cook & Campbell, 1979; Levin & O’Donnell, 1999; Slavin, 1999; Stanley & Campbell, 1966) and external validity (Boruch, 1997; Mosteller & Boruch, 2002; Shadish, Cook & Campbell, 2002).
In our two-factor research design model, the first factor focuses on producing credible data at the classroom level: developing data collection procedures that guard against threats to internal validity, which would otherwise undermine the assumption that professional development activities and their implementation positively impact student progress in mathematics, reading, and science. The second factor focuses on developing data collection procedures, using randomized clinical trials, that provide the basis for generalized causal inference. This second factor addresses the scaling-up issue within the limited implementation of the IPDM, the generalizability of credible classroom practices, and the use of these data at successively higher administrative levels within the hierarchical education structure [Local Education Agency (LEA), Area Education Agency (AEA), Iowa Department of Education (IDE)].
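The randomized-trial logic behind the second factor can be illustrated with a minimal sketch of cluster (school-level) random assignment. Note that the school names, the two-condition design, and the `assign_schools` helper below are illustrative assumptions for exposition; they are not drawn from the proposal itself.

```python
import random

def assign_schools(schools, seed=42):
    """Randomly assign whole schools (clusters) to treatment or control.

    Randomizing at the school level, rather than the student level, is the
    usual approach when an intervention (such as a professional development
    model) is delivered school-wide; it keeps the unit of assignment aligned
    with the unit of implementation.
    """
    rng = random.Random(seed)       # fixed seed makes the assignment reproducible
    shuffled = schools[:]           # copy so the caller's list is not mutated
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical example: eight schools split evenly between conditions.
schools = [f"School {i}" for i in range(1, 9)]
groups = assign_schools(schools)
print(len(groups["treatment"]), len(groups["control"]))  # prints: 4 4
```

Because assignment is random at the cluster level, average differences between the two groups of schools can be attributed to the intervention rather than to pre-existing school characteristics, which is the basis for the generalized causal inference the design seeks.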