
PETI: Evaluating Educational Technology Effectiveness

Methodology/Validity and Reliability Studies
This section provides background on the technical work conducted to validate the SETDA tools, commissioned from Metiri Group, for gathering information about the SETDA Common Data Elements. The tool set includes teacher, building, and district surveys, as well as site reviewer protocols (field guides, training, and scoring guides). The pilots were conducted sequentially in three tiers over an eight-month period, November 2003 through June 2004. The final versions of the surveys are the result of design, development, and validation procedures, including:

  • Framework Development
  • Item Writing
  • Item Review
  • Tier I Field Test (Talk Alouds)
  • Tier II Field Test, associated technical review, and item revision
  • Tier III Field Test, associated technical review, and item revision

The SETDA/Metiri suite of tools is based on prior instruments developed by the Metiri Group. The full PETI Technical Manual is available online.

Framework Development
The tools are based on the Common Data Elements developed by the SETDA Common Data Elements Committee. The committee was charged with developing instruments and methods for tracking state progress on the technology-related goals of the No Child Left Behind Act. The specific data elements were vetted at the 2002 and 2003 National Leadership Institutes. The Common Data Elements were built with reference to prior national frameworks and standards in education technology, including but not limited to the NCREL enGauge framework, Seven Dimensions for Gauging Progress, and the ISTE NETS and NETS-T standards.

From the framework, Metiri Group and the Common Data Elements Committee developed a set of key questions.

Item Writing and Review
Metiri Group then developed specific survey items (questions) designed to gather the information needed to answer the key questions. Items were drawn from existing sources where applicable and permissible, but were modified for the specific demands of the SETDA framework and the key questions. Items were written to form multi-item scales addressing each key question, so a scale can, for these purposes, be read as a score for a given question. Each survey item is assigned a score, and where multiple items answer parts of a key question, those scores are averaged to produce a single scale score. Each key question was also allocated to the respondent group most likely to have accurate information, and in some cases was not asked of other groups. For example, only teachers are asked about time spent on certain technology-related activities with students, and only the building survey respondent is asked for details on building infrastructure.
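As an illustration of this scoring approach, here is a minimal Python sketch of the averaging step, assuming hypothetical item names and response values; the actual item-to-scale mappings and score ranges are defined in the PETI Technical Manual.

```python
# Minimal sketch of multi-item scale scoring: the scale score for a
# key question is the average of its constituent item scores.
# Item names and values below are hypothetical examples.

def scale_score(item_scores):
    """Average the scores of the items mapped to one key question."""
    valid = [s for s in item_scores if s is not None]  # skip unanswered items
    if not valid:
        return None
    return sum(valid) / len(valid)

# Hypothetical key question answered by three teacher-survey items,
# each already scored on a 1-4 scale.
items = {"tech_use_frequency": 3, "integration_depth": 4, "student_tech_time": 2}
print(scale_score(items.values()))  # -> 3.0
```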

The complete survey sets were then reviewed by SETDA staff and the Common Data Elements Committee, resulting in significant revision and reduction of questions. Three processes were used in this analysis: inter-item correlations, reliability analysis, and overall review. With this iterative process, it took approximately six months to finalize the first version of the surveys that were field-tested.
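For readers unfamiliar with these procedures, the sketch below shows how inter-item correlations and a common reliability statistic (Cronbach's alpha) can be computed; the response data are hypothetical, and the actual pilot analyses are reported in the PETI Technical Manual.

```python
import numpy as np

# Hypothetical pilot data: rows are respondents, columns are the
# items on one scale, each scored 1-4.
responses = np.array([
    [3, 4, 3],
    [2, 2, 1],
    [4, 4, 4],
    [1, 2, 2],
    [3, 3, 4],
])

# Inter-item correlations: items intended to measure the same
# construct should correlate positively with one another.
print(np.corrcoef(responses, rowvar=False))

# Cronbach's alpha, a standard reliability coefficient:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1)
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```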
