Educational accountability systems have focused the attention of schools and districts on students’ performance on end-of-year state achievement tests. Whether a student’s score on these tests meets or exceeds the proficiency threshold has consequences for superintendents, principals, and teachers. A whole industry has grown up around assessments that can be administered during the school year to identify students at risk of failing to score proficient on the end-of-year exam.
Critics point to the amount of instructional time this interim assessment activity consumes, while advocates point to the usefulness of assessing during the school year, when there is still time to give extra support to the students who need it.
With primary funding from the U.S. Department of Education and the National Science Foundation, researchers at Worcester Polytechnic Institute and Carnegie Mellon University developed the Web-based ASSISTments system to address this issue. ASSISTments combines online learning assistance with assessment capabilities (Feng, Heffernan, and Koedinger 2009). ASSISTments teaches middle school math concepts and at the same time uses information from learners’ interactions with the system to provide educators with a detailed assessment of students’ developing math skills.
When students respond to ASSISTments problems, they receive hints and tutoring to the extent they need them. The system helps students break hard problems into sub-parts, and the questions associated with those sub-parts are designed to elicit responses that reveal why the student initially answered the problem incorrectly. Students can ask for progressively stronger hints as needed to arrive at a correct solution.
The ASSISTments system treats information on how individual students respond to the problems, and on how much support they need from the system to generate correct solutions, as assessment information. From the students’ perspective, they are simply using ASSISTments to learn; there is no point at which learning stops for test taking.
The ASSISTments system gives educators detailed reports of students’ mastery of 100 middle school math skills, as well as their accuracy, speed, help-seeking behavior, and number of problem-solving attempts.
ASSISTments research (Feng, Heffernan, and Koedinger 2009) has found that information on how students respond after an initial wrong answer predicts performance on the end-of-year state examination better than the number of problems a student got correct on his or her first try (the measure used by conventional interim assessments). By combining information on the number of items correct on the first try and the way the student worked with the system after a wrong answer, ASSISTments produced predicted state achievement test scores with a .84 correlation to the scores the students actually obtained at the end of the year.
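The modeling idea described above can be sketched in code. The following is a minimal illustration, not the actual ASSISTments model: it fits an ordinary least-squares regression on synthetic data, combining a first-try-correct measure (what conventional interim assessments use) with hypothetical post-error features such as hints requested and attempts made, then computes the correlation between predicted and actual scores. All feature names, coefficients, and data here are invented for illustration.

```python
import numpy as np

# Synthetic data standing in for interim-assessment features.
# These variables and their effects are assumptions for illustration only.
rng = np.random.default_rng(0)
n_students = 200

first_try_correct = rng.uniform(0.2, 1.0, n_students)  # fraction right on first attempt
hints_requested = rng.poisson(3, n_students)           # help-seeking after wrong answers
attempts_per_problem = rng.uniform(1.0, 4.0, n_students)

# Invented "true" end-of-year scores: stronger first-try performance raises
# the score; heavier reliance on hints and repeated attempts lowers it.
score = (600 + 150 * first_try_correct
         - 8 * hints_requested
         - 20 * attempts_per_problem
         + rng.normal(0, 15, n_students))

# Fit a linear model via ordinary least squares on all three features,
# analogous to combining first-try correctness with post-error behavior.
X = np.column_stack([np.ones(n_students), first_try_correct,
                     hints_requested, attempts_per_problem])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ coef

# Correlation between predicted and actual scores -- the same statistic
# (r) that Feng, Heffernan, and Koedinger (2009) report as .84.
r = np.corrcoef(predicted, score)[0, 1]
print(f"correlation: {r:.2f}")
```

On this synthetic data the combined model predicts scores well; the point is only to show the form of the analysis, since the real result depends on actual student logs and state test scores.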
What are the implications of online learning environments that can predict students’ performance on end-of-year exams for the design of state accountability testing systems?
If the amount of help students seek during learning and the number of attempts they make on a problem predict math achievement so well, why don’t we measure these factors more often?
Source: Barbara Means based on Feng, Heffernan, and Koedinger (2009).