Like the sequencing of topics and subtopics in regular instruction, sequencing in online courses is usually based on the judgments of experienced instructors. Often, online courses use a mastery-based approach, allowing students as much time as they need to learn a skill or concept to mastery before moving them on to new content. This individualized pacing can be particularly helpful for learners who need more time, but it has a downside: students can get stuck on a particular topic and become discouraged or bored if they are not allowed to move on.
The science content in the course included 8,400 learning objectives spanning physics, biology, chemistry, and organic chemistry, organized into 331 subtopics. Because students take the course online with embedded assessments, Kaplan had something on the order of 50 million test-taker responses to 10,000 assessment items. Analysts were able to compute contingent probabilities (the chances of getting a subtopic B item correct for those who did and did not get a subtopic A item correct). Data visualizations were used to illustrate the contingencies among different subtopics.
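To make the idea of a contingent probability concrete, here is a minimal sketch of the kind of computation described above. The data layout, field names, and subtopic labels are hypothetical illustrations, not Kaplan's actual schema or results; the point is simply how P(B correct | A correct) and P(B correct | A incorrect) could be derived from response records.

```python
# Sketch of a contingency analysis over assessment responses.
# A response record is (student_id, subtopic, correct); all names and
# data below are illustrative assumptions, not the actual dataset.
from collections import defaultdict

def contingent_probabilities(responses, subtopic_a, subtopic_b):
    """Return (P(B correct | A correct), P(B correct | A incorrect))."""
    # Reduce each student's records to whether they got any item
    # correct in subtopic A and in subtopic B.
    by_student = defaultdict(dict)
    for student, subtopic, correct in responses:
        if subtopic in (subtopic_a, subtopic_b):
            prev = by_student[student].get(subtopic, False)
            by_student[student][subtopic] = prev or correct

    # counts[a_outcome] = [number who also got B correct, total students]
    counts = {True: [0, 0], False: [0, 0]}
    for outcomes in by_student.values():
        if subtopic_a in outcomes and subtopic_b in outcomes:
            a, b = outcomes[subtopic_a], outcomes[subtopic_b]
            counts[a][1] += 1
            counts[a][0] += int(b)

    def ratio(pair):
        b_correct, total = pair
        return b_correct / total if total else None

    return ratio(counts[True]), ratio(counts[False])

# Toy data: three students, two hypothetical physics subtopics.
responses = [
    ("s1", "kinematics", True), ("s1", "dynamics", True),
    ("s2", "kinematics", True), ("s2", "dynamics", False),
    ("s3", "kinematics", False), ("s3", "dynamics", False),
]
p_given_correct, p_given_incorrect = contingent_probabilities(
    responses, "kinematics", "dynamics")
```

A large gap between the two probabilities would suggest that subtopic A is a meaningful prerequisite for subtopic B; similar values would suggest the two can be sequenced independently, which is the pattern the analysis below reports for many physics subtopics.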
The analysis showed that physics learning is not as linear as many people thought. While the chances of getting some subtopics correct do depend on having mastered other subtopics first, there are many cases where alternative paths through the curriculum appear to be equally productive. For example, there appear to be many interconnected physics concepts that can be learned equally well in any order.
The subtopic sequencing based on this empirical analysis of assessment responses differed significantly from the instructional sequence most commonly recommended by expert teachers. It was much closer to the concept maps obtained when expert teachers were asked “How do you think about these topics in your own mind?” than when they were asked, “How do you think these topics should be sequenced for instruction?”
Getting the data into a form that could be analyzed was much more labor-intensive than the item analysis per se.
Can general insights into learning be derived from this kind of analysis?
What would be necessary to make it possible for online learning providers to contribute to the research knowledge base?
Submitted December 4, 2011, 10:00 pm by Barbara Means