Using Expert Interviews, Automated Search and Natural Language Processing to Identify Content for Instructional Materials and Assessment Development
UCLA’s Center for Research on Evaluation, Standards, and Testing (CRESST), working with its partner organization, the Center for Advanced Technology in Schools (CATS), has developed an innovative approach to identifying the conceptual content for inclusion in instruction and assessment.
With funding from the Institute of Education Sciences, CATS is developing learning games focused on the acquisition of pre-algebra schemas and skills. For this effort, CRESST designed approaches for building in valid outcome measures from the start, along with strategies for comparing which instructional means lead to the most effective learning.
To develop the goals for the games and make them congruent with the Common Core State Standards, the R&D team used ontology design to translate the verbal standards statements into network representations that define the universe of content and cognition covered by the standards. This translation involves iterative participation by experts in math content and pedagogy, as well as automated search of documents and extraction of content using natural language processing approaches. The result is a network representation showing the relationships among nodes, for instance, a concept such as rational number equivalence and subconcepts, principles, or procedures such as equations or functions. The links among nodes indicate the relationship (predicate) between any pair, such as “part of” or “causes.” Each node in the resulting representation has a distinct frequency of connections, giving a visual way to see which concepts are most connected and how the structure is arrayed. The high-frequency nodes suggest the essential content that should be taught through a game, indicating which tasks will be routinely embedded in game play, which tasks should govern the leveling-up process (player advancement to more difficult play), and the design and development of outcome and transfer performance studies.
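The idea of counting connection frequencies over a network of (node, predicate, node) links can be sketched in a few lines. This is a minimal illustration, not the actual CATS ontology: the triples and predicate names below are hypothetical examples modeled on the concepts mentioned above.

```python
from collections import Counter

# Hypothetical fragment of a pre-algebra ontology, expressed as
# (subject, predicate, object) triples. Names are illustrative only.
triples = [
    ("rational number equivalence", "part of", "rational numbers"),
    ("fraction", "part of", "rational numbers"),
    ("equivalent fractions", "instance of", "rational number equivalence"),
    ("cross multiplication", "procedure for", "equivalent fractions"),
    ("decimal", "representation of", "rational numbers"),
    ("fraction", "representation of", "rational numbers"),
]

# Frequency of connections per node (incoming plus outgoing links).
# High-frequency nodes suggest the essential content to embed in game tasks.
freq = Counter()
for subject, _predicate, obj in triples:
    freq[subject] += 1
    freq[obj] += 1

for node, count in freq.most_common(3):
    print(node, count)
```

In this toy fragment, “rational numbers” emerges as the most connected node, which is the kind of signal the team uses to prioritize content.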
The benefit of this representation is that it allows discussion among teachers and designers about the content and structure of the network. It helps specify both the content and the sequence of instruction and assessment. A deeper function of the network (ontology) is that it forms the major initial structure of a database. This database can be populated with relevant student performance data and analyzed using Dynamic Bayes Nets. Empirical findings, collected over time through experimental comparisons of different forms of instruction and analysis of relationships between student characteristics and performance, can be used to refine the ontology.
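One way to picture the ontology serving as the initial structure of a database is to store its links as a triple table and join student performance records against the concept nodes. The sketch below uses SQLite with hypothetical table layouts, node names, and scores; it is an assumption about how such a database might be organized, not the project's actual schema.

```python
import sqlite3

# In-memory database: one table for ontology links, one for performance data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ontology (subject TEXT, predicate TEXT, object TEXT)")
con.execute("CREATE TABLE performance (student_id TEXT, node TEXT, score REAL)")

# Hypothetical ontology links and student scores keyed to concept nodes.
con.executemany(
    "INSERT INTO ontology VALUES (?, ?, ?)",
    [("equivalent fractions", "part of", "rational number equivalence"),
     ("cross multiplication", "procedure for", "equivalent fractions")],
)
con.executemany(
    "INSERT INTO performance VALUES (?, ?, ?)",
    [("s1", "equivalent fractions", 0.8),
     ("s2", "equivalent fractions", 0.4)],
)

# Average performance on each node, alongside its parent concept in the
# ontology -- the kind of structured query that richer analyses (e.g.,
# Dynamic Bayes Nets) would build on.
rows = con.execute(
    """SELECT o.object AS parent, o.subject AS node, AVG(p.score)
       FROM ontology o JOIN performance p ON p.node = o.subject
       GROUP BY o.object, o.subject""",
).fetchall()
print(rows)
```

The join returns each assessed concept with its parent node and mean score, illustrating how performance evidence attaches to the network structure.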
How does the method for determining the content to include in these games differ from the way content for educational products is usually selected?
Should learning technology developers be asked to reveal and validate their method for selecting content?
Submitted December 4, 2011, 10:22 pm by Eva L. Baker, Gregory K.W.K. Chung, and Markus Eseli