April 19, 2012
Anthony E. Kelly, Office of Educational Technology, U.S. Department of Education, and George Mason University
Having attended sessions on learning analytics and big data, and having talked with people informally about them, I am convinced that these techniques represent a major opportunity for education research. Marcia Linn’s presentation is a case in point.
Marcia Linn’s presentation, “Designing Assessments to Track Student Progress in Understanding the Complex Roles of Energy in Photosynthesis,” showed how it is possible to track whether students have grasped the foundational concepts that support comprehension of complex topics such as photosynthesis. Pre- and post-tests that focus only on narrow definitions of photosynthesis can miss the prior, more general supporting ideas, such as energy. She showed how the Web-based Inquiry Science Environment (WISE) uses student-generated concept mapping to track students’ understanding not only of photosynthesis, but of these foundational concepts as well.
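As a rough illustration of what such tracking could look like computationally (a minimal sketch in Python, not the WISE implementation; the propositions and scoring rule below are illustrative assumptions), one could represent each concept map as a set of concept–link–concept propositions and score a student’s map by its overlap with an expert reference map:

# Hypothetical sketch, not WISE code: score a student-generated concept map
# against an expert reference map by comparing proposition triples.
# All concept names, link labels, and data here are illustrative assumptions.

EXPERT_MAP = {
    ("sunlight", "provides", "energy"),
    ("energy", "is stored in", "glucose"),
    ("chloroplast", "absorbs", "sunlight"),
    ("photosynthesis", "produces", "glucose"),
}

def proposition_score(student_map, expert_map=EXPERT_MAP):
    """Return the fraction of expert propositions present in the student's map."""
    return len(student_map & expert_map) / len(expert_map)

# Example: this student links photosynthesis to glucose but has not yet
# connected the foundational idea of energy.
student = {
    ("photosynthesis", "produces", "glucose"),
    ("chloroplast", "absorbs", "sunlight"),
}
print(proposition_score(student))  # prints 0.5

Computed at several points in a unit, a score like this could show whether foundational links, such as those involving energy, emerge over time, rather than only whether the word “photosynthesis” is defined correctly on a post-test.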
April 10, 2012
Karen Cator, Director, Office of Educational Technology, U.S. Department of Education
Commerce, entertainment, and social life are increasingly carried out across the Web, and the amount of data being generated as a result is skyrocketing. Commercial entities are harvesting this data stream to provide personalized advertisements, and the public discourse is trending toward questions like “What data am I creating, where is it going, and what are we getting from it?”
Big data, it seems, is everywhere—even in education. Researchers and developers of online learning systems, intelligent tutoring systems, virtual labs, simulations, games, and learning management systems are exploring ways to better understand and use data from learners’ online activities to improve teaching and learning.
March 25, 2012
Anthony E. Kelly, Office of Educational Technology, U.S. Department of Education, and George Mason University
This post builds on analyses of two use cases by Barbara Means of SRI. One case describes the testing of a multimedia intervention in a college-level course. The other case describes course redesign using longitudinal data analysis.
Drawing on these two cases, I propose some new approaches to design-based research, including increased collaboration through shared design and data repositories. I seek feedback on these nascent analyses, and welcome similar cross-case analyses using these and other use cases.
March 12, 2012
William R. Penuel, University of Colorado
The National Educational Technology Plan lays out a vision of learning as life-long and life-wide. But most technology-supported innovations focus on promoting learning in a single setting (usually schools), and researchers focus their efforts on gathering evidence of short-term impacts on end-of-year accountability tests.
It is also important to consider innovations that reach young people across multiple settings (including online communities) and to develop evidence about how such innovations shape outcomes. One such innovation is YouMedia, which occupies a physical space at the Harold Washington Library Center in downtown Chicago, as well as a virtual place, a website dedicated to YouMedia users.
January 31, 2012
Barbara Means, SRI International
Current Department of Education funding programs reflect the traditional model of education R&D in their three stages of research: small investigations testing the principle behind an intervention, somewhat larger studies testing the efficacy of the intervention under ideal conditions, and then effectiveness studies (large-scale randomized field trials). Positive findings from each stage of R&D are a prerequisite for the next, more heavily funded stage.
Under this model, the maturity of the technology-based intervention, the scope at which it has been implemented, and the extent of evidence concerning its impact grow together over time. It is common for investigators to take their innovation through several rounds of small-scale studies—taking place over multiple years—before concluding that it is ready for large-scale implementation and impact testing.
January 13, 2012
Chris Dede, Harvard Graduate School of Education
How can our field measure quality for an innovation that seems promising but is not yet fully developed? Well-established methods are available for determining initial potential and for proving full effectiveness—but researchers are still developing measures of quality to use for the in-between stages in development, during the evolution of a new educational approach. Design-based research and designing for scale, two important strategies for the middle stages, are not yet widely used or widely understood.
In the early stages of developing an educational intervention, designers conduct pilot studies to see whether, under ideal conditions, a substantial benefit is realized. The goal is to determine whether the innovation shows sufficient promise to merit further development. To achieve this limited objective of assessing potential, the evidence gathered is largely anecdotal. This approach presents numerous challenges to validity and generalization, but it is nonetheless appropriate at this stage: designers are making a quick, small investment to determine the potential benefit of moving forward with refining their innovation and implementing it under a variety of conditions.
December 4, 2011
Karen Cator, Director of the Office of Educational Technology, U.S. Department of Education
In a world where most professionals and many students carry a device in their pocket with more computing power than the early supercomputers, new technology-supported learning moments arise daily. Common sense suggests that some of these activities will be worthwhile and others will not. We look to research to help us distinguish between effective and ineffective uses of technology, but conventional research paradigms often provide little guidance because of their limits in timeliness, scale, and generalizability.
Policymakers and practitioners point out that they must decide today how to implement new technologies and technology-supported activities and can’t wait three years for the (often inconclusive) outcomes of a study. Technology designers largely ignore academic research, relying instead on their own rapid usability testing to refine their designs and ultimately letting the market evaluate the results.