This report offers summaries of descriptive statistics about the courses, registrants, and activity in HarvardX and MITx over their first year. We have already placed considerable emphasis on the limitations of these data. Here, we conclude with general recommendations, reflecting in part on our experiences not only with these quantitative data but with course teams designing courses. These recommendations may help to guide both research and the design of technology-mediated learning environments.
HarvardX and MITx registrants are not “students” in a conventional sense, and they and their behavior differ from traditional students in K-12 and post-secondary institutions. Registration requires no cost or commitment, so traditional metrics, such as certification rates and enrollment rates, miss many new facets of course engagement, such as skilled learners dropping in to learn one specific aspect of a course. This report and its companion reports are an effort to broaden the discussion and perspective on open online learning, and the data encourage more nuanced consideration of broadly used terms like “students” and “learning.”
Registrant activity differs considerably within and across courses. Registrants are engaging with courses in diverse ways, and many instructors are deliberately building courses that honor diverse forms of participation. Certificate earning is one possible learning pathway. Others include simply watching videos or reading text. Some registrants sample a couple of chapters and then take their interests elsewhere, only to register in other courses or to sign up for a later instance of the same course. Some registrants focus on assessments to test themselves. Nearly any way that one can imagine a registrant using a course to learn is actually revealed in the data. Certification rates are a misleading representation of this diversity.
There will be no grand unifying theory of MOOCs. A national discussion has emerged over the past two years concerning Massive Open Online Courses (MOOCs). Our results suggest that describing MOOCs as though they are a monolithic collection of courses misses the differential usefulness and impact MOOCs may have from sector to sector. Courses from professional schools like the Harvard School of Public Health exemplify how strategies are likely to differ. That these registrants are more highly educated, and that higher percentages of them come from outside the US, should hardly be surprising, and the public policy implications of these efforts should be evaluated in a different context than those of an introductory computer science course. The implications of courses in each sector are different and need to be considered in context.
Given how different some of these courses and sectors are, their commonalities are surprising. In every course, people use resources in diverse ways. In every course, we see registrants who are active but not assessed, assessed but hardly active, and those who do both to extremes. Regardless of course and enrollment times, most registrants leave within a week or two of entering the course, but remaining registrants are far less likely to leave in subsequent weeks. Certification rates are similar on average and in their variability across HarvardX and MITx, in spite of the substantial differences between HarvardX and MITx courses.
Asynchronicity is a defining feature of open online learning, with implications for how we study it. Open enrollment periods and unrestricted use of course resources raise important questions for analysis and design. Registrant trajectories through the course depend upon at least three timeframes, 1) a registrant-oriented timeframe that references each registrant’s enrollment date, 2) a course-oriented timeframe that references curricular milestones in the course, and 3) a calendar-oriented timeframe that acknowledges days of the week, holidays, and weeks in the year. Longitudinal research in these courses requires specification of the time or times relevant for analysis, and results are likely to depend on the choice.
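The three timeframes above can be made concrete with a small sketch. This is an illustrative example only: the record fields, course launch date, and week arithmetic are assumptions for demonstration, not the actual HarvardX/MITx data schema.

```python
from datetime import date

# Hypothetical activity records; field names are illustrative,
# not the actual HarvardX/MITx schema.
events = [
    {"registrant": "u1", "enrolled": date(2012, 9, 1), "event": date(2012, 9, 16)},
    {"registrant": "u2", "enrolled": date(2012, 10, 5), "event": date(2012, 10, 8)},
]

COURSE_LAUNCH = date(2012, 9, 5)  # assumed launch date, for illustration

def timeframes(rec):
    """Index one activity event on the three timeframes."""
    return {
        # 1) registrant-oriented: weeks since this registrant enrolled
        "registrant_week": (rec["event"] - rec["enrolled"]).days // 7,
        # 2) course-oriented: weeks since the course launched
        "course_week": (rec["event"] - COURSE_LAUNCH).days // 7,
        # 3) calendar-oriented: ISO year and week of the event itself
        "calendar_week": rec["event"].isocalendar()[:2],
    }

for rec in events:
    print(rec["registrant"], timeframes(rec))
```

The same event lands at different positions on each axis: a registrant who enrolls late may be in their first registrant-oriented week while the course is in its fifth, and any analysis of attrition or engagement must state which clock it is using.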
Measuring learning requires a greater investment in assessment and research. This problem is inherited from parent institutions. Some fields have well established large-scale assessments, but most areas of higher education do not. Online courses can offer rich, real-time data to understand and improve student learning, but current data describes activity more often than learning gains or desired future outcomes. We need to invest more in high-quality, scalable assessments, as well as research designs, including pretesting and experiments, to understand what and how registrants are learning.
Open online courses are neither useless nor the salvation of higher education. Large-scale, “low-touch” learning platforms will have sectors and niches where they are very useful and others where they are less so. Our understanding of tradeoffs and our ability to identify new opportunities will improve with continued research. Thoughtful instructors and administrators in schools and universities will take advantage of resources that can be saved by using these technologies and redeploy those resources to places where “high touch” matters. The results we present here and in companion reports can begin to frame this discussion, as well as set a baseline for evaluating the expansion of our efforts that is already well underway.