MOOCs are at an interesting phase in their evolution. With MOOC mania subsiding somewhat, the field is coalescing around aspirational goals to make MOOCs more engaging, interactive, personalized, and sustainable.
Some thought leaders are calling it MOOC 2.0. Just as MOOC 1.0 research stimulated a healthy debate about the market for free online courses, MOOC 2.0 is motivating a debate about best practices in teaching and learning at a massive scale. Studies on retention rates and the unique MOOC audience of lifelong learners reshaped our thinking, and now we have a bit more data to go on when it comes to bringing teaching back to the fore in higher ed (yes, ed stands for education)!
At HarvardX we’re increasingly interested in how MOOC data can guide us towards better teaching. In fact, we aim to learn from research on every MOOC that we build to continuously improve our teaching on campus and online. But to understand what helps students learn and what doesn’t, to get a handle on cause and effect, we have to shift from observational studies to building experimentation into the very DNA of our courses.
This necessitates increased collaboration among faculty, professional learning researchers, and instructional designers. For course teams and professors with little experience in educational research, the task can seem daunting: Where do I even begin?
Having ruminated on this myself, here is what I’ve learned so far.
Learn to ask the big questions. You should aim to do work that addresses one or more salient questions, such as: How much are MOOC students actually learning? What makes a course engaging? What assessment types promote learning? Do different disciplines benefit from different styles of online teaching? How does the type of content (e.g. text vs. video) affect learning? How do user experience and the production value of videos affect student retention and learning? What types of student interactions promote learning and retention? Can instruction be personalized in a way that promotes learning? A recent study of student learning in a physics MOOC is a good example of research that addresses one of these questions.
Don’t reinvent the wheel. There is a rich history of pedagogical research that predates MOOCs and online learning in general. Reading summaries of evidence-based best practices is a great way to learn about this prior literature. It is also a great way to improve your practice. HarvardX research fellow Joseph Williams has assembled a comprehensive list of introductions, reviews and overviews. I’ve learned a great deal from my own sampling of these readings. There is also a substantial body of discipline-based educational research (DBER), particularly in STEM fields. It is worth spending some time familiarizing yourself with that literature. It will help you learn about best practices in your field and give you a sense of pre-existing resources that you can use in your course and research project.
Talk to faculty, education researchers and other course developers. Research is increasingly collaborative, and your study will be much better with the advice and input of others. Faculty can give great advice as experts in both research and the subject matter, and they often have insight into how student behavior might affect your study. For example, I have worked closely with a faculty member to brainstorm methods to increase student participation in a formative assessment that will be used in a research project. My own conversations with the researchers at HarvardX have been a source of ideas and inspiration, and frequently serve as a needed reality check.
Determine whether your research is technically feasible. MOOC boosters rightly emphasize the promise of technology. But in my mind, MOOCs are defined by the possibilities, and limitations, of technology. Some things aren’t currently possible, or are only possible with great effort. For example, edX doesn’t currently support personalized learning paths, so a study on personalization would require new tools and expertise. If your project requires bespoke tools and infrastructure, maybe it is time to go back to the drawing board.
Poke holes in your study. Instead of trying to convince yourself of all of the reasons that a project will work, poke holes in it to figure out the ways that it won’t work. The more you do this on the front end, the more likely you are to have usable and interesting data when the work is done. For example, the majority of MOOC students don’t complete every component of a course (even if they earn a certificate), so if your study analyzes student behavior over the arc of the course, you may end up with far fewer participants than a small experiment embedded in the first week of the course would yield.
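To make the attrition point concrete, here is a minimal back-of-the-envelope sketch. The enrollment figure and weekly retention rate are invented for illustration (neither comes from actual HarvardX data), but the shape of the result — a full-course study retaining only a small fraction of the week-1 sample — is the general pattern to plan for:

```python
def active_students(initial_enrollment, weekly_retention, week):
    """Estimate students still active at the start of a given week,
    assuming a constant week-over-week retention rate (week 1 = start)."""
    return int(initial_enrollment * weekly_retention ** (week - 1))

enrollment = 10_000   # hypothetical initial enrollment
retention = 0.6       # hypothetical fraction of students active each week

# An experiment embedded in week 1 sees the full enrollment,
# while a study spanning an 8-week course sees only the survivors.
week1_sample = active_students(enrollment, retention, 1)
week8_sample = active_students(enrollment, retention, 8)

print(f"Week 1 sample: {week1_sample}")   # 10000
print(f"Week 8 sample: {week8_sample}")   # under 300
```

Real attrition curves are not geometric — drop-off is usually steepest in the first weeks — but even this crude model shows why a study that requires behavior across the whole course needs a much larger enrollment to be adequately powered.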
After the above reality checks, trash or refine most of your ideas. Let’s face it, not every idea is a good one. It might be redundant with prior work, it might not be technically feasible, or it might not answer any important question. Don’t get too attached to a study until you’ve convinced yourself, and many others, of its merits.
That’s about as far as this process has taken me. There are still many important considerations, particularly ethical considerations, institutional review, and data management and analysis. I’ll post an update as I learn more.
Marshall Thomas recently completed a PhD in Biology at Harvard and now works as a Project Lead at HarvardX. Coming from a research background in a different field, he is just beginning to learn about MOOC and educational research.
Guest Post: Marshall Thomas
28 Mar 2019
14 Jun 2017