Saturday, January 10, 2015

MOOCs and Results

In a recent article in Science there was a discussion about obtaining performance data from MOOCs. Two things struck me:

1. The fact that there seems to have been limited, if any, thought given to how well the MOOCs performed. One would have expected performance measures to be at the heart of the effort, namely: is all this money worth it, and worth what? Apparently not.

2. Also, the article, in my opinion, seems to ramble almost everywhere except toward articulating any semblance of content or context. One wonders what the purpose of all those words was.

For example: 

Using engagement data rather than waiting for learning data, using data from individual courses rather than waiting for shared data, and using simple plug-in experiments versus more complex design research are all sensible design decisions for a young field. Advancing the field, however, will require that researchers tackle obstacles elided by early studies. These challenges cannot be addressed solely by individual researchers. Improving MOOC research will require collective action from universities, funding agencies, journal editors, conference organizers, and course developers. At many universities that produce MOOCs, there are more faculty eager to teach courses than there are resources to support course production. Universities should prioritize courses that will be designed from the outset to address fundamental questions about teaching and learning in a field. Journal editors and conference organizers should prioritize publication of work conducted jointly across institutions, examining learning outcomes rather than engagement outcomes, and favoring design research and experimental designs over post hoc analyses. Funding agencies should share these priorities, while supporting initiatives, such as new technologies and policies for data sharing, that have potential to transform open science in education and beyond.

OK, now try to parse this one. First, what is engagement data? Second, what is learning data? I have now played around with over a dozen MOOCs. Some are good, most are horrible. My last attempt was a Materials Science course at MIT. The lectures were spent watching the instructor write out the lecture notes, which we already had, on a large chalkboard in total silence except for the clicking of the chalk. So why? And the tests were really tests of reading comprehension: did you use the right units, and did you copy the value properly? Any errors were errors of transcription, not comprehension. How is that measured?
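For what it is worth, the distinction the article appears to be drawing is between what students do (clicks, video views, forum posts) and what they actually learn (assessment performance). A minimal sketch of that distinction, with field names and values that are my own illustration and not anything taken from the article:

```python
# Hypothetical illustration of "engagement data" versus "learning data".
# All field names and numbers are invented for clarity; none come from the article.

engagement_record = {          # what the student did
    "videos_watched": 14,
    "forum_posts": 3,
    "minutes_on_site": 420,
}

learning_record = {            # what the student demonstrably learned
    "pre_test_score": 0.42,
    "post_test_score": 0.71,
    "score_gain": 0.71 - 0.42,
}

# The article's complaint, as I read it, is that most MOOC studies report
# only the first kind of record, because the second is much harder to collect.
print(engagement_record, learning_record)
```

Of course, as the chalkboard example above suggests, neither record would capture whether a student's wrong answer was a failure of comprehension or merely one of transcription.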

Frankly, it seems that MOOC management has been stumbling around in a directionless manner. If there is no way to determine what the "return" on the investment is, then why invest?

Now, the biggest problem I have with MOOCs is the discussion functions. In my experience, I saw anonymous discussants who were only a few steps from, shall we say, rather anti-social actions. The article states:

The most common MOOC experimental interventions have been domain-independent “plug-in” experiments. In one study, students earned virtual “badges” for active participation in a discussion forum. Students randomly received different badge display conditions, some of which caused more forum activity than others.
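As I read it, such a "plug-in" experiment is simply random assignment of a cosmetic condition followed by a comparison of activity counts. A minimal sketch of that design, assuming hypothetical condition names and simulated forum counts rather than anything from the cited study:

```python
import random
import statistics

# Hypothetical badge display conditions; the names are mine, not the study's.
CONDITIONS = ["no_badge", "badge_hidden", "badge_prominent"]

def assign_condition() -> str:
    """Randomly assign a student to one badge display condition."""
    return random.choice(CONDITIONS)

def run_experiment(n_students: int = 3000) -> dict:
    """Simulate forum-post counts per condition and report the mean for each."""
    posts_by_condition = {c: [] for c in CONDITIONS}
    for _ in range(n_students):
        condition = assign_condition()
        # Simulated outcome: these posting rates are invented for illustration only.
        base_rate = {"no_badge": 2.0, "badge_hidden": 2.2, "badge_prominent": 2.8}[condition]
        posts = max(0, round(random.gauss(base_rate, 1.5)))
        posts_by_condition[condition].append(posts)
    return {c: statistics.mean(v) for c, v in posts_by_condition.items()}

if __name__ == "__main__":
    print(run_experiment())  # mean forum posts per condition
```

Note that the outcome being compared here is still an engagement measure, a count of posts, not a learning measure, which is precisely the limitation the article itself concedes.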

My experience, and anyone seeking proof need look no further than any anonymous discussion on the web, is that anonymity facilitates the worst, and at times genuinely anti-social, behavior. Why would anyone want to participate? I tried once to make an observation, and some, in my opinion, socially inept person made comments that led me to remove my remark. The remark from another made one shudder! Then there is the near diabolical "peer review" method of having people who know nothing, from disparate cultures, attack others' work. One wonders who invented that scheme!

Thus one may wonder whether the writer of the piece has had any hands-on experience. Judging from what is written, the answer is that they have not. Yet the ability to measure effectiveness is critical. When will someone credibly address that issue?