Tuesday, April 1, 2014

So how do we assess the casual MOOC student?

This is actually a continuation of the post below.

I have continued playing around with the idea of student participation in MOOCs and have come up with some thoughts about the nature of student involvement and motivation. Now that I have been in three different MOOCs, I am wondering if we might be measuring success using the wrong criteria. In general, success is measured by retention and grades. From my limited experience so far, it seems that many people do not come into a MOOC intending to finish it. Instead, they are there to find answers to specific questions. If they are only present for a short period, but get from the materials and the discussions what they joined the course for, how do we measure that success? Completion is reflected in the conventional sense, but failing to complete does not necessarily mean that the individual’s objectives were not met. Perhaps the measurements are too linear, or perhaps they depend too much on conventional academic “trip wires.” So how do we build in a means of measuring the success of a course that does not have to be completed in order to meet the learner’s needs?

I think the most obvious approach would be a survey of some type, but that too is very conventional, and because the nature of a free MOOC imposes no requirement that a student respond to a survey, there is no guaranteed means of returning useful data for evaluating course success. Another thought rattling around in my skull was some sort of embedded assessment. Take something similar to this course, where discussion in support of a set of concepts culminates in a product of some sort. One could put some kind of assessment at the end, or perhaps embedded in the materials themselves, to gauge understanding and satisfaction; a Likert scale, maybe? But that really doesn’t do it either, because the student does not have to respond to that any more than to a survey.

Then it occurred to me that one of the best methods might lie in the discussions themselves. They are the permanent, persistent record; after all, a discussion forum is just one large database of information. If one could export all of the database fields related to the discussions, sort them by user, date, and initial post versus response, and then run some sort of search for key terms within them, that could indicate whether a question was asked and answered within a discussion. That might be one way to handle evaluating MOOCs for success. You would not be able to do that with any of the major commercial LMS providers, because the database schema is proprietary; however, it could be accomplished if the course were in Moodle, Claroline, or Sakai (and probably others), which use an open source database as a backend.
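To make that concrete, here is a minimal sketch in Python of what the export-and-sort step might look like. It assumes the forum posts have been dumped to a CSV file with hypothetical columns post_id, discussion_id, parent_id, user_id, created, and message; the real field names would depend on the LMS's schema, so treat all of these as placeholders.

    import csv
    from collections import defaultdict

    # Hypothetical export: one row per forum post.
    # parent_id is empty for an initial post and set for a reply.

    def load_posts(path):
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def sort_posts(posts):
        # Sort by user, then date, keeping initial posts ahead of replies
        # that carry the same timestamp. Assumes "created" sorts correctly
        # as text (e.g., an ISO timestamp).
        return sorted(
            posts,
            key=lambda p: (p["user_id"], p["created"], p["parent_id"] != ""),
        )

    def group_by_user(posts):
        # Split each user's activity into initial posts vs. responses.
        by_user = defaultdict(lambda: {"initial": [], "replies": []})
        for p in posts:
            kind = "initial" if p["parent_id"] == "" else "replies"
            by_user[p["user_id"]][kind].append(p)
        return by_user

    if __name__ == "__main__":
        posts = sort_posts(load_posts("forum_posts.csv"))
        for user, groups in group_by_user(posts).items():
            print(user, len(groups["initial"]), "initial posts,",
                  len(groups["replies"]), "replies")

Nothing here is specific to any one platform; the point is only that once the discussions are out of the proprietary interface and into a flat table, the sorting and grouping is trivial.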

The key could be in designing a series of searches for keywords or phrases that demonstrate a question and an answer, or a series of answers. I don't know just how fuzzy these searches would have to be, but probably the more potential "action words" the better. If the search were run against both posts and responses, a pattern might emerge that indicates learning has taken place. If the OP's last post in a thread comes after the response that most closely answers their question, then there is a good probability that the answer has been found. Likewise, if we could search the posts that the OP accesses, we might be able to tell, based upon the last few that were viewed, that the OP has found what they came into the MOOC for.
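As a rough illustration of that keyword pass, the sketch below flags a discussion as probably resolved when an answer-like post from someone other than the OP appears before the OP's final post in the thread. The QUESTION_WORDS and ANSWER_WORDS lists are purely illustrative placeholders for the "action words," and the post dictionaries are assumed to carry the same hypothetical fields as the export sketch above.

    # Illustrative word lists; a real pass would need much fuzzier matching.
    QUESTION_WORDS = ["how do i", "can anyone", "why does", "what is", "?"]
    ANSWER_WORDS = ["try", "you can", "worked for me", "the answer", "solved", "thanks"]

    def matches(text, phrases):
        text = text.lower()
        return any(p in text for p in phrases)

    def likely_resolved(discussion_posts):
        # discussion_posts: posts in one thread, sorted by "created",
        # with the OP's question first.
        if not discussion_posts:
            return False
        opener = discussion_posts[0]
        if not matches(opener["message"], QUESTION_WORDS):
            return False  # thread doesn't open with a question
        op = opener["user_id"]
        op_posts = [p for p in discussion_posts if p["user_id"] == op]
        last_op_post = op_posts[-1]
        # Did an answer-like post from someone else appear before the OP's
        # final post? If so, treat the thread as probably resolved.
        for p in discussion_posts:
            if p["created"] >= last_op_post["created"]:
                break
            if p["user_id"] != op and matches(p["message"], ANSWER_WORDS):
                return True
        return False

This is only one possible heuristic; the word lists, the resolution rule, and the thresholds would all need tuning against real discussions before the counts meant anything.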

I am thinking that the casual MOOC student has a great deal to draw upon from the community established by those who are in the course for the long term. These people, as I posited in an earlier post, become the community of practice/learning community, and so act as a pre-fab resource for the casual participant. They become a kind of corporate memory that the casual user can draw on to answer the specific questions they are participating for, complete with the background information provided by the preexisting discussion posts. Who the casual user responds to, and in what context, might also be helpful in developing a sense of what defines success for them. That, again, might be something that could be mined from the database of discussions.
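Mining who responds to whom could start with something as simple as the reply-count sketch below, again assuming the same hypothetical post export used above; it is only meant to show that the raw material for this kind of question is already sitting in the discussion tables.

    from collections import Counter, defaultdict

    def reply_network(posts):
        # For each user, count how often they replied to each other user.
        # Assumes each post dict has post_id, parent_id, and user_id.
        author_of = {p["post_id"]: p["user_id"] for p in posts}
        replies = defaultdict(Counter)
        for p in posts:
            parent = p["parent_id"]
            if parent and parent in author_of:
                replies[p["user_id"]][author_of[parent]] += 1
        return replies

    # Example: which long-term participants does a casual user lean on most?
    # network = reply_network(posts)
    # print(network["casual_user_123"].most_common(5))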
