When George Siemens opened up a massive open online course in Learning Analytics (LAK11), I was keen to join, both because I respect the calibre of people in a very international course and because I really wanted to find out whether learning analytics is anything more than an institutional indulgence. Do all those ANOVA tables of retention rates do anything more than record history? Can learning analytics assist current learners, help adapt courses fast enough for employers, and meet wider future economic needs through appropriate research and development of social capital? At its most basic: if a learner is paying for a degree, how much of that money is going into learning analytics, and how does that benefit that learner directly?
Two of the resources in the first week of the course show how the application of analytics in educational institutions is evolving. Goldstein and Katz (2005) see analytics mainly as a management tool, although they concede it may serve to check learning outcomes. Leaving aside that checking learning outcomes is really the job of assessment – all analytics can do is analyse the recorded results – their study focuses on whether an institution provides effective training, not education. That 2005 approach to analytics was industrial. Now, in 2011, Elias summarises a more learner-centred approach that may lead to timely action (Norris, 2008) and even follow the learner through the entire student lifecycle (Dawson, 2010). But are we there yet?
I have two major questions at this point in the course: the first practical and the second more conceptual.
From a teaching professor’s point of view, I question whether an LMS really can ‘easily capture’ (Elias, 2011) the amount of time spent reading – or any other learning activity – in a course. I have downloaded the materials for this course, but does that alone mean I have read them? I have students in other distance-learning programmes who, for a variety of reasons, simply cannot use their LMSs as intended, so we resort to Skype, personal email, messages passed through friends and the good old-fashioned telephone, and then have to work out what ‘the system’ needs in order for institutional demands to be fully satisfied and grades correctly awarded. These extra-LMS learning transactions are often in a form that pre-dates sophisticated LMSs and, ironically, are exactly what the learning designers were trying to mimic when setting up the LMS in the first place – yet many is the institution that insists ‘all’ communication go through the LMS, so that the analytic tail seemingly wags the learning dog.
The conceptual question relates to the scope of the framework within which learning analytics is applied. Elias (2011) credits Baker (2007) with the long-used data-wisdom continuum, and that continuum raises the questions: information for whom, and knowledge held by whom, creates wisdom for whom? Are the entities the same? Elias’s analytics improvement cycle can be applied to the 7-stage analytics model at various points and from a variety of perspectives (international, national government, professional body, educational institution, community, individual learner), but the improvement cycles from each of these perspectives need to mesh as finely as the teeth on a Swiss watch if learning analytics is to reach its ultimate goal of being useful to the learner both in the short term and for their future. One could argue, with Dawson (2010), that the ‘entire student lifecycle’ right through graduation and into employment can be measured and that the results would therefore be of use to employers, governments, etc., but that flies in the face of a current reality of more part-time students, adult returners and mobile or credit-transfer students. Measuring only traditional students will not give a full picture and may be highly misleading in sectors with highly mobile workforces (e.g. IT/IS, construction, military).
The art of learning analytics seems to me to be one of selecting as few metrics as possible for management purposes and focusing on those which enable timely intervention to maximise individual student learning and future success, whether that success be in industry or research. That inevitably requires some macro-economic crystal-ball gazing, but predictive testing is becoming more accurate in some sectors (see the work of Eduardo Cascallar and others). Above all, learning analytics needs to assist, not hinder, the individual learner – not just for ethical reasons but for the most fundamental institutional reason of the lot: without successful students, there is no money.
Dawson, S. (2010) ‘Seeing’ the learning community: An exploration of the development of a resource for monitoring online student networking. British Journal of Educational Technology, 41(5), 736-752. doi:10.1111/j.1467-8535.2009.00970.x
Elias, T. (2011, January). Learning Analytics: Definitions, Processes and Potential. Retrieved January 10th from http://groups.google.com/group/LAK11/browse_thread/thread/658ec009e3fc7347
Goldstein, P. J. and Katz, R. N. (2005). Academic Analytics: The Uses of Management Information and Technology in Higher Education, ECAR Research Study Volume 8. Retrieved from http://groups.google.com/group/LAK11/browse_thread/thread/658ec009e3fc7347
Also, as cited in Tanya Elias’s paper above:
Baker, B. (2007). A conceptual framework for making knowledge actionable through capital formation. D.Mgt. dissertation, University of Maryland University College, United States – Maryland. Retrieved October 19, 2010, from ABI/INFORM Global. (Publication No. AAT 3254328).
Norris, D., Baer, L., Leonard, J., Pugliese, L. and Lefrere, P. (2008). Action Analytics: Measuring and Improving Performance That Matters in Higher Education, EDUCAUSE Review 43(1). Retrieved October 1, 2010 from http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume43/ActionAnalyticsMeasuringandImp/162422