Identifying Learners at Risk

Identifying learners at risk is a subject many schools and vendors are tackling. I reckon much of what is billed as “intervention” or “adaptive” is marketing buzz, so I’m going to blog our journey as we try to figure this out.

Over the past year I’ve been chatting to my friends Andrew (at Queen’s) and Ben (at HT2) about ways to effectively identify struggling learners and then develop smart interventions to help them. Given my previous work with xAPI at HT2 and current experiments around learning analytics at Queen’s, it’s something I get asked about.

The way I see it, there are two approaches to help identify learners who are struggling.

  1. Retrospective: a learner sits an end-of-term exam and fails, or scores badly on a mid-term paper, at which point you intervene.
  2. Preemptive: build up a range of data points from the beginning of a program or course, use them to predict whether a learner will run into trouble, and intervene before they actually do.

Retrospective

This is what most people mean when they talk about learner intervention, and there’s definitely value in adopting a retrospective approach. While not as powerful or desirable as the preemptive one, it’s worth building out an automated, retrospective strategy, especially given that most institutions can’t preemptively intervene with a high level of accuracy or, worse, do nothing until it’s too late and the student drops out.

The retrospective approach I’m exploring with Andrew is based around the course gradebook. By focusing on a learner’s overall grades, plus grades from specific items, we’re building up an automated intervention strategy.

[Figure: basic reaction flow]
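Reduced to code, the flow above is little more than a couple of grade checks. Here’s a minimal sketch, assuming a simple gradebook structure; the thresholds and actions are my own placeholders, not the actual rules:

```python
# Minimal sketch of a retrospective, gradebook-driven check.
# The Learner structure and the thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class GradebookEntry:
    item: str            # e.g. "mid-term paper"
    score: float         # percentage, 0-100

@dataclass
class Learner:
    name: str
    overall_grade: float                          # running percentage, 0-100
    items: list[GradebookEntry] = field(default_factory=list)

PASS_MARK = 40    # assumed pass threshold
WATCH_MARK = 55   # assumed "worth keeping an eye on" threshold

def retrospective_check(learner: Learner) -> str:
    """Suggest an action based on grades already in the gradebook."""
    failed_items = [i for i in learner.items if i.score < PASS_MARK]
    if learner.overall_grade < PASS_MARK or failed_items:
        return "intervene"    # e.g. flag for a tutor conversation
    if learner.overall_grade < WATCH_MARK:
        return "monitor"      # recheck at the next gradebook review
    return "none"

student = Learner("A. Learner", overall_grade=52,
                  items=[GradebookEntry("mid-term paper", 38)])
print(retrospective_check(student))   # -> "intervene"
```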

Preemptive

This is much harder but ultimately it’s the goal. I feel, with the right data and models, it’s possible to predict the learners who’ll run into trouble if they continue on their current path.

It’s not all about exam results and grades but a whole range of touch points: library attendance, lecture attendance, participation in online groups, when learners log into the LMS, quality of contributions within discussions, submission times, engagement across connected services, and background data (previous courses, grades, feedback, etc.).

A range of data is required to develop the models that will predict, as early as possible, who needs help.
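To make that concrete, the kind of per-learner record I have in mind might look like the sketch below. Every field name and source here is an assumption for illustration; in practice much of it would arrive as xAPI statements or institutional data feeds:

```python
# Hypothetical feature set for a preemptive model, built from the touch
# points listed above. Field names and sources are illustrative only.
from dataclasses import dataclass

@dataclass
class TouchPoints:
    library_visits: int                 # library attendance
    lectures_attended: int              # lecture attendance
    group_posts: int                    # participation in online groups
    lms_logins: int                     # LMS login events
    discussion_quality: float           # 0-1 proxy for contribution quality
    avg_days_before_deadline: float     # submission times
    connected_service_events: int       # engagement across connected services
    prior_gpa: float                    # background data

def to_feature_vector(t: TouchPoints) -> list[float]:
    """Flatten the touch points into the numeric vector a model would consume."""
    return [
        t.library_visits, t.lectures_attended, t.group_posts, t.lms_logins,
        t.discussion_quality, t.avg_days_before_deadline,
        t.connected_service_events, t.prior_gpa,
    ]
```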

Are you engaged?

For our initial experiments, we’ve segmented learners into 3 categories:

  • Engaged
  • Moderate engagement
  • Unengaged

It can be difficult to work out which category a learner falls into, so we benchmark the engagement metric in standard deviations against a cohort of peers.

  • If activity was more than 1 S.D. below the mean, the learner was ‘unengaged’
  • If activity was more than 1 S.D. above the mean, the learner was ‘engaged’
  • Anything within 1 S.D. of the mean counted as ‘moderate engagement’

This approach means we don’t need to pin engagement to a specific number; instead it’s a benchmark against a learner’s peers.
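In code, that benchmark is essentially a z-score check against the cohort. A minimal sketch, assuming each learner already has a single numeric activity score (how that score is built is covered next):

```python
# Sketch of the S.D.-based bucketing: each learner is compared to the
# cohort mean rather than to an absolute engagement number.
from statistics import mean, stdev

def categorise(cohort_scores: dict[str, float]) -> dict[str, str]:
    mu = mean(cohort_scores.values())
    sigma = stdev(cohort_scores.values())
    categories = {}
    for learner, score in cohort_scores.items():
        if score < mu - sigma:
            categories[learner] = "unengaged"
        elif score > mu + sigma:
            categories[learner] = "engaged"
        else:
            categories[learner] = "moderate engagement"
    return categories

# alice -> engaged, carol -> unengaged, bob and dan -> moderate engagement
print(categorise({"alice": 82.0, "bob": 55.0, "carol": 20.0, "dan": 58.0}))
```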

What constitutes engagement? This is up for debate, but for the purpose of creating a baseline we’re including metrics such as number of logins, comments contributed to discussions, attendance, and submission times.
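One crude way to turn those raw metrics into the single activity score the bucketing above expects is a weighted sum. The weights and normalisation ranges below are entirely illustrative, not tuned values:

```python
# Illustrative composite activity score built from the baseline metrics.
def activity_score(logins: int, discussion_comments: int,
                   attendance_rate: float, avg_days_early: float) -> float:
    """Combine the baseline metrics into one 0-100 activity score."""
    score = (
        0.25 * min(logins / 30, 1.0)                     # number of logins
        + 0.25 * min(discussion_comments / 10, 1.0)      # discussion comments
        + 0.35 * attendance_rate                         # attendance, 0-1 fraction
        + 0.15 * min(max(avg_days_early, 0) / 3, 1.0)    # submission timing
    )
    return round(score * 100, 1)

print(activity_score(logins=18, discussion_comments=4,
                     attendance_rate=0.9, avg_days_early=1.5))   # -> 64.0
```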

[Figure: personas]

9 Step Rubric

Based on our engagement flows above, the next step was to devise a nine-step rubric which lets our system know, programmatically, which action to carry out. Ben has done a bunch of work in this space, so it was possible to combine the flows above with his work and begin thinking about an initial direction for the rubric.

[Figure: intervention rubric]
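The rubric itself is in the image above. Structurally it’s just a lookup from engagement category and grade status to an action, nine combinations in all. The actions in this sketch are placeholders of my own, not the cells of our actual rubric:

```python
# Hypothetical shape of a nine-cell intervention rubric: three engagement
# categories crossed with three grade states. Actions are placeholders.
RUBRIC = {
    ("engaged",   "passing"):    "no action",
    ("engaged",   "borderline"): "automated nudge",
    ("engaged",   "failing"):    "tutor email",
    ("moderate",  "passing"):    "no action",
    ("moderate",  "borderline"): "automated nudge",
    ("moderate",  "failing"):    "tutor meeting",
    ("unengaged", "passing"):    "automated nudge",
    ("unengaged", "borderline"): "tutor email",
    ("unengaged", "failing"):    "personal intervention",
}

def next_action(engagement: str, grade_status: str) -> str:
    return RUBRIC[(engagement, grade_status)]

print(next_action("unengaged", "failing"))   # -> "personal intervention"
```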
