Identifying Learners at Risk

Identifying learners at risk is a subject lots of schools and vendors are tackling. I reckon much of what is labelled "intervention" or "adaptive" is marketing buzz, so I'm going to try to blog our journey as we figure this out.

Over the past year I've been chatting to my friends Andrew (at Queen's) and Ben (at HT2) about ways to effectively identify learners who are struggling and then develop smart interventions to help them. Thanks to my previous work with xAPI at HT2 and current experiments around learning analytics at Queen's, it's something I get asked about.

The way I see it, there are two approaches to help identify learners who are struggling.

  1. Retrospective: a learner sits an end-of-term exam and fails, or scores badly on a mid-term paper, at which point you intervene.
  2. Preemptive: build up a range of data points from the beginning of a program or course, then use them to predict whether a learner will run into trouble and intervene before they're actually in it.


Retrospective

This is what most people mean when talking about learner intervention, and there's definitely value in adopting a retrospective approach. While not as powerful or desirable as a preemptive one, it's worth building out an automated retrospective strategy, especially given most institutions can't preemptively intervene with a high level of accuracy or, worse, are doing nothing until it's too late and the student drops out.

The retrospective approach I’m exploring with Andrew is based around the course gradebook. Through focusing on a learner’s overall grades, plus grades from specific items, we’re building up an automated intervention strategy.



Preemptive

This is much harder, but ultimately it's the goal. I feel that, with the right data and models, it's possible to predict which learners will run into trouble if they continue on their current path.

It's not just about exam results and grades but a whole range of touch points: attendance at the library, attendance at lectures, participation in online groups, times they log into the LMS, quality of contribution within discussions, submission times, engagement across connected services, and background data (previous courses, grades, feedback etc.).

A range of data is required to develop the models that will predict those needing help, as early as possible.

Are you engaged?

For our initial experiments, we’ve segmented learners into 3 categories:

  • Engaged
  • Moderate engagement
  • Unengaged

It can be difficult to work out which category a learner falls into, so we express the engagement metric in standard deviations from the mean of a cohort of peers.

  • If activity was more than 1 S.D. below the cohort mean, the learner was 'unengaged'
  • If activity was more than 1 S.D. above the mean, the learner was 'engaged'
  • Anyone in between was classed as 'moderate engagement'

This approach means we don’t need to make engagement a specific number, instead it’s a benchmark against a learner’s peers.
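A minimal sketch of that benchmark, assuming each learner's activity (logins, comments, attendance and so on) has already been rolled up into a single score, which is of course the hard part:

```python
from statistics import mean, stdev

def categorise_engagement(scores):
    """Bucket each learner's activity score relative to the cohort.

    More than 1 S.D. below the cohort mean: 'unengaged';
    more than 1 S.D. above: 'engaged'; otherwise 'moderate'.
    The single activity score per learner is a simplifying assumption.
    """
    mu, sd = mean(scores), stdev(scores)
    categories = []
    for s in scores:
        if s < mu - sd:
            categories.append("unengaged")
        elif s > mu + sd:
            categories.append("engaged")
        else:
            categories.append("moderate")
    return categories

# One clear straggler, one outlier, three in the middle of the pack.
print(categorise_engagement([2, 10, 11, 12, 30]))
# → ['unengaged', 'moderate', 'moderate', 'moderate', 'engaged']
```

The nice property is that the thresholds move with the cohort: a quiet course doesn't flag everyone as unengaged.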

What constitutes engagement? This is up for debate but for the purpose of creating a baseline we’re including metrics such as number of logins, comments contributed to discussions, attendance and submission times.


9 Step Rubric

Based on our engagement flows above, the next step was to devise a 9-step rubric which lets our system know, programmatically, which action to carry out. Ben has done a bunch of work in this space, so it was possible to combine the flows above with his work to begin sketching an initial direction for the rubric.


Harness your Learning Locker data to automate events

I’ve been working on the Reach experiment for a while trying to fine tune it into something that could provide value for folks using Learning Locker. After several iterations and feedback sessions the experiment is taking shape.

Reach is now an official Learning Locker app that uses statements to trigger follow on actions, deliver personalized content and automate workflow.

The concept is straightforward and centres around the notion of events. An event consists of a trigger (currently a Learning Locker statement forward) and a series of subsequent steps (actions) to carry out.


A trigger could be anything that generates an xAPI statement such as a learner completing an exam, submitting an assignment, asking for help in a discussion forum or completing a course.

Steps could include things like the delivery of personalized content, sending out training evaluations, updating a CRM record, pushing details of a compliance quiz to an HR platform, or activating a MailChimp campaign to notify people about the next course.

The possibilities are endless as long as Reach can connect to a given service’s API. If you’ve used tools such as Zapier or IFTTT before, the general idea will sound familiar.
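The trigger-plus-steps idea can be sketched roughly like this. To be clear, this is not Reach's actual API: `run_event`, the trigger predicate and the step callables are all illustrative.

```python
def run_event(statement, trigger, steps):
    """If the incoming xAPI statement matches the trigger, run each step in order."""
    if not trigger(statement):
        return []
    return [step(statement) for step in steps]

# Hypothetical event: when a learner completes something, queue two follow-on actions.
completed = lambda s: s["verb"]["id"].endswith("/completed")
steps = [
    lambda s: f"email-evaluation:{s['actor']['mbox']}",  # send a training evaluation
    lambda s: f"crm-update:{s['actor']['mbox']}",        # update a CRM record
]

result = run_event(
    {"actor": {"mbox": "mailto:ada@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/completed"},
     "object": {"id": "http://example.com/course/101"}},
    completed, steps)
# result → ['email-evaluation:mailto:ada@example.com', 'crm-update:mailto:ada@example.com']
```

A non-matching statement simply produces no actions, which is the whole point: the trigger acts as a gate in front of the step pipeline.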

Single vs Multi

Once a trigger has activated within Reach the subsequent event can be a single or multi-track experience. Single track means that all learners go through the same flow. Multi-track allows administrators to set up several different tracks that will activate based on details contained within the incoming xAPI statement. A good example might be a learner’s score on a quiz or the successful, or otherwise, completion of a given task.

If a learner successfully completed task A, activate track A. If they didn’t successfully complete task A, activate track B.
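That routing rule might look something like the following sketch, where the track names and the `pick_track` helper are hypothetical and the success flag comes from the xAPI statement's `result` field:

```python
def pick_track(statement, tracks, default="B"):
    """Route a learner onto a track based on the xAPI result.

    `tracks` maps the result's success flag to a track name;
    statements without a result fall back to the default track.
    """
    success = statement.get("result", {}).get("success")
    return tracks.get(success, default)

tracks = {True: "A", False: "B"}

pick_track({"result": {"success": True}}, tracks)   # → 'A'
pick_track({"result": {"success": False}}, tracks)  # → 'B'
```

The same shape extends to multi-track setups keyed on a score band rather than a boolean.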


Thanks to xAPI, Learning Locker is gathering a bunch of experience data in a standard format, and that data can now be used to trigger follow-on actions. So, if you would like to send personalized content, notify a tutor of a struggling student or update other systems in your technology stack in real time, Reach is trying to make that process easier and more efficient.

If you're using Learning Locker and would like to give Reach a try, get in touch to register for a demo.

A Learning Locker connector for Reach

Reach is a platform that lets you track data from multiple sources and use it to automate custom journeys. These journeys might be delivering personalized content or extending step capabilities for a Salesforce journey.

We are in the process of building out a series of connectors. This post is about a connector for Learning Locker.

Learning Locker is a Learning Record Store (LRS) that consumes xAPI and helps you derive insights from the data gathered.

Journeys in Reach can be triggered by an outgoing event (initiated by Reach) or via an incoming payload received from an external source such as Salesforce or Learning Locker.

Learning Locker v2 introduced a new feature called 'Statement Forwarding' which allows you to forward xAPI statements that meet given criteria to an external endpoint (similar to how a webhook works). It's now possible for your LRS to push filtered statements that trigger a personalized user journey in Reach. Pretty neat.
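Conceptually, statement forwarding applies a filter to each incoming statement and pushes the matches to your endpoint. Here's a rough illustration using a dotted-path criteria format of my own invention; Learning Locker's real forwarding queries are MongoDB-style, not this:

```python
def matches(statement, criteria):
    """True if every criterion (a dotted path and an expected value) holds.

    A toy stand-in for the filtering step of statement forwarding:
    statements that match would be POSTed on to the external endpoint.
    """
    for path, expected in criteria.items():
        node = statement
        for key in path.split("."):
            if not isinstance(node, dict) or key not in node:
                return False
            node = node[key]
        if node != expected:
            return False
    return True

# Forward only 'failed' statements, e.g. to trigger a support journey.
criteria = {"verb.id": "http://adlnet.gov/expapi/verbs/failed"}
```

Anything that doesn't match the criteria never leaves the LRS, so the downstream service only sees the statements it actually cares about.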

As Learning Locker works with xAPI, Reach journeys can be triggered by any platform that emits xAPI statements consumed by the LRS. With this one integration – Learning Locker – we can trigger journeys with data from multiple sources such as Moodle, Storyline or any other platform that can talk xAPI.

Combined with Reach’s ability to determine journey tracks based on personas, this provides a compelling option for those interested in the delivery of timely, personalized and actionable feedback.

An LRS and xAPI

Having been heavily involved in the early days of Learning Locker, I am often asked what an LRS (Learning Record Store) and xAPI are.

At its core an LRS is a database and xAPI is the standard format used to encapsulate experiences which are then stored in the LRS.
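Concretely, every xAPI statement records an actor, a verb and an object, serialized as JSON. A minimal example (names and IDs here are made up for illustration):

```python
# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# This JSON-shaped record is what gets stored in the LRS.
statement = {
    "actor": {"name": "Ada", "mbox": "mailto:ada@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/course/101",
               "definition": {"name": {"en-US": "Intro course"}}},
}
```

Verbs are identified by URIs so that different systems reporting "completed" mean the same thing; that shared vocabulary is what makes cross-platform analysis possible.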

An important consideration when thinking about xAPI (or indeed any type of data capture) is the notion of garbage in, garbage out. In and of itself, xAPI (and an LRS) will not provide much value unless you are capturing the right data and then putting that data to work.

There is an argument that you can't know what data you'll need for analysis down the road, or that, without specialist training, you won't know what constitutes useful data to capture, and this is fair.

So, if you have the storage capacity, by all means capture and store everything. Just be aware that data alone doesn't magically provide insight; it requires a lot of specialist work before that can happen, and even then there is no guarantee, especially if the data a data scientist (for example) has to work with is not good enough.

Working towards adaptive feedback

While building Quire, I found myself being drawn to one component: feedback. Goals and check-ins can be powerful (Check out Red Panda if you’re interested in goal setting within a learning context) but the continuous feedback component, and in particular adaptive feedback, is an area I want to explore in more detail.


The next iteration of the Quire experiment (currently in development) is dropping goals and check-ins and doubling down on personalized feedback. The initial focus is around courses to see if we can provide timely, personalized feedback to students as they progress through a course.

After using services like Zapier and IFTTT (If This, Then That) to automate various workflows, I'm adopting this approach in Quire to give teachers, professors and trainers the ability to define their own IFTTT-style rules. These rules inform the core algorithm, helping determine the best feedback to send to individual students and the best time to send it. We're also experimenting with Google's machine learning services.
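A sketch of what such a rule might look like, with hypothetical field names (`logins_last_week`, `attendance`) standing in for the real data points:

```python
def evaluate_rules(student, rules):
    """Return the feedback messages whose conditions the student meets.

    Each rule pairs a condition (a predicate over the student's data
    points) with a feedback message; an instructor would define these.
    """
    return [message for condition, message in rules if condition(student)]

rules = [
    (lambda s: s["logins_last_week"] == 0,
     "We haven't seen you this week. Need a hand getting started?"),
    (lambda s: s["attendance"] < 0.5,
     "You've missed a few sessions. Here are the lecture recordings."),
]

evaluate_rules({"logins_last_week": 0, "attendance": 0.9}, rules)
# → one nudge about logging in, nothing about attendance
```

The core algorithm would then decide which of the matched messages to actually send, and when.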

Thanks to xAPI (and Learning Locker), the service harnesses a range of data points for each student, from their participation in an LMS through to attendance, and uses this data, in conjunction with the IFTTT rules, to work out what feedback to send and when. Whenever feedback is triggered for a student, the criteria responsible are appended to the student's persona, which builds up over time.

Students will be able to give their opinion on the feedback they receive (was it helpful? annoying?), which will help the instructor (and Quire) build up an effective bank of feedback options.

Similar to the first iteration of Quire, feedback can be pushed via email, SMS or a range of messaging apps.

As is the nature of experiments, I’m not sure yet if this approach will provide any real value, or even work, but I feel it’s worth exploring and would welcome anyone interested to get involved, just reach out @davetosh. Thanks.

Learning Locker wins MongoDB’s Open Source Innovation Award 2016

Yesterday, Learning Locker was announced as the winner of MongoDB's Open Source Innovation Award 2016. Last year's winner was Facebook so we're in good company.

Learning Locker is a Learning Record Store (LRS). It's open source, generating revenue, and going from strength to strength. This award is testament to all those involved, both within HT2 and the wider open source community.

When I was hired by HT2 to help develop something new and innovative, the initial idea was not to build an LRS but a personal learning tool that would allow learners to take control of their learning data. We quickly realized that first there had to be a standard way for institutions to collect and store data so we pivoted and built an open source LRS; turns out that was a good decision!

On a personal note, my current focus has shifted to a new HT2 product called Red Panda, a personal learning app that uses data stored in Learning Locker to recommend and guide individuals through personalized learning pathways. This new product works together with Learning Locker and Curatr (as well as other LMSs such as Moodle) to offer organizations a complete learning ecosystem that is personalized, social and driven by data.

Read more about the award over on Learning Locker’s blog.

Modern Command Line Part Two: rise of the bots


Following on from my post — Modern Command Lines — which talked about introducing slash commands to RedPanda (a new platform I work on), this post covers our work on bots.

Along with slash commands, we have introduced a native helper bot and hooks for external services, in our case Learning Locker, to provide a bot for users.

There are two things you can do with these bots:

  1. Ask a question and get a reply.
  2. Issue a command and the bot goes off and does it.

Some background

Towards the end of last year, I spent a fair bit of time reading up on bots, command lines and conversational interfaces. The service that really piqued my interest was Slack; they have done a great job providing ways to integrate with the service from full blown apps through webhooks to slash commands and bots. They also brought the command line and text commands to a new audience who may not have used this approach before.

Given the recent hype, there will be people who are rightly wary; if the approach doesn't gain traction outside more technically minded users and products, it will remain an option for power users only.

However, I think conversational interfaces stand a chance of wider adoption. Once you start using the command line it becomes natural: you just chat. Messaging apps are everywhere, so the interface is familiar. No need to hunt around a UI trying to find what you need: just chat and get stuff done.

A key advantage from the product side is being able to provide users with a consistent interface when conversing with external services. I like the idea that services can disseminate their information through a simple, well defined, familiar interface instead of trying to hook into the menu and navigation structure of any given application. And for users, they don’t need to care, they just chat.

RedPanda and Pepper

Our initial experiment centers around a helper bot, Pepper, who is available to help users with a few tasks. These can be triggered via slash commands or through written messages:

  • When is my next task due?
  • Get me the popular #php resources this week?
  • @llbot Show me my latest test scores? (One for the Learning Locker bot)
  • What were my check in stats last week?
  • When is @bbetts next available to give a tutorial?
  • How do I change my notification settings?
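Under the hood, a helper bot like this boils down to matching incoming messages against known intents and dispatching to a handler. A toy sketch, where the patterns and canned replies are purely illustrative (a real bot would combine slash-command parsing with fuzzier intent matching):

```python
import re

def handle(message, handlers):
    """Dispatch a chat message to the first handler whose pattern matches."""
    for pattern, handler in handlers:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return handler(match)
    return "Sorry, I didn't catch that."

handlers = [
    (r"next task due", lambda m: "Your next task is due Friday."),
    (r"popular #(\w+) resources", lambda m: f"Top {m.group(1)} resources this week: ..."),
]

handle("When is my next task due?", handlers)
# → 'Your next task is due Friday.'
```

Captured groups (like the `#php` tag above) are what let one pattern serve a whole family of requests.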

We are also introducing a mechanism that generates shortcuts for the commands users regularly run so they don’t need to keep asking. This helps build up a level of personalisation and automation based on a user’s popular actions.

For example, if you login and always run the same command to access the latest learning resources, you could be prompted:

Pepper: I've noticed you regularly access your latest resources. Would you like me to do this for you upon login?

Replying 'yes' sets up a shortcut command that automatically fetches the latest resources for the user when they log in.

In the case of Learning Locker (LL), a user might like a weekly summary of their learning data. So, they would invite the LL Bot into the conversation and ask it to sort this out.

User: @llbot Can you send me a weekly summary of my learning data every Friday?

LLBot: Done.

The bot picks this up and goes off to work, building, curating and setting up the summary for delivery each Friday.

These are trivial examples but hopefully illustrate the point if you’re not familiar with this concept.

I should point out that this is optional and not required to use RedPanda. For now it is aimed at those users who like to experiment.

Learning Locker, data and intelligent bots

The integration with Learning Locker is an exciting angle, providing the opportunity to explore a new learning landscape powered by learning data and assisted by intelligent bots. I think this could open up new, innovative application opportunities.

Using Learning Locker's powerful event-driven "if this, then that" functionality, together with a customizable bot that helps users make use of their learning data in a friendly, simple, chat-like interface, could be pretty neat. The bot might help identify areas needing improvement, suggest resources and construct learning paths, all underpinned by data.

A Personal Assistant

At the moment the focus is on building this functionality into RedPanda; however, working on this project has got me thinking about abstracting it out into a standalone, open service. Something simple that lets people create and customise their own helper bot, an assistant, to aid in daily work, learning and training.

But, first things first, let’s see how it goes with RedPanda. At this stage we’re just experimenting and there is so much to learn. Will it work out as well as I imagine in my head? I’m not sure but it’s going to be fun trying.

If you would like to follow RedPanda development, check out our Twitter account @redpandaapp, or, if learning analytics are more your thing, check out @learning_locker.
