Using technology to predict the future

One of my clearest memories of teacher training is sitting down with my brilliant mentor after another unsuccessful lesson, and her saying to me that, if I wanted to improve the outcomes in my lessons, I needed to know what students were going to find difficult in what I was teaching. At the time I nodded in a non-committal way but internally I was fuming – how could I possibly know what students were going to find difficult before they’d had a go at the lesson?!

Of course I know now that my mentor was absolutely right, as she so often was: it is possible to predict which concepts, texts or activities are going to particularly challenge students in lessons. It is also part of our job as teachers to ensure that these difficulties don’t prevent students from accessing the learning in the lesson. However, all teachers do this imperfectly, and some barely do it at all. This issue of knowing our students and knowing the impact of our teaching is one which, I believe, technology is perfectly positioned to help tackle.

One of the strengths of technology is its capability to store, analyse and compare huge data sets; one of the strengths of teachers is exploring complex topics with students. The issue at the moment is that, through blended learning and other similar approaches, we are allowing technology to do the exploring, whilst teachers are encouraged to spend hours crunching data to try to prepare differentiated, targeted lessons. That is not to say that blended learning is bad, but perhaps technology and teachers could better play to their strengths.

What we need, then, is a system for gathering and analysing the rich data that teachers are bombarded with on a daily basis: the kind of data that it is simply not possible for the human brain to store, process and make sense of for the number of students most of us teach. Teachers’ difficulty in using this data is compounded by the apparent gap between learning and performance in lessons: what students retain is not necessarily what they demonstrate, so actually knowing our students’ starting points each lesson is virtually impossible. It turns out, then, that I’m only educated guesswork away from being back in the situation I found myself in during my training year…

So, what could the role of technology be in solving this problem? In this post I am going to outline what I believe would be some components of an ‘ideal’ system to achieve such a goal; in my next post I will describe what technological and time limitations have forced us to look into adopting in my local area.

Ideally, we need a method to capture student ‘performance’ at key points in every lesson: this could be via hinge questions, reading comprehension questions, true-or-false and match-up activities, multiple choice, or exit tickets completed on 1:1 devices or BYOD. This would begin to provide a rich data set on what students were demonstrating in lessons in response to the learning activities; this is not dissimilar to a lot of assessment tools already out there, only the data is not systematically gathered. So what to do with this data?
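Before getting to that, here is a minimal sketch (purely illustrative; the field names are my own invention, not any particular product’s) of the kind of record such a capture system might store for each in-lesson check:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ResponseRecord:
    """One student's answer to one in-lesson check (hinge question, exit ticket, etc.)."""
    student_id: str
    lesson_id: str
    objective: str          # the learning outcome the check targets
    question_type: str      # e.g. "hinge", "multiple_choice", "exit_ticket"
    correct: bool
    seconds_taken: float
    answered_at: datetime

# Example: one response captured from a 1:1 device during a lesson (invented data)
record = ResponseRecord(
    student_id="S042",
    lesson_id="ENG-Y9-UNIT2-L03",
    objective="identify persuasive techniques",
    question_type="hinge",
    correct=False,
    seconds_taken=48.0,
    answered_at=datetime.now(),
)
```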

Companies like Knewton use a sophisticated algorithm to judge mastery of the questions set by looking at time taken, number of mistakes and student learning history. All responses feed back into Knewton’s ‘Knowledge Graph’, a representation of how parts of the curriculum connect together and how this knowledge relates to itself (see Knewton’s White Paper for more information), creating an ever-growing understanding of how individual and general student learning connects and proceeds. In the first instance, an engine like Knewton’s would allow us to automatically set homework to reinforce student-specific learning or to address misconceptions.
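Knewton’s actual model is proprietary and far more sophisticated than anything I could show here, but a toy sketch can illustrate the two ideas at work: a graph of how objectives depend on one another, and a mastery estimate that weighs correctness and time. Everything below, from the weightings to the threshold, is invented for illustration:

```python
# A toy 'knowledge graph': each objective lists its prerequisites.
KNOWLEDGE_GRAPH = {
    "use quotations as evidence": ["identify persuasive techniques"],
    "identify persuasive techniques": ["understand audience and purpose"],
    "understand audience and purpose": [],
}

def naive_mastery(correct_answers: int, attempts: int, avg_seconds: float,
                  expected_seconds: float = 60.0) -> float:
    """Crude mastery score in [0, 1]: accuracy, discounted if answers are slow.

    Purely illustrative weighting, not any vendor's algorithm.
    """
    if attempts == 0:
        return 0.0
    accuracy = correct_answers / attempts
    speed_factor = min(1.0, expected_seconds / max(avg_seconds, 1.0))
    return round(0.8 * accuracy + 0.2 * speed_factor, 2)

def ready_for(objective: str, mastery: dict[str, float], threshold: float = 0.7) -> bool:
    """A student looks 'ready' for an objective once all prerequisites appear mastered."""
    return all(mastery.get(prereq, 0.0) >= threshold
               for prereq in KNOWLEDGE_GRAPH.get(objective, []))

mastery = {"understand audience and purpose": 0.85,
           "identify persuasive techniques": naive_mastery(6, 10, 75.0)}
print(ready_for("use quotations as evidence", mastery))  # False: prerequisite not yet secure
```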

Perhaps more importantly, in the context of data gathering, a profile of each student’s learning would be built up over time, showing their strengths and weaknesses against whatever our learning outcomes were for a particular unit. This would enable us to generate much better records of when students had demonstrated mastery of objectives in lessons and at home.
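As a sketch of what that record-keeping could reduce to (again with invented data and objective names), individual responses might be rolled up into a per-objective profile for each student:

```python
from collections import defaultdict

def build_profile(responses: list[tuple[str, bool]]) -> dict[str, dict[str, float]]:
    """Roll individual (objective, correct) responses up into a per-objective profile."""
    tally: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # objective -> [correct, attempts]
    for objective, correct in responses:
        tally[objective][1] += 1
        if correct:
            tally[objective][0] += 1
    return {obj: {"attempts": attempts, "success_rate": round(correct / attempts, 2)}
            for obj, (correct, attempts) in tally.items()}

# One student's responses gathered across lessons and homework (invented data)
responses = [("identify persuasive techniques", True),
             ("identify persuasive techniques", False),
             ("use quotations as evidence", True),
             ("use quotations as evidence", True)]
print(build_profile(responses))
# {'identify persuasive techniques': {'attempts': 2, 'success_rate': 0.5},
#  'use quotations as evidence': {'attempts': 2, 'success_rate': 1.0}}
```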

Of course, simply demonstrating mastery at one moment doesn’t mean that it stays with students: the computing power of such a system could also correlate the frequency of mastery of particular objectives with student performance in teacher-marked assessments, comparing what performance in lessons and homework (and even what order of activities) led to the best outcomes. Technology is also perfectly placed to prompt students to revisit key threshold concepts at regular intervals, through quick tests or extra reading, to embed their learning in the longer term.
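Purely as an illustration of the revisiting idea (the intervals below are invented, not evidence-based), a system could schedule prompts at expanding gaps after a concept is first secured:

```python
from datetime import date, timedelta

def revisit_dates(mastered_on: date, gaps_in_days=(2, 7, 21, 60)) -> list[date]:
    """Expanding review schedule for a threshold concept; the gaps are illustrative only."""
    return [mastered_on + timedelta(days=g) for g in gaps_in_days]

for d in revisit_dates(date(2016, 9, 12)):
    print(d.isoformat())
# 2016-09-14, 2016-09-19, 2016-10-03, 2016-11-11
```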

So far, so complex: how would this help IT-agnostic teachers? Well, with a few years’ worth of data, or data gathered from lots of different schools, it would be possible to use a student’s position on a ‘knowledge graph’ to start predicting which activities, lessons and sequences of lessons would work best for them before a lesson was taught. This would be especially true if each piece of ‘performance’ data for a student were linked to an online record of the teaching that had preceded it, be that a lesson plan, resource or homework task.

Imagine a moment when teachers can enter their unit goals for a particular class and activities from a global repository are suggested for each student based on their past profile; or where teachers enter their activities and students are colour coded by the likelihood of their accessing each activity, based on their position on a ‘knowledge graph’. This would support teachers at all levels of experience in predicting what their students would find difficult (or what the right level of difficulty would be) and help to avoid wasted learning time for students.
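To give a feel for what that colour coding might boil down to (the predicted likelihoods would come from the kind of model sketched above; the thresholds here are invented), a RAG rating per student could be as simple as:

```python
def rag_rating(predicted_success: float) -> str:
    """Translate a predicted likelihood of accessing an activity into a RAG colour.

    The thresholds are illustrative; a real system would tune them against outcomes.
    """
    if predicted_success >= 0.75:
        return "green"
    if predicted_success >= 0.5:
        return "amber"
    return "red"

# Hypothetical class list with model-predicted likelihoods for one planned activity
predictions = {"S042": 0.82, "S077": 0.55, "S101": 0.31}
for student, p in predictions.items():
    print(student, rag_rating(p))
# S042 green / S077 amber / S101 red
```

In practice the thresholds, and the predictions feeding them, would need validating against real outcomes before any teacher relied on them for planning.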

Of course, such a system is way beyond the capabilities of a group of teachers adapting Google for Education or Office 365, and would require an enormous commitment from a technology firm with significant resources, not least because it would need to be intuitive, reliable and focused on teaching and learning. There are some similar systems already in existence, such as Waggle (which is powered by Knewton) or Illuminate (which gathers and analyses teacher assessment data), but these are limited in their scope and scale: Waggle requires the central creation of paid resources and a constrained curriculum, and Illuminate still requires regular teacher inputting of data. To have a transformative effect, such a system would need to be accessible to all and cover all subjects.

The potential positive outcomes for education could be enormous: we could begin to overcome our issue of not knowing students’ starting points and differentiate our curricula more effectively. With enough schools involved we could remove the guesswork about how student learning best proceeds and about what the threshold concepts are across our curriculum. Imagine a school’s or a country’s whole curriculum mapped as a self-referencing ‘knowledge graph’! Researchers could also make use of the data generated to explore and identify effective teaching and learning, and teachers could focus on planning lessons without having to crunch the numbers beforehand.

What I have described in this post is only a small element of what a complete system might look like (a fuller version could also include in-lesson adaptivity, the integration of teacher assessment and wider use of the big data generated), but hopefully it gives some idea of how ambitious we can be for technology in education.
