Tackling ‘Life after Levels’ with technology

I was recently asked by the Federation to which our school belongs to plan a ‘Life after Levels’ assessment system with some colleagues – and wanted to share here how embracing technology has influenced the design and implementation of the system. The account below is focused on the particular problems of assessing history – but hopefully some of the principles are relevant to other subjects.

We quickly identified the conflicting demands being placed on such a system – see the diagram below, with the blue box representing the immediate concerns of teachers, red those in the medium term and for heads of department and green for data managers:

[Diagram: the conflicting demands on the assessment system, grouped into blue, red and green boxes]

We have attempted to prioritise the blue and red requirements, which has sometimes been difficult because of the power wielded by data managers in the current climate of high accountability.

However, we hope we’ve reached a reasonable compromise – much of what we’ve decided to create is based on Alex Ford’s excellent guidance, whilst also taking into account that different schools in the federation will (hopefully) want to tailor their curriculum to be coherent and connected across the key stages (as suggested by Michael Fordham).

The system we’ve designed looks like this, and is all based in Access on Office 365. All of the data from the 11 schools in the federation will be anonymised and stored centrally to allow comparisons (this is our small attempt to make use of data in the way I mentioned in my last post):

[Diagram: overview of the components of the assessment system]

I will produce future posts going into each component in more detail, but the design is supposed to deal with some of the key problems facing assessment in history:

Problem 1: A ‘best fit’ approach is used to ‘level’ work, often divorced from content:

The mark scheme generator uses a core set of marking criteria which can be mixed and matched according to the assessment question posed. They have to be customised with key content dependent upon the nature and scope of the assessment question. When marking, teachers then select the statements from the criteria which best match the work of their student. The relative ‘value’ of these statements is stored behind the scenes.
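As a concrete illustration of this idea – the concepts, statement texts and values below are entirely invented, not our actual criteria – the core of the mark scheme generator could be modelled something like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    concept: str      # e.g. 'causation', 'use of evidence'
    text: str         # what the teacher sees, customised with key content
    value: int        # relative worth, stored behind the scenes

def build_mark_scheme(bank, concepts):
    """Mix and match: pull every statement for the concepts this question assesses."""
    return [s for s in bank if s.concept in concepts]

def score(selected):
    """Total the hidden values of the statements a teacher selected as best matching the work."""
    return sum(s.value for s in selected)

# Hypothetical core criteria bank
bank = [
    Statement('causation', 'Identifies a relevant cause of the revolt', 1),
    Statement('causation', 'Explains how causes combined to trigger the revolt', 3),
    Statement('evidence', 'Quotes a source without comment', 1),
    Statement('evidence', 'Evaluates the reliability of a quoted source', 3),
]

scheme = build_mark_scheme(bank, {'causation', 'evidence'})
marked = [scheme[1], scheme[2]]  # statements judged to best match the student's work
print(score(marked))  # 4
```

The point of the sketch is the separation of concerns: teachers only ever interact with the statement texts, while the relative values stay behind the scenes until a number is needed.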

Problem 2: Residual knowledge is not well planned for (this is linked to the idea of spaced repetition and therefore may be relevant to other subjects also):

In history the choice and sequencing of topics and concepts in a curriculum is key, and this can be reinforced by the use of regular fact tests which recap key previous content before linked topics are taught. The idea is that students will become better able to connect key ideas from one period or topic to another. We’ve begun to experiment with Office Mix to make the creation of rich multiple-choice quizzes easy, but it is still in beta and therefore has limited functionality (what would be great is if it could link its analytics into Access on Office 365, but we won’t hold our breath!).
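For colleagues in other subjects interested in the spaced-repetition angle, the scheduling logic behind such recap tests can be very simple. This is a toy sketch – the doubling rule and the intervals are illustrative, not something we have settled on:

```python
from datetime import date, timedelta

def next_review(last_review, interval_days, passed):
    """Toy spaced-repetition rule: double the gap after a successful
    fact test, reset to one day after a failure."""
    interval = interval_days * 2 if passed else 1
    return last_review + timedelta(days=interval), interval

# A student passed a fact test on 31 January after a 7-day gap:
due, interval = next_review(date(2016, 1, 31), 7, passed=True)
print(due, interval)  # 2016-02-14 14
```

Even this crude rule captures the essential behaviour: content a student recalls reliably drifts further apart in the schedule, while shaky content comes back quickly.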

Problem 3: Teachers spend a lot of time writing similar comments on multiple assessments when they could be engaging with the specifics of student answers:

We crowd-source the most effective comments into comment banks; teachers can then select the most appropriate from a list suggested on the basis of the statements they have already selected for the piece of work (or they can write their own). These can be printed on stickers, freeing the teacher to annotate the work and set improvement tasks for the students.
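A minimal sketch of how such a bank might rank its suggestions – the statements, comments and usage counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical comment bank: each marking statement maps to a tally of the
# crowd-sourced comments teachers have previously attached to it.
comment_bank = {
    'no source evaluation': Counter({
        'Ask: who wrote this, and why might that matter?': 12,
        'Check the provenance line before quoting.': 5,
    }),
    'weak linkage of causes': Counter({
        'Use connectives to show how one cause fed another.': 9,
    }),
}

def suggest_comments(selected_statements, top_n=2):
    """Pool the comments linked to the selected statements and return
    the most frequently used ones first."""
    pooled = Counter()
    for statement in selected_statements:
        pooled += comment_bank.get(statement, Counter())
    return [comment for comment, _ in pooled.most_common(top_n)]

print(suggest_comments(['no source evaluation']))
```

Because the counts grow as teachers mark, the suggestions gradually converge on the comments colleagues actually find most useful.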

Problem 4: Students and teachers are not well equipped to retain detailed information about performance in assessments, which makes it difficult to reinforce and apply feedback in the medium term (especially between years):

The comments from the comment banks can be stored in a student dashboard in Office 365, hopefully along with their fact test scores and, eventually, photos of their work. One of my colleagues is already experimenting with students reviewing their previous work via structured online tasks as homework prior to their next assessment to help them focus on their areas of improvement.

Problem 5: Meaningful comparison between classes and schools is difficult because of the blunt instrument of national curriculum levels:

The richer data generated using similar analytical mark-schemes should allow comparisons between teachers and schools, which may show groups of students performing significantly above the average, potentially provoking classroom research and opportunities for CPD.

Problem 6: At some point data managers want numbers attached to students’ work and teachers are under pressure to show ‘progress’:

As we are moving to a cohort-referenced system at GCSE, it seems worth avoiding a criterion-referenced system at KS3. By gathering together all of the data across the 11 schools, we hope to be able to cohort-reference students at any reporting point by combining all of the data available about their performance (formal assessments and multiple-choice quizzes). We realise that this will require some statistical trickery, but the output can hopefully be in whatever form individual schools require, and will be more valid.
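Part of that ‘statistical trickery’ could begin with simple percentile ranking against the pooled federation cohort. A sketch, assuming (unrealistically) that the combined scores are already on a comparable scale – in practice they would first need standardising:

```python
def percentile_rank(score, cohort):
    """Percentage of the pooled cohort scoring at or below `score`."""
    at_or_below = sum(1 for s in cohort if s <= score)
    return 100 * at_or_below / len(cohort)

# Invented combined scores, pooled (anonymised) across the 11 schools
cohort = [34, 41, 45, 52, 55, 58, 63, 67, 71, 80]
print(percentile_rank(63, cohort))  # 70.0
```

A percentile like this can then be mapped onto whatever reporting scale an individual school wants, without ever pretending the underlying number is a criterion-referenced ‘level’.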

I am sure that such a system could not run without the help of technology. Whether it is better than any alternatives I am not sure – any and all feedback welcome.


Using technology to predict the future

One of my clearest memories of teacher training is sitting down with my brilliant mentor after another unsuccessful lesson, and her saying to me that, if I wanted to improve the outcomes in my lessons, I needed to know what students were going to find difficult in what I was teaching. At the time I nodded in a non-committal way but internally I was fuming – how could I possibly know what students were going to find difficult before they’d had a go at the lesson?!

Of course I know now that my mentor was absolutely right, as she so often was: it is possible to predict which concepts, texts or activities are going to particularly challenge students in lessons. It is also part of our job as teachers to ensure that these difficulties don’t prevent students from accessing the learning in the lesson. However, all teachers do this imperfectly, and some barely do it at all. This issue of knowing our students and knowing the impact of our teaching is one which, I believe, technology is perfectly positioned to help tackle.

One of the strengths of technology is its capability to store, analyse and compare huge data-sets; something that teachers are good at is exploring complex topics with students. An issue at the moment is that we are, through blended learning and other similar approaches, allowing technology to do the exploring, whilst teachers are encouraged to spend hours crunching data to try and prepare differentiated, targeted lessons. That is not to say that blended learning is bad, but that perhaps technology and teachers could better play to their strengths.

What we need then is a system for gathering and analysing the rich data that teachers are bombarded by on a daily basis. The kind of data that it is just not possible for the human brain to store, process and make sense of for the number of students which most of us teach. Teachers’ difficulty in using this data is compounded by the fact that there is also seemingly a difference between learning and performance in lessons and that what students retain is not necessarily what they demonstrate: therefore actually knowing our students’ starting points each lesson is virtually impossible. It turns out then, that I’m only educated guesswork away from being in the same situation I found myself in my training year…

So, what could the role of technology in solving this problem be? In this post I am going to outline what I believe would be some components of an ‘ideal’ system to achieve such a goal; in my next post I will describe what technological and time limitations have forced us to look into adopting in my local area.

Ideally, we need a method to capture student ‘performance’ at key points in every lesson: this could be via hinge questions, reading comprehension questions, true-or-false or match-up activities, multiple choice or exit tickets completed on 1:1 devices or BYOD. This would begin to provide a rich data set on what students were demonstrating in lessons in response to the learning activities. This is not dissimilar to many existing assessment tools, except that their data is not systematically gathered – so what to do with this data?

Companies like Knewton use a sophisticated algorithm to judge mastery of the questions set by looking at time taken, number of mistakes and the student’s learning history. All responses feed back into Knewton’s ‘Knowledge Graph’ – a representation of how parts of the curriculum connect together and how this knowledge relates to itself (see Knewton’s White Paper for more information) – creating an ever-growing understanding of how individual and general student learning connects and proceeds. In the first instance, an engine like Knewton’s would allow us to automatically set homework to reinforce student-specific learning or address misconceptions.

Perhaps more importantly, in the context of data gathering, over time a profile of students’ learning would be built up: showing their strengths and weaknesses against whatever our learning outcomes were for a particular unit. This would enable us to generate much better records of when students had demonstrated mastery of objectives in lesson and at home.

Of course, simply demonstrating mastery at one moment doesn’t mean that this stays with students: the computing power of such a system could also correlate the frequency of mastery of particular objectives with student performance in teacher-marked assessments – comparing what performance in lessons and homework (and even what order of activities) led to the best outcomes. Technology is also perfectly placed to prompt students to re-visit key threshold concepts at regular intervals, through quick tests or extra reading to embed their learning in the longer-term.

So far, so complex – how would this help IT-agnostic teachers? Well, with a few years’ worth of data, or data gathered from lots of different schools, it would be possible to use a student’s position on a ‘knowledge graph’ to start predicting what activities, lessons and sequences of lessons would work best for them before a lesson was taught. This would be especially true if each piece of ‘performance’ data for a student were linked to an online record of the teaching that had preceded it – be that a lesson plan, resource or homework task.

Imagine a moment when teachers can enter their unit goals for a particular class, and activities from a global repository are suggested for each student based on their past profile; or where teachers enter their activities and students are colour coded in terms of the likelihood of their accessing the activity based on their position on a ‘knowledge graph’. This would support teachers of all levels of experience in terms of predicting what their students would find difficult (or what the right level of difficulty would be) and help to avoid wasted learning time for students.
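To make the colour-coding idea concrete, here is a toy sketch of prerequisite-based readiness prediction. The concepts, mastery scores and thresholds are all invented, and this is in no way Knewton’s actual algorithm – just the simplest possible version of the principle:

```python
# Toy knowledge graph: each concept lists its prerequisite concepts.
prerequisites = {
    'causes of WW1': [],
    'trench warfare': ['causes of WW1'],
    'Treaty of Versailles': ['causes of WW1', 'trench warfare'],
}

def predicted_readiness(student_mastery, concept):
    """Estimate readiness for a concept as the mean mastery (0-1) of its
    prerequisites; 1.0 if it has none."""
    prereqs = prerequisites[concept]
    if not prereqs:
        return 1.0
    return sum(student_mastery.get(p, 0.0) for p in prereqs) / len(prereqs)

def rag_rating(readiness):
    """Colour-code the likelihood of a student accessing the activity."""
    return 'green' if readiness >= 0.8 else 'amber' if readiness >= 0.5 else 'red'

# Invented mastery profile for one student, built up from past performance data
student = {'causes of WW1': 0.9, 'trench warfare': 0.4}
readiness = predicted_readiness(student, 'Treaty of Versailles')
print(rag_rating(readiness))  # amber
```

A real system would weight edges, decay mastery over time and learn the graph itself from data, but even this crude average shows how a position on the graph translates into an at-a-glance prediction for planning.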

Of course such a system is way beyond the capabilities of a group of teachers adapting Google for Education or Office 365, and would require an enormous commitment from a technology firm with significant resources, not least because it would need to be intuitive, focused on teaching and learning, and reliable. There are some similar systems already in existence, such as Waggle (which is powered by Knewton) or Illuminate (which gathers and analyses teacher assessment data), but these are limited in their scope and scale: Waggle requires the central creation of paid resources and a constrained curriculum, and Illuminate still requires regular teacher inputting of data. To have a transformative effect, such a system would need to be more accessible to all and for all subjects.

The potential positive outcomes for education could be enormous: we could begin to overcome our issues in not knowing students’ starting points and more effectively differentiate our curricula. With enough schools involved we could remove the guesswork about how student learning best proceeds and what the threshold concepts are across our curriculum – imagine a school’s or country’s whole curriculum mapped as a self-referencing ‘knowledge graph’! Researchers could also make use of the data generated to explore and identify effective teaching and learning, and teachers could focus on planning lessons without having to crunch the numbers beforehand.

What I have described in this post is only a small element of what a complete system might look like (including in-lesson adaptivity, the integration of teacher assessment and the use of the big data generated), but hopefully it gives some idea of how ambitious we can be for technology in education.

Technology in education today

I’ve been inspired to begin this blog as a response to the ever increasing focus on technology in education. I’ve watched, I suspect like many colleagues, as schools have invested more time and resources in increasing their IT capacity. I’ve become uneasy about this without really being sure why – it’s certainly not because I don’t think it’s potentially worthwhile – so I decided to explore the issues further.

I was introduced to the SAMR model by a colleague and, as a result, wondered whether my unease was over staff simply swapping poster paper for PowerPoint – this was a huge focus in my training, where we were discouraged from using technology unless it taught students both about history and about technology at the same time – but I realised that this wasn’t really valid, as staff were producing some excellent ICT-based lessons.

Then I came across the TPACK model, which made me wonder if it was an issue of training for staff: are we able to make informed decisions about when we should and shouldn’t be using technology? Or about what technology we should use? Well actually – probably not, but again this still didn’t get to the heart of my concerns.

I did briefly consider whether we weren’t using technology in an ambitious enough way – there is some suggestion (mainly from the USA) that teachers are resisting a de-centralisation of education because of fear over their professional status. The suggestion is that we could solve disenfranchisement with education if we allowed students to focus on doing what they wanted to do and how they wanted to do it rather than having to mediate it all through teaching staff and their ideologies and experiences about education.

None of these reasons adequately explained my unease, however. I certainly don’t believe that technology should revolutionise education by completely scrapping a model that has been established for over 2,000 years – a model which does work effectively in many schools in enormously challenging circumstances (and which, to a history specialist, makes the American goals described above bring to mind the less than successful reforms of the Soviets after the 1917 revolution!). Instead, I came to realise that the problem was not that education hadn’t been sufficiently revolutionised by technology, but that technology hadn’t been sufficiently revolutionised by education.

Technology should be enabling more teachers to work like those who are performing most effectively in schools across the world, allowing all teachers to: quickly get to know students’ strengths and weaknesses; tailor and adapt their lessons to these needs; provide timely, focused feedback; share best practice with colleagues; and, yes, engage students. But technology is not yet fulfilling this potential.

I see two reasons for this, both linked to the systems being developed around computing and education. The first is the big technology firms: whilst Google for Education and Office 365 both have their evangelists, and undoubtedly offer some benefits, they are essentially repackaged enterprise products and therefore don’t offer anywhere near the functionality required to really transform education. They rely on the goodwill of the profession and huge amounts of teacher time to make them fit for purpose. Teacher time is always at a premium, so this is clearly not sustainable. This is compounded by the fact that these products were not designed from the ground up to solve the challenges facing teachers today in a co-ordinated way, and they therefore fall short of what technology could be doing.

The second reason is linked to the other big technology firm in education, Apple. With the adoption of iPads in schools, some parties in education have been encouraged to tackle the challenges facing teachers through the creation of standalone apps. Some of these are excellent, having as their starting point a clear challenge facing teachers that couldn’t be solved in the same way without the use of technology (an M or R in the SAMR model). However, this leads to atomised solutions to the myriad challenges faced by teachers, where they have to use multiple apps, which don’t communicate with each other, to deal with their day-to-day activities – once again wasting teacher time and energy.

There are some bright spots out there – Knewton, for example, or Illuminate – which focus on using technology to solve a specific problem. However, even these are limited in the breadth of functionality they offer, and therefore miss some opportunities.

It has therefore become apparent to me that there are enormous opportunities in using technology in education, and some amazing work is being done across the globe. However, what frustrates me is that, despite technology having a previously unseen ability to link together the work of teachers and enable collaboration, we are still duplicating work and wasting teacher time through the flaws in the systems we have been forced to adopt. This blog will therefore have two purposes: firstly to share some small examples of solution focused use of ICT employing current technologies, and ask for feedback; and secondly to act as a sounding board for what teachers actually want from their technology. In this way, it will hopefully act as a spur to the big firms to design some systems for education that are fit for purpose: needless to say, everyone concerned would reap enormous benefits from this.