If there’s a weak link in digital higher education, it’s instructional media. Too often college students are presented with repurposed text, cheaply and quickly produced by people with limited skills, time, and incentives. This is using the Internet as a distribution mechanism – nothing more.
A new company, Adapt Courseware, is taking a different approach to instructional media, though. I see great promise. I asked CEO John Boersma to provide some details.
Keith: To the extent that we’ve paid any attention to instructional media, our focus has been on costs (e.g. OER). Given the degree to which your model focuses on quality, am I right to assume that you see a growing recognition of the importance of quality on the horizon? What do we know about the instructional value of high-end instructional media?
John: In general Adapt Courseware strives to deliver the best online learning experience in all its aspects. Effective multimedia is indeed one of our five core design principles, along with mastery learning, optimal challenge, student choice, and social learning.
There is quite a rich body of empirical research defining and supporting the value of effective multimedia, which is well detailed in Richard Mayer’s excellent book Multimedia Learning. For example, Mayer shows that five out of five studies conclude that students learn better from graphics and narration alone than from graphics, narration, and on-screen text. These studies generally show a large effect, and yet to this day you see far too much redundant on-screen text in online curricula. Fourteen out of 14 studies show that students learn better when extraneous visuals are removed, and yet much MOOC content coming out even in 2012 distracts the student with talking-head videos of professors, or even “writing-hand” videos.
So you put this point in just the right way – we start by focusing on the quality we are trying to achieve, and then determine the cost to support that. This is quite different from most online curriculum development, which starts with a fixed budget – one that is generally too small. Quality for us ends up being defined in terms of a mix of effective visual styles that let the eyes and ears work together: good animation with high quality voice over, not too much text on screen, minimal on-screen time for the presenter, and so on. Good instructional video should approach the design and quality of good documentary film, where an audience that is not captive must be engaged.
At Adapt Courseware, we are currently engaged in a vigorous internal debate about these issues, because quality is expensive. We ask – are we at risk of providing a higher level of quality than the market will pay for, especially in a climate of understandable downward pressure on curriculum costs? My view is that the answer will emerge from our efficacy testing – that we need to continuously assess the costs of production against the quality of the learning outcomes. For now, we are making sure we err, if at all, on the side of quality. In the long run I feel confident that showing the benefits of what we do in producing better learning outcomes and higher course completion rates will support the needed costs.
Keith: “Adaptive” can mean many things. How exactly is Adapt Courseware adaptive?
John: “Adaptive” means that the learning environment adapts to the student, so that each student receives an individualized and optimized learning experience.
In a narrow sense, our platform is adaptive in that it delivers multimedia learning interactives adaptively as needed to support our mastery learning and optimal challenge design principles. We start by fine-graining course content – defining 200 or more learning topics for a typical three-credit course, each with its own set of learning objectives. Each topic gets its own multimedia suite – a three- to six-minute instructional video, an ebook section, and a set of learning interactives based on visuals from the video. These learning interactives are arranged in an “adaptive stack.” That is, they are arranged from the simplest single-concept interactives – does a student remember and understand key terms – to the most complex, applied, multi-concept interactives. You can think of the adaptive stack aligning with Bloom’s taxonomy of learning, from simple remembering and understanding exercises to complex analysis, critical thinking, and application exercises.
As students demonstrate mastery of concepts by successfully completing multimedia activities, they accumulate points on a “mastery meter” for that topic. We find that this real-time feedback to students is highly motivational. It’s like a video game – students can’t resist running up the score! That’s how we support mastery learning and how we can provide highly measurable learning outcomes to institutions. If we have an excellent learner like you, Keith, the system will quickly adapt and provide you only the three or four most complex, multi-concept activities you need to demonstrate topic mastery. A more typical student might require five, 10, or even 15 activities to reach mastery.
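The mastery meter described above can be sketched in a few lines. This is a hypothetical illustration, not Adapt Courseware’s actual implementation: the class name, point values, and threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a per-topic "mastery meter": first-try-correct
# activities contribute points toward a mastery threshold, and the running
# percentage provides the real-time, game-like feedback described above.
# All names and numbers here are illustrative assumptions.

class MasteryMeter:
    def __init__(self, threshold=100):
        self.points = 0
        self.threshold = threshold  # points needed to declare topic mastery

    def record(self, activity_points):
        """Add points earned on one activity; return current mastery percent."""
        self.points += activity_points
        return min(100, round(100 * self.points / self.threshold))

meter = MasteryMeter(threshold=100)
meter.record(25)  # one complex activity completed -> 25% mastery
meter.record(25)  # a second -> 50% mastery
```

A strong student completing a few high-value, multi-concept activities would fill the meter quickly; a more typical student accumulates the same total across many simpler activities.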
This is possible because the interactives are primarily designed to instruct, and only secondarily to assess. Any incorrect response will produce a hint designed to coach the student to the correct answer. Mastery points can’t be earned in this way – they are only available if an activity is completed correctly on the first try – but the hint engine will deliver to a student the most helpful hint based on the student’s incorrect response. The hint engine is really a second way in which the system is adaptive – different incorrect responses produce different hints.
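The response-keyed hint engine above amounts to a mapping from each anticipated incorrect answer to a hint targeting that misconception, with mastery points awarded only for a first-try success. Here is a minimal sketch of that idea; the `Activity` class, its fields, and the sample accounting question are illustrative assumptions, not the real system’s API.

```python
# Hypothetical sketch of a response-keyed hint engine: each anticipated
# incorrect response maps to a hint aimed at that specific misconception.
# Mastery points are earned only when the first attempt is correct.

class Activity:
    def __init__(self, prompt, answer, hints):
        self.prompt = prompt    # question shown to the student
        self.answer = answer    # the correct response
        self.hints = hints      # incorrect response -> tailored hint
        self.attempts = 0

    def respond(self, response):
        """Return (correct, hint, mastery_points) for one student response."""
        self.attempts += 1
        if response == self.answer:
            points = 1 if self.attempts == 1 else 0  # first-try-only scoring
            return True, None, points
        hint = self.hints.get(response, "Review the topic video and try again.")
        return False, hint, 0

a = Activity(
    prompt="Straight-line depreciation expenses cost evenly over an asset's life. True or false?",
    answer="true",
    hints={"false": "Straight-line spreads (cost - salvage value) equally across each year."},
)
ok, hint, pts = a.respond("false")  # wrong: tailored hint, no points
ok, hint, pts = a.respond("true")   # right on the second try: correct, but 0 mastery points
```

The key design point is that distinct wrong answers trigger distinct coaching, so the interactive instructs first and assesses second.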
However, getting an activity correct on any of three tries means you move up the stack to more challenging activities. Getting it wrong altogether means you move down the stack to more elementary activities. This is how we strive for optimal challenge for students – since each student gets a set of activities that steers toward their ability level, we reduce the extent to which students are bored by work that is too easy, or frustrated by work that is too hard. This helps with student retention and course completion.
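The up/down movement rule just described can be captured in one small function. This is a sketch under stated assumptions – the function name, the stack indices, and the three-try limit as a parameterized outcome are illustrative, not the production algorithm.

```python
# Hypothetical sketch of the "adaptive stack" movement rule: activities are
# ordered from simplest (index 0) to most complex; answering correctly within
# three tries moves the student up the stack, exhausting all three tries
# moves them down. Names and indices are illustrative assumptions.

def next_level(level, correct, top):
    """Return the next stack index after one activity.

    level   -- current position in the stack (0 = simplest activity)
    correct -- True if answered correctly on any of the three tries
    top     -- index of the most complex activity in the stack
    """
    if correct:
        return min(level + 1, top)  # success: step up toward harder work
    return max(level - 1, 0)        # three misses: step down to basics

# A strong student climbs quickly; a struggling one is eased back down.
level = 2
level = next_level(level, correct=True, top=5)   # moves up to 3
level = next_level(level, correct=False, top=5)  # moves back down to 2
```

Clamping at the top and bottom of the stack keeps every student inside the topic’s range while the rule steers them toward activities near their ability level – the “optimal challenge” idea in miniature.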
That’s how the adaptive stack works today – it’s a good, effective start – but we are working now on far more sophisticated algorithms based on the data generated by large student populations.
Keith: The focus of analytics in digital higher ed has, to date, centered on the needs and interests of the faculty and the institution, rather than the student. But your system also provides students with analytics. Explain?
John: You are right; the student analytics have three audiences – the instructor, the adaptive system itself, and the student. By providing a rich set of analytics to instructors – more than 20 real-time global metrics per student, as well as per-student performance on each learning topic – we give instructors the tools they need to monitor performance closely and intervene where helpful. It’s part of a larger strategy to allow instructors to focus on higher-value activities. There’s a lot of power latent in these metrics – we anticipate some really exciting announcements on that in the future.
For the students, real-time feedback is key. We effectively say to a student, for example: OK, you are currently at a B-minus level of understanding of your current topic. Happy with that? Then it can be beer time. Or, you can work another 10–20 minutes to get that understanding up to 100%. Students generally hang in there – real-time feedback strongly increases active learning time, which is a foundational element of student success.
There’s a theme here – we are always thinking about student motivation and how to support it. Think about a typical online student – let’s call him Joe. Joe works all day at his job, and then when he comes home, it’s kid time. Kid time goes on until eight or ten at night, and then Joe sits down to work on his online course, so he can get a better job. A lot of online learning really does occur between ten in the evening and two in the morning. In many online courses today, Joe is confronted by a 30-page textbook chapter to read, perhaps online. It’s kind of brutal. No wonder course completion rates for general education courses aren’t great. Our approach is much, much more accessible.
Academic object analytics look at the same data through another lens – just how effective is that instructional video, text, or multimedia interactive across all students? We work on continuously improving our content based on this feedback loop.
Keith: There’s growing interest in competency-based learning in higher education; a shift away from “seat-time” and toward measuring learning. What role do adaptive learning systems have in competency-based learning?
John: This is a really fascinating area, and I think we are going to see change here at a pace that will surprise people.
The whole system of higher education today is based on authority: a professor says that a student has a B-level knowledge of accounting, say. A college says a student has learned enough to be educated at the Associate’s degree level. An accreditation body says a college has effective controls in place to make these claims. The system has all been based on authority precisely because actual learning was hard to measure – and it was hard to measure because it was all on paper.
Now, with online learning systems like Adapt Courseware, what the student has learned is objectively measured, defined in terms of hundreds of learning topics per course, and comparable across many institutions and large populations. We don’t need an authority to tell us that Joe has mastered depreciation in a financial accounting course but doesn’t really understand bonds. We can measure that directly, and so can Joe. Joe can then tell anyone he likes what he knows – precisely, accurately, and credibly.
Innovators like Western Governors University have taken important steps in this direction with their competency-based approach. The MOOCs are popularizing this idea in an important way. But only now are the tools needed to do this with a high degree of specificity becoming available. It’s going to be fascinating to watch the impact this trend will have on the accreditation model, and even on student assessment of the costs and benefits of traditional higher education altogether, over the next few years.
Once the system moves to a competency-based model, the goals of the institution flip from “has the student sat in the class long enough?” to “just how efficiently can this topic be taught?” If a student can master an introductory Psychology course in 75 hours, should they be required to do more hours of work? In a seat-time model, yes. In a competency-based model, no. If the curriculum can be improved so it only takes 60 hours, the student and society benefit. This is where adaptive systems come in – each student spends the time they need to reach mastery – no more, and no less.