Why Asynchronous Learning Often Fails, and How We Can Fix It


As the higher education sector continues to face increasing financial pressures, many institutions are expanding their fully online course offerings. In that context, I’ve been thinking a lot about what makes asynchronous online learning work.

It’s not enough to upload content and hope students engage. If we want async learning to be effective, we need to rethink the entire learning experience from the ground up.

That’s why Philippa Hardman’s recent blog post really struck a chord with me. She outlines a simple but powerful model that puts learner agency at the heart of learning design. Hardman emphasises the importance of designing learning materials around the needs of students and then evaluating students’ learning using meaningful measurements.

Because let’s face it, uploading recorded lectures and calling it a module doesn’t cut it anymore.


Designing for Meaningful Learning

We might head into the design process knowing exactly what we want students to learn and why we think it matters. But that’s only half the story. The other half is about the students’ perspective: what’s in it for them? Will they see the value and want to engage with it?

Before we map out or structure content, we need to ask whether the learning outcomes we’re working towards will feel meaningful to the people taking the module. Will students understand what skills they are developing and why it’s worth their time?

This is particularly important in asynchronous modules, where there is less built-in structure and fewer opportunities to inspire students through in-person discussions. For this reason, the learning outcomes need to do more than tick a curriculum box; they need to motivate students by helping them see what they will be able to do, create, or decide by the end, and why it matters beyond the final assessment.

When this purpose is clear to those of us who are creating async content, it becomes much easier to design activities that guide students towards it. We can focus on where learners might need support, how to build momentum, and where to introduce opportunities for curiosity, challenge, and reflection.

This kind of learner-first analysis grounds the design in what really counts: not just what we want to deliver, but what students will actually take with them.


Design & Development: From Passive to Purposeful

To make asynchronous learning work, we need to move beyond passive delivery. That means shifting the focus from ‘What content can I upload?’ to ‘What decisions will students make?’.

Hardman talks about prioritising active decision-making over passive consumption and recall, and I think that’s exactly the shift we need to make.

This could mean:

  • Giving students a choice in how they engage with the material or demonstrate their understanding
  • Embedding decision points into activities like case studies or applied tasks
  • Encouraging students to connect ideas to their own contexts, not just recall information

Good asynchronous design creates space for students to take ownership of their learning while still feeling supported and guided. That doesn’t happen by accident; it happens by design.


Evaluation & Iteration: Measuring What Matters

Too often, we fall back on surface metrics: how many students completed the quiz, how many clicked through the resources. But those numbers do not tell us much about learning.

Hardman’s emphasis on evaluating real-world applications really resonates. What matters more is whether students can do something meaningful with what they have learned. Can they apply it? Can they explain it in their own words? Can they use it to solve a problem or make a decision?

One of the most useful things we can build into async modules is regular, low-pressure reflection:
What have you learned this week? What questions do you still have? How might this apply to your life or work?

These simple prompts not only give us better feedback; they also help students consolidate and personalise what they are learning.


Why This Matters More Than Ever

With the rapid advancement of Generative AI, we shouldn’t just be thinking about assessment design; we should be rethinking the design of the entire module.

With Agentic AI tools like BrowserAI and Magnus AI able to breeze through online quizzes, reflections, and even discussion forums, the challenge is clear: if our modules are built on predictable tasks, we are designing for automation, not for learning. And when learning lacks purpose or genuine engagement, we risk students outsourcing it to AI, not out of laziness, but because the design doesn’t give them a reason not to.

But this isn’t about building AI-proof modules. It is about raising the bar.

The real challenge and opportunity is to design learning experiences that are so engaging, relevant, and human that students will not want to hand them over to AI in the first place.

In a way, AI has highlighted something we have known for years: content-heavy, test-driven learning rarely leads to deep engagement. If we want students to stay motivated and involved, we need to design for curiosity, for personal investment, and for real decision-making.

That’s the kind of learning worth showing up for.


Final Thoughts

Asynchronous learning has huge potential, but only if it is done with care.

It is not about content delivery. It is about designing meaningful experiences that give students a reason to show up, reflect, and take ownership.

And in a world where AI can do the easy stuff for us, we need to focus on the things only humans can do.

3 responses to “Why Asynchronous Learning Often Fails, and How We Can Fix It”

  1. “What have you learned this week? What questions do you still have? How might this apply to your life or work?”

    I also feel this, or something similar (e.g. 3 things you’ve learnt, 2 things confirmed that you thought you knew, 1 question you still have), often works better at the end of a session than stating the LOs at the beginning. If you’re lucky, students learn what you hoped they would, but it can be interesting when most take a totally different message home.

    1. I like those, Emma. That kind of reflective prompt at the end of a session can be so much more powerful than simply stating learning outcomes at the start. It shifts the focus from “Here’s what we want you to learn” to “What did you actually take from this?”, which feels much more aligned with how learning really works in practice.

      And as you say, sometimes what students take away can be quite different from what we intended, and that’s not necessarily a bad thing. It can reveal gaps or, even better, open up new avenues we hadn’t considered.

      Thanks Emma

  2. One idea I didn’t include in the post, but that really shapes how I think about learning design, comes from my background in sport. When you’re training seriously, you don’t go flat out every day. You vary the intensity. A hard interval session might be followed by an easy recovery run or a strength and conditioning day. That variation is essential. Without it, you plateau or, worse, burn out.

    I think the same principle applies to learning. If every week follows the same pattern (watch a video, do a quiz), students can switch to autopilot. But if we build in different types of activity, with varying intensity and purpose, we help students stay engaged and make real progress. One week might be a challenging problem-solving task, another a more reflective discussion or collaborative project.

    I’m writing a short follow-up post on this.
