Are eLearning companies serving fast food for the mind?

Mar 20, 2026

The hard stuff is easy, the soft stuff is hard

Tom DeMarco once observed that the hard stuff is easy and the soft stuff is hard: the things we can measure, optimise and systematise tend to get labelled as difficult problems, while the things that actually matter (judgement, understanding, long-term outcomes) remain stubbornly resistant to clean measures of success. You can see this pattern everywhere once you look for it, but few places illustrate it more clearly than fast food.

Fast food did not take over because it was the best way to nourish people; it took over because it was quick, standardised, scalable and easy to optimise. It is a well-optimised product, assuming you're only measuring the hard stuff. A burger sold is a clean signal, and a drive-through time is a clean signal, but nutrition is messy and long-term health messier still, so over time the industry drifted towards what was visible and trackable and away from what actually mattered.

eLearning is following the same incentives

A lot of eLearning now feels uncomfortably similar, not because anyone set out to lower standards, but because the same incentives are at play. Replace burgers with modules and fries with short clips and the pattern holds: what wins is engagement, watch time and completion, all neat numbers that move in the right direction when content becomes easier to consume. The problem is that these signals are, at best, partial proxies for learning, and often weak ones. Recent research into learning analytics shows that most systems measure observable behaviour such as clicks and time on task, while the deeper cognitive side of learning remains largely invisible, and meta-analyses suggest the relationship between engagement and actual outcomes is only moderate at best. Engagement matters, but it is not understanding.

This is where product thinking quietly shapes the outcome, not through bad intent but through the metrics that define success. If your job is to reduce drop-off and increase completion, you will smooth the path, shorten the content, remove friction and simplify explanations until very little resistance remains. A study of edX videos found that shorter videos perform far better on engagement, recommending segments under six minutes, and while the authors themselves caution that engagement is a necessary condition for learning, not a sufficient one, the measurable signal tends to dominate the decision. Once something can be tracked easily, it becomes the thing that is optimised, and everything else follows.

Understanding requires friction, even if it feels worse

The issue is that real understanding does not tend to emerge from frictionless consumption; it emerges from structure, effort and time spent working through ideas that do not immediately resolve. Research on whole-task learning consistently shows that people learn more effectively when they engage with coherent, structured problems rather than fragmented pieces, and studies on spacing and retrieval practice reinforce the same point. Learning deepens when it is revisited, tested and applied rather than simply consumed. None of this sits comfortably inside a model optimised for short bursts of attention.

There is a psychological trap here that becomes obvious once you look at the evidence, because what feels easy often gets mistaken for what is effective. In the PNAS study by Deslauriers et al., students placed in more active, effortful learning environments went on to perform better on assessments, yet consistently reported feeling as though they had learned less than those in more passive settings, largely because the process itself felt harder and less straightforward. That gap between perceived learning and actual learning matters: if platforms optimise for what feels smooth and immediately rewarding, they are likely to filter out the very conditions that make knowledge stick over time.

AI is accelerating the trend

AI has pushed this dynamic further, because it allows people to bypass the process entirely and jump straight to outcomes. The data coming out of higher education is already telling a clear story, with the vast majority of students now using generative AI in some form and a meaningful proportion using it directly in assessed work. Researchers are increasingly warning that this shifts focus from understanding to output, from learning to performance. The same pattern is visible in software, where AI tools make it possible to produce working code without fully grasping the underlying system, and early experimental evidence suggests this comes at a cost, with AI-assisted learners performing worse on follow-up assessments and struggling more with debugging. The risk is not that people use these tools; it is that they never build the mental models required to operate without them.

This is an incentives problem, not just a product problem

At this point it is tempting to place the blame squarely on product managers, but that is only part of the story. The deeper issue is the measurement regime itself, because when success is defined by engagement, watch time and completions, the rational move is to design for those outcomes. The system does not reward depth, so depth erodes. Over time the platform starts to resemble a fast-food chain for the mind, engineered for convenience and repeat consumption, while intellectual nutrition becomes harder to find.

A different direction

The alternative is not to reject modern formats or return to static textbooks as the default; it is to rethink how digital learning is structured so that it preserves the conditions required for understanding. That means coherent narratives rather than isolated fragments, progression that builds over time, space for reflection, and mechanisms that force recall and application rather than passive recognition. It means treating learning as something that unfolds, not something that is skimmed.

This is where approaches like ExpertEdge point in a different direction, by taking structured, long-form content and reshaping it into courses without stripping away the depth that gives it value. Instead of flattening books into disconnected assets, it builds around their inherent structure, layering video, assessment and interaction in a way that reinforces the original material rather than diluting it. The aim is not to make learning shorter, but to make it more navigable without losing substance.

If eLearning is to avoid becoming the fast food of education, it needs more of this thinking: systems that respect attention but do not reduce learning to whatever is easiest to measure. Otherwise the trajectory is already clear: more engagement, more consumption, and less understanding.