There is a pattern that most L&D professionals will recognise. You design a programme. You bring people together for a workshop. The feedback forms are positive. People say they learned something. And then, three weeks later, nothing has changed. The same meetings happen the same way. The same decisions get made using the same thinking. The training was good. The transfer was not.
This is not a new observation. It has been discussed in L&D circles for decades. But it remains one of the most persistent problems in the profession, and with organisations now spending more than ever on capability building, it is worth asking honestly: why does so much training fail to translate into real behaviour change?
The capability gap that matters most
Most L&D teams are very good at one thing: delivering content. They can build a slide deck, design a workshop agenda, and curate a learning pathway. What many teams are less equipped to do is change how people actually behave when they are back at their desks, under pressure, facing ambiguity, or working with people they find difficult.
And that is the gap that matters. Organisations do not need people who can recite a framework. They need people who can navigate a difficult conversation, reframe a problem when the first approach is not working, or sit with uncertainty long enough to find a genuinely good answer. These are not knowledge problems. They are behaviour problems. And behaviour does not change through information transfer alone.
This is where the real opportunity lies for L&D. Not in delivering more content, but in designing experiences that build lasting capability. The shift is subtle but significant: from “what do people need to know?” to “what do people need to be able to do, and how do we help them practise it until it sticks?”
Why this matters for L&D right now
Three forces are converging to make this more urgent than ever.
First, the skills that organisations need most are human skills. As AI takes over more analytical and administrative work, the remaining challenges are the ones that require judgement, creativity, collaboration, and adaptability. These are not skills you can learn from a video. They require practice, feedback, and reflection over time.
Second, people are busier than ever. Pulling someone out of their work for two days to attend a workshop is increasingly difficult to justify, especially when the evidence shows that much of what is covered will be forgotten within weeks. L&D needs to meet people where they are, in the flow of their work, not in a conference room once a quarter.
Third, leaders are asking better questions. They are no longer satisfied with “we trained 500 people this year.” They want to know what changed as a result. What decisions improved? What conversations got better? What capabilities grew? If L&D cannot answer those questions with evidence, it risks being seen as a cost centre rather than a strategic function.
The common mistake
The most common mistake in capability building is treating it as an event rather than a process.
A two-day workshop can be a powerful experience. It can shift perspectives, build connections, and introduce new ways of thinking. But on its own, it is not enough. Without follow-up, without opportunities to practise in real contexts, without reinforcement that connects back to daily work, the impact fades quickly.
Research on learning transfer consistently shows that the majority of what people learn in a training event is lost within 30 days if it is not reinforced. This is not because the training was bad. It is because the design assumed that understanding equals capability. It does not. Capability comes from repeated practice in real situations, with feedback that helps people adjust and improve.
The second common mistake is designing for knowledge rather than behaviour. Many programmes focus on teaching people about a concept, a framework, or a model, without spending enough time on what it actually looks like to apply it. People leave knowing the theory but unsure how to use it on Monday morning.
A better model
The programmes that actually change behaviour share a few things in common.
They start with a clear picture of where people are now. Before designing anything, they assess current capability. Not through self-reporting, which tends to be unreliable, but through validated tools that reveal how people actually respond to challenge, complexity, and collaboration. This creates a shared starting point and makes development personal rather than generic.
They focus on a small number of specific behaviours. Instead of trying to develop ten competencies at once, they identify the two or three that will make the biggest difference in the current context. Can this team reframe problems more effectively? Can these leaders hold productive tension instead of rushing to consensus? Specificity matters.
They build practice into the rhythm of work. This means short, targeted interventions that happen regularly. A five-minute reflection prompt after a meeting. A peer feedback exercise at the end of a sprint. A micro-learning episode delivered by email that takes less time than a coffee break. These small moments, repeated consistently, build capability far more effectively than infrequent intensive sessions.
They measure what matters. Not satisfaction scores and completion rates, but observable changes in behaviour and performance. Did the team start making different kinds of decisions? Did collaboration improve? Did people get better at navigating uncertainty? This is harder to measure, but it is the only measurement that justifies the investment.
How to apply this in practice
If you are an L&D professional, a facilitator, or a manager responsible for team development, here is a practical way to think about designing for behaviour change.
Step one: Diagnose before you design. Use a validated assessment to understand where your people are now. What are their strengths? Where are the gaps? How do they currently respond to the kinds of challenges they face every day? This evidence should shape everything that follows.
Step two: Define the behaviour, not the topic. Instead of saying “we need a workshop on collaboration,” ask: “What does better collaboration look like in this team, specifically? What are people doing now and what do we want them to do differently?” The more specific you can be, the more targeted your design will be.
Step three: Design a journey, not an event. Start with a short, high-impact session to introduce new thinking and create energy. Then follow it with a sustained series of micro-interventions: reflection prompts, practice exercises, peer conversations, and short learning episodes. Space them out over weeks and months. This is how behaviour change actually works.
Step four: Create accountability without bureaucracy. Pair people up for regular check-ins. Build reflection into existing team meetings. Use dashboards that show progress over time. The goal is to keep the development visible and active without adding administrative burden.
Step five: Measure and adjust. Run the assessment again after a period of development. Compare the before and after. Share the data with leaders. Use it to refine the programme. This creates a feedback loop that makes the whole system smarter over time.
What this looks like in one organisation
A global financial services company wanted to develop its mid-level managers. The brief was familiar: “they need to be better at leading through change.” Previous programmes had focused on leadership models and case studies. Feedback was always positive. But the managers’ teams were not reporting any change in how they were being led.
The new approach started differently. Every manager completed a behavioural skills assessment. The results showed a clear pattern: the group was strong on planning and execution but significantly weaker on reflection and opportunity seeking. They were good at doing but less practised at stepping back, questioning their assumptions, or exploring alternative approaches.
Instead of a week-long programme, the intervention was designed as a twelve-week journey. It began with a half-day activation workshop that introduced the skills framework and helped each manager interpret their own assessment results. Then, over the following weeks, they received two micro-learning episodes per week, each taking around five minutes. The episodes were targeted to the specific skills the group needed to develop and connected directly to the challenges they were facing in their roles.
Midway through, managers came together for a facilitated session to share what they were noticing and practising. At the end of the twelve weeks, they retook the assessment.
The results were significant. Reflection scores increased by 28%. Opportunity seeking improved by 22%. And most importantly, their teams reported noticeable changes in how these managers were showing up in meetings, in one-to-ones, and in how they responded to unexpected challenges. The development had transferred into real behaviour.
A practical checklist for L&D teams
Here is a quick way to evaluate whether your current programmes are designed for lasting behaviour change or just for learning in the moment.
- Does your programme start with an evidence-based assessment of current capability?
- Have you defined the specific behaviours you want to change, not just the topics you want to cover?
- Does the learning continue after the initial session, with regular micro-interventions over weeks or months?
- Are you measuring behaviour change, not just satisfaction and completion?
- Is the learning connected to real work challenges, or does it live in a separate “training” context?
- Do managers and leaders know what their teams are developing, and are they reinforcing it?
- Can you show a before and after comparison of capability?
If you answered no to more than two of these, there is a good chance your programmes are delivering knowledge without building capability. That is not a failure. It is an opportunity to redesign.
Evidence and rationale
The approach described here is grounded in research on learning transfer, behaviour change, and skills development. Key findings that support this model include:
- The Ebbinghaus forgetting curve suggests that without reinforcement, people forget roughly 70% of new information within 24 hours and up to 90% within a month.
- Research on spaced practice demonstrates that distributing learning over time produces significantly better retention and transfer than concentrated sessions.
- Dr Jeanne Liedtka’s decade-long research programme at the University of Virginia’s Darden School of Business has shown that human-centred skills, including reflection, opportunity seeking, and scientific reasoning, are learnable behaviours that improve with structured, sustained practice.
- Studies on microlearning indicate that short, focused learning interventions (under 10 minutes) delivered regularly outperform longer sessions for skill development, particularly when connected to real work contexts.
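The contrast between a one-off event and regular reinforcement can be sketched with the standard exponential model of the forgetting curve, R(t) = e^(-t/S). This is a deliberate simplification: the memory-strength parameter S and the assumption that each short practice moment fully resets retention are illustrative, not empirical claims about any particular programme.

```python
import math

def retention(t_days, strength):
    """Exponential forgetting curve: R(t) = e^(-t/S),
    where S is an assumed memory-strength parameter in days."""
    return math.exp(-t_days / strength)

def retention_with_reinforcement(t_days, strength, interval_days):
    """Simplified spaced-practice model: each reinforcement restores
    retention to 100%, so only the time since the last one matters."""
    since_last = t_days % interval_days
    return math.exp(-since_last / strength)

# Illustrative parameters: S = 5 days, measured 30 days out.
one_off = retention(30, 5)
# Two micro-interventions per week, i.e. one every 3.5 days.
reinforced = retention_with_reinforcement(30, 5, 3.5)

print(f"One-off event, day 30:        {one_off:.1%}")
print(f"Regular reinforcement, day 30: {reinforced:.1%}")
```

Under these assumptions, retention from a single event has collapsed to well under 1% by day 30, while twice-weekly reinforcement keeps it above 60%. The exact numbers are artefacts of the chosen parameters; the shape of the difference is the point.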
None of this means that workshops are obsolete. It means they work best when they are part of a longer journey, not a standalone event.