Research Methodology
Leadership Skill Retention: The Research Behind the Model
Leadership skill retention research spans decades, but translating lab findings into practical models for professional development requires careful synthesis. This page documents the methodology, sources, and design decisions behind our interactive leadership training retention model.
Download the Full Research (PDF)
No signup required. Share freely.
Why We Built This Model
The forgetting curve is one of the most well-established findings in cognitive science, yet it is rarely applied to leadership development in a way that practitioners can use. Most L&D teams know intuitively that training impact fades - but they lack a framework to visualize how fast, compare approaches, or make the case for sustained practice.
We built this model to bridge that gap. It is not a predictive tool - it is an exploratory model that lets practitioners and decision-makers test assumptions against research-grounded parameters and see the implications over a 24-month horizon.
Literature Review & Research Anchors
The model draws on six primary bodies of research:
Ebbinghaus and the Forgetting Curve (1885)
The foundational work on memory decay over time. While Ebbinghaus used nonsense syllables, subsequent research has confirmed that meaningful, contextualized learning decays more slowly - but still decays without reinforcement.
Cook et al. (2011)
A systematic review of instructional design variations in internet-based learning for health professions. Provides evidence on how different delivery methods affect knowledge retention, and how quickly skills decay in applied professional settings.
Tatel & Ackerman (2025)
A meta-analysis on skill retention and decay, with critical distinctions between open skills (adaptive, context-dependent - like leadership) and closed skills (consistent, repeatable). The finding that open skills decay faster than closed skills is central to our model's parameterization. The ~0.08 SD/month decay rate for open skills during active practice is the model's primary research-anchored parameter.
Lacerenza et al. (2017)
A meta-analysis of leadership training effectiveness published in the Journal of Applied Psychology. Provides evidence on which program design features - needs analysis, feedback, spaced sessions, practice - predict greater leadership behavior change.
Ericsson (2004, 2008)
The deliberate practice framework. Establishes that expert performance is maintained through ongoing structured practice, and that skill level degrades without it - regardless of initial mastery.
Jørgensen et al. (2025)
Research on spacing effects in professional skill development, informing the model's treatment of spaced practice intervals.
Model Architecture
The model compares six training approaches across a 24-month timeline. Each approach is modeled as a combination of declarative memory (knowledge of concepts and frameworks) and procedural memory (the ability to perform skills under realistic conditions). These two memory systems decay at different rates and are built through different mechanisms.
Key Modeling Decisions
Dual-memory system
Declarative memory decays faster than procedural memory. Workshops primarily build declarative knowledge with some experiential encoding. Simulation practice progressively converts declarative knowledge into procedural memory through repeated application in varied scenarios.
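The dual-memory structure can be sketched as two exponentially decaying components. This is an illustrative sketch: the function name, the simple exponential form, and the example parameter mix are our assumptions, not the model's exact implementation.

```python
import math

def retention(t_months, decl_share, decl_rate, proc_rate):
    """Blend of a fast-decaying declarative component and a
    slower-decaying procedural component, each decaying exponentially."""
    declarative = decl_share * math.exp(-decl_rate * t_months)
    procedural = (1 - decl_share) * math.exp(-proc_rate * t_months)
    return declarative + procedural

# A workshop-like mix: 45% declarative knowledge (fast decay) and
# 55% experiential/procedural encoding (slow decay), evaluated at 6 months.
print(round(retention(6, 0.45, 0.30, 0.10), 3))
```

The key qualitative behavior - early loss dominated by the declarative component, with the procedural component carrying long-term retention - holds regardless of the exact parameter values.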
Procedural memory as a rising floor
As simulation practice accumulates, the model introduces a procedural memory floor - a baseline retention level that resists further decay. This floor rises with each practice session and decays slowly even when practice stops. This models the real-world observation that well-practiced skills are more durable than recently learned knowledge.
Open skill decay rates
Leadership skills are modeled as open skills - adaptive, context-dependent, requiring judgment. Open skills decay faster than closed skills when practice stops (Tatel & Ackerman, 2025). The accelerated decay visible in the simulation practice curves after practice ceases reflects this open-skill finding.
Parameter Sources & Justifications
Every parameter in the model is tagged as either RESEARCH (directly from published data) or ESTIMATED (interpolated from best available evidence). Estimated parameters are user-configurable via sliders so that assumptions can be tested transparently.
This document explains the justification for every default value.
Research-Anchored Parameters
Directly from published data.
| Parameter | Default | Range | Source & Justification |
|---|---|---|---|
| simActiveDecayRate | 0.07 | 0.04–0.12 | Tatel & Ackerman's 2025 meta-analysis in Psychological Bulletin (1,344 effect sizes, 457 reports) established that accuracy-based procedural skills decay at approximately 0.08 SD/month. The model defaults to 0.07 as a slightly conservative implementation, reflecting that active simulation practice provides some ongoing reinforcement between sessions that may slow decay marginally below the meta-analytic average for periods of complete nonuse. |
Estimated Parameters
E-Learning Curve
| Parameter | Default | Range | Justification |
|---|---|---|---|
| elearningDecayRate | 0.35 | 0.10–0.60 | No single meta-analysis provides a decay rate specifically for leadership e-learning. However, multiple convergent data points support this estimate. The PwC (2020) VR study found e-learners were far less confident in applying skills than classroom or VR learners, suggesting weak encoding. Sitzmann (2011) found simulation games produced only 11% higher declarative knowledge than comparison groups. A rate of 0.35/month produces ~50% retention at 2 months and ~15% at 6 months, aligning with commonly cited training transfer research showing most passive training content is lost within weeks. The range allows modeling from very sticky e-learning (0.10) to forgettable compliance videos (0.60). |
| elearningFloor | 10% | 5–25% | E-learning about meaningful, job-relevant topics leaves some residual trace - unlike nonsense syllables, leadership concepts connect to existing schemas and real-world experience. A 10% floor means that after 24 months, someone retains faint recognition-level awareness without any ability to perform the skill under pressure. This is deliberately set as the lowest floor in the hierarchy. The range accommodates high-quality interactive e-learning (up to 25%) or truly passive slide-based content (as low as 5%). |
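As a rough check on the figures above, exponential decay at 0.35/month clipped at the 10% floor approximately reproduces the cited retention values. The clipping rule is one plausible reading of the curve, not a confirmed implementation detail.

```python
import math

ELEARNING_DECAY = 0.35   # per month (default)
ELEARNING_FLOOR = 0.10   # residual recognition-level trace

def elearning_retention(t_months):
    # Exponential decay, never falling below the residual floor.
    return max(ELEARNING_FLOOR, math.exp(-ELEARNING_DECAY * t_months))

for t in (2, 6, 24):
    print(t, round(elearning_retention(t), 2))
```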
Single Workshop
| Parameter | Default | Range | Justification |
|---|---|---|---|
| workshopExperientialRatio | 55% | 30–80% | A well-designed 2-day leadership workshop typically spends roughly half its time on exercises, role-plays, group activities, and case discussions (experiential) and half on frameworks, models, and facilitated lecture (declarative). The 55% default reflects a good-quality workshop with a practice orientation. A heavily lecture-based program might be 30%; a pure experiential workshop might reach 80%. This ratio matters because the two components decay at different rates. |
| workshopDeclDecay | 0.30 | 0.15–0.45 | The declarative component decays faster than the experiential component but slower than pure e-learning, because it was learned in a richer context - with discussion, examples, and social reinforcement. A rate of 0.30/month produces ~75% loss of the declarative portion at 4 months, which aligns with Arthur et al.'s (1998) finding of substantial skill loss within a year and the common observation that participants forget specific frameworks within months. |
| workshopExpDecay | 0.10 | 0.04–0.18 | The experiential component - what participants learned by doing during exercises and role-plays - decays significantly more slowly because it creates episodic memories and some initial procedural encoding. A rate of 0.10/month produces a half-life of approximately 7 months for the experiential portion, consistent with the Tatel & Ackerman procedural half-life of ~6.5 months for accuracy-based skills. The range allows for workshops with very strong experiential design (0.04) versus superficial exercises (0.18). |
| workshopFloor | 15% | 8–30% | A single 2-day workshop leaves a durable trace: participants remember the experience, retain some behavioral patterns from exercises, and maintain a general orientation toward the skill area. The 15% default is above e-learning (10%) because the workshop created experiential and episodic memories that passive content did not. The range accommodates transformative workshops with strong emotional impact (up to 30%) versus forgettable half-day sessions (as low as 8%). |
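The single-workshop defaults from this table can be sketched the same way, as a blend of the two components clipped at the floor. Blend-then-clip is our assumption about how the pieces combine.

```python
import math

RATIO_EXP = 0.55      # workshopExperientialRatio
DECL_DECAY = 0.30     # workshopDeclDecay, per month
EXP_DECAY = 0.10      # workshopExpDecay, per month
FLOOR = 0.15          # workshopFloor

def workshop_retention(t_months):
    # Each component decays exponentially at its own rate; the blend
    # never drops below the workshop floor.
    blended = (RATIO_EXP * math.exp(-EXP_DECAY * t_months)
               + (1 - RATIO_EXP) * math.exp(-DECL_DECAY * t_months))
    return max(FLOOR, blended)

print(round(workshop_retention(4), 3))  # blended retention at 4 months
```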
Multi-Workshop Program
| Parameter | Default | Range | Justification |
|---|---|---|---|
| workshopBoostSubsequent | 80% | 60–95% | When a second or third workshop occurs, it builds on partially-retained learning. Ebbinghaus's “savings” effect shows that relearning is faster and more efficient than initial learning. The 80% default means each subsequent workshop closes 80% of the gap between current retention and 100%. The range accommodates workshops that build strongly on each other (95%, e.g., a tightly sequenced program with pre-work) versus loosely connected sessions (60%). |
| multiWorkshopDecay | 0.12 | 0.08–0.25 | The multi-workshop decay rate is slower than the single workshop's blended rate because the spacing effect provides additional consolidation. Lacerenza et al. (2017) found spaced leadership training produced δ = 0.88 versus δ = 0.71 for massed formats - a ~24% advantage. A rate of 0.12/month reflects this spacing benefit. Jørgensen et al. (2025) found distributed practice superior to massed in 15 of 19 direct comparisons, further supporting a slower decay rate for spaced programs. |
| multiWorkshopFloor | 25% | 15–40% | Three spaced workshops create a meaningfully higher floor than a single event because: (1) multiple retrieval and re-encoding events strengthen the memory trace, (2) real-world application between workshops converts some learning to practiced behavior, and (3) the spacing effect produces more durable encoding. The 25% default is 10pp above the single workshop floor (15%). The floor hierarchy must hold: multi-workshop should always be above single workshop. |
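The "closes 80% of the gap" rule for subsequent workshops translates directly into code (the function name is ours):

```python
BOOST_SUBSEQUENT = 0.80  # workshopBoostSubsequent

def apply_workshop(current_retention):
    # Each follow-up workshop closes 80% of the gap to full retention,
    # modelling Ebbinghaus's savings effect (relearning is cheaper
    # than initial learning).
    return current_retention + BOOST_SUBSEQUENT * (1 - current_retention)

# e.g. a participant at 40% retention going into workshop two:
print(round(apply_workshop(0.40), 2))
```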
Simulation Practice
| Parameter | Default | Range | Justification |
|---|---|---|---|
| simBoostBase | 5% | 2–10% | Each simulation session provides a flat boost to retention through retrieval practice and skill rehearsal. The 5% default is conservative - it represents a single 15-30 minute simulation practice session, not a full training day. The testing effect literature (Roediger & Karpicke, 2006) consistently shows that retrieval practice produces measurable retention benefits even in brief sessions. The range allows for short, low-intensity sims (2%) versus longer, highly challenging sessions with structured debriefing (10%). |
| simBoostAdaptive | 15% | 5–25% | When retention has decayed significantly, a simulation session produces a larger boost because there is more room for improvement and the retrieval difficulty is higher - which the desirable difficulties literature (Bjork & Bjork) shows produces deeper encoding. The 15% default means that if retention is at 60%, the adaptive boost adds 15% × (1 − 0.60) = 6% on top of the base 5%, for a total boost of 11%. This creates a natural self-correcting dynamic where sessions are most impactful when most needed. |
| proceduralConversionRate | 2.5% | 1–5% | This is the key mechanism differentiating simulation from passive learning: each practice session converts some declarative knowledge into procedural memory, raising the retention floor. The 2.5% default means that after 12 biweekly sessions (6 months), the procedural floor reaches approximately 30%. This is an estimated parameter with no direct meta-analytic anchor, but it is calibrated to produce outcomes consistent with the Northwestern mastery learning study (89% retention at 12 months) and the LDHF literature (65%+ at 6 months). |
| proceduralCeiling | 65% | 45–80% | Not all of an interpersonal skill can become fully procedural - unlike typing or bicycle riding, interpersonal skills always require some conscious decision-making and adaptive judgment (the “open skill” characteristic). The 65% ceiling reflects the idea that roughly two-thirds of the skill components can become automated while the remaining third always requires conscious, context-dependent judgment. The range allows for simpler skills (80%) versus highly complex adaptive skills (45%). |
| proceduralBaseDecay | 0.015 | 0.005–0.03 | Even during active practice periods, the procedural floor erodes very slowly - reflecting the fundamental durability of procedural memory (basal ganglia, cerebellum) versus declarative memory (hippocampus). A rate of 0.015/month means the procedural floor has a half-life of approximately 46 months (~4 years) during active practice, consistent with the literature showing procedural memory persists far longer than declarative memory. The range allows for very durable procedural encoding (0.005) versus faster-eroding open-skill procedures (0.03). |
| maintenancePotency | 75% | 50–100% | Maintenance sessions are modelled as somewhat less potent than intensive-phase sessions because: (1) they are typically shorter or less challenging, (2) the learner is already proficient so there is less stretch, and (3) the novelty effect has diminished. The 75% default means maintenance sessions provide three-quarters of the boost and procedural conversion of intensive sessions. The range allows for maintenance programs that are nearly as rigorous as initial training (100%) versus brief check-in sessions (50%). |
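The boost and conversion mechanics described in this table can be sketched as follows. Function names and the exact order of operations are illustrative assumptions; the worked example matches the 60%-retention case in the simBoostAdaptive justification above.

```python
SIM_BOOST_BASE = 0.05        # simBoostBase
SIM_BOOST_ADAPTIVE = 0.15    # simBoostAdaptive
CONVERSION_RATE = 0.025      # proceduralConversionRate
PROCEDURAL_CEILING = 0.65    # proceduralCeiling
MAINTENANCE_POTENCY = 0.75   # maintenancePotency

def session_boost(retention, maintenance=False):
    # Flat base boost plus an adaptive term that grows as retention decays.
    boost = SIM_BOOST_BASE + SIM_BOOST_ADAPTIVE * (1 - retention)
    if maintenance:
        boost *= MAINTENANCE_POTENCY
    return boost

def convert_to_procedural(proc_floor, maintenance=False):
    # Each session converts some declarative knowledge into procedural
    # memory, raising the floor toward (never past) the ceiling.
    rate = CONVERSION_RATE * (MAINTENANCE_POTENCY if maintenance else 1.0)
    return min(PROCEDURAL_CEILING, proc_floor + rate)

# Worked example from the table: retention at 60% gives an 11% total boost.
print(round(session_boost(0.60), 2))
```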
When Practice Stops (Open Skill Decay)
| Parameter | Default | Range | Justification |
|---|---|---|---|
| openSkillDecayRate | 0.10 | 0.06–0.15 | When simulation practice stops entirely, interpersonal skills (open skills) decay faster than the 0.07–0.08 rate observed during active practice, because: (1) there is no retrieval practice to counteract forgetting, (2) open skills require adaptive flexibility that erodes without varied challenge, and (3) Ericsson (2004, 2008) documented that even experts experience skill degradation when deliberate practice ceases. The 0.10 default is 25–40% faster than the active-practice rate. Driskell, Willis, and Copper's 1992 meta-analysis of overlearning found that cognitive task retention declined even with overlearning, while physical (closed) tasks did not. |
| openSkillProceduralDecay | 0.04 | 0.02–0.08 | When practice stops, the procedural floor erodes faster than during active practice (0.015) but much slower than declarative knowledge - reflecting the fundamental durability of procedural memory even without reinforcement. The 0.04 default gives a half-life of approximately 17 months for the procedural floor after practice ceases. This is slower than declarative decay but faster than closed-skill procedural decay, consistent with the open-skill distinction: interpersonal procedures require more maintenance than fixed motor sequences. |
| simStopsFloor | 32% | 20–45% | Even after all practice stops, someone who completed 6 months of intensive simulation practice retains a meaningful procedural base - they practiced skills dozens of times in varied scenarios, building automated response patterns that don't fully vanish. The 32% default is 7pp above the multi-workshop floor (25%), reflecting the additional procedural encoding from simulation practice. The floor hierarchy must hold: sim-stops should always be above multi-workshop. The range accommodates estimates from conservative (20%) to optimistic (45%). |
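A sketch of post-cessation decay under these defaults follows. Splitting retention into a slowly eroding procedural floor plus a faster-decaying remainder is our reading of the mechanism, not a confirmed implementation.

```python
import math

OPEN_SKILL_DECAY = 0.10   # openSkillDecayRate: above-floor decay after stopping
OPEN_PROC_DECAY = 0.04    # openSkillProceduralDecay: floor erosion after stopping

def retention_after_stop(r0, proc_floor0, t_months):
    # The procedural floor erodes slowly; whatever sits above the floor
    # decays at the faster open-skill rate. Both forms are exponential.
    floor = proc_floor0 * math.exp(-OPEN_PROC_DECAY * t_months)
    above = (r0 - proc_floor0) * math.exp(-OPEN_SKILL_DECAY * t_months)
    return floor + above

# Half-life of the procedural floor alone: ln(2) / 0.04, roughly 17 months,
# matching the justification above.
print(round(math.log(2) / OPEN_PROC_DECAY, 1))
```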
User-Choice Parameters
Design choices, not estimated from research.
| Parameter | Default | Options | Note |
|---|---|---|---|
| simFrequency | 2 weeks | 1, 2, 3, 4 wks | User's design choice for the intensive phase. Biweekly is the default based on LDHF literature and practical scheduling considerations. |
| maintenanceFrequency | 8 weeks | 4, 6, 8, 10, 12 wks | User's design choice for the maintenance phase. Bi-monthly (8 weeks) balances evidence that quarterly may be too infrequent (Matterson et al. 2018 found booster effects lasted ~2 months) with practical scheduling constraints. |
Summary of Floor Hierarchy
The model enforces a logical hierarchy of residual retention floors, where each additional investment in practice depth produces a higher long-term minimum.
| Training Approach | Default Floor | Rationale |
|---|---|---|
| E-learning only | 10% | Passive consumption, minimal procedural encoding |
| Single workshop | 15% | Some experiential/episodic memory from exercises |
| Multi-workshop program | 25% | Spacing effect, multiple retrieval events, real-world application between sessions |
| Workshops + sim (practice stops) | 32% | Accumulated procedural encoding from months of varied practice |
| Workshops + sim (maintenance) | No hard floor | Ongoing practice prevents decay to any fixed minimum |
| Workshops + sim (continuous) | No hard floor | Continuous practice maintains near-peak retention |
If any parameter adjustment causes a lower-investment approach to outperform a higher-investment approach at any time horizon, the parameters should be re-examined - that outcome contradicts both the research evidence and common sense.
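That consistency requirement can be checked mechanically. Parameter names mirror the table above; the check itself is our addition, not part of the published model.

```python
# Default floors, ordered from lowest to highest practice investment.
FLOORS = {
    "elearning": 0.10,
    "single_workshop": 0.15,
    "multi_workshop": 0.25,
    "sim_stops": 0.32,
}

def hierarchy_holds(floors):
    # Each step up in practice depth must keep a strictly higher floor.
    values = list(floors.values())
    return all(lo < hi for lo, hi in zip(values, values[1:]))

print(hierarchy_holds(FLOORS))  # True for the defaults
```

Running a check like this after any slider adjustment would flag parameter combinations that contradict the floor hierarchy before they mislead a comparison.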
Key Limitation
No single study has measured all of these parameters for leadership interpersonal skills training specifically. The model synthesizes findings from medical education (where simulation research is most advanced), military training, sports science, cognitive psychology, and the limited but growing body of corporate training research. The estimated parameters represent our best interpolation from this evidence base, not direct measurements. This is why they are user-configurable - so that different assumptions can be tested and the sensitivity of conclusions to specific parameter choices can be evaluated.
Limitations & Assumptions
This model is exploratory, not predictive. Important limitations include:
Individual variation
The model shows average trajectories. Individual retention depends on prior experience, motivation, learning context, practice quality, and organizational support - none of which are modeled.
Parameter estimation
Most parameters are estimated from adjacent research (health professions, motor skills, cognitive training) rather than directly measured in leadership development contexts. The model makes these estimates transparent and adjustable precisely because certainty is not available.
Linear practice effects
The model assumes consistent practice quality across sessions. In reality, practice quality varies, and poorly designed practice may produce minimal retention benefit.
Organizational context
The model does not account for organizational factors - culture, manager support, application opportunities - that significantly influence whether learned skills transfer to daily work.
We make these limitations explicit because intellectual honesty is more valuable than false precision. The model is designed to support better conversations about training retention, not to replace judgment.
Explore the Interactive Model
Test these findings yourself using the interactive leadership training retention model. Toggle curves on and off, adjust parameters, and compare retention trajectories across approaches.
Full Reference List
Cook DA, Levinson AJ, Garside S, et al. (2011). Instructional design variations in internet-based learning for health professions education: A systematic review and meta-analysis. Joint Commission Journal on Quality and Patient Safety.
Ebbinghaus H. (1885). Über das Gedächtnis: Untersuchungen zur experimentellen Psychologie. Leipzig: Duncker & Humblot.
Ericsson KA. (2004). Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Academic Medicine, 79(10), S70-S81.
Ericsson KA. (2008). Deliberate practice and acquisition of expert performance: A general overview. Academic Emergency Medicine, 15(11), 988-994.
Jørgensen M, et al. (2025). Spacing effects in professional skill development.
Lacerenza CN, Reyes DL, Marlow SL, Joseph DL, Salas E. (2017). Leadership training design, delivery, and implementation: A meta-analysis. Journal of Applied Psychology, 102(12), 1686-1718.
Tatel CE, Ackerman PL. (2025). Skill retention and decay: A meta-analysis. Psychological Bulletin.
If you are evaluating leadership development approaches and want to understand how sustained practice changes long-term outcomes, explore how AI leadership training platforms integrate simulation into existing programs. LeaderCoreAI provides the AI leadership training software that operationalizes the continuous and maintenance practice models - including multilingual leadership training for global organizations.