Mapping the evidence to evaluate the effectiveness of continued medical education: A pilot evidence gap map of systematic reviews

Mette Andersen Nexø, Tue Helms Andersen, Kirsten Lomborg, Ole Norgaard, Ulla Bjerre-Christensen

Abstract

Background
Numerous systematic reviews have evaluated educational programs, but these studies vary greatly in design and in the type and complexity of the educational activity. Rooted in systematic review methodology, evidence gap maps are a new tool that presents the available evidence in visual maps. An evidence gap map can highlight the effectiveness of educational programs and identify knowledge gaps, thereby helping health care providers make evidence-based decisions to maintain and improve quality of care.
Summary Of Work
We conducted a pilot search for any systematic review indexed with the MeSH term 'continuing education' in MEDLINE (Ovid), covering 1946-2021. Two researchers screened and compared the studies. Studies were included or excluded according to the following criterion: systematic reviews that evaluated continued medical education targeting health care professionals in chronic care settings. We mapped the evidence from the included systematic reviews in an evidence gap map. The x-axis of the map comprises five evaluation outcomes: Kirkpatrick's four levels ('reaction', 'learning', 'behavior', 'results') plus 'process evaluations'. The y-axis comprises study characteristics of the educational program (location, timeframe, type of intervention, learning design). Filters were used to further refine the categories.

Summary Of Results
We identified 389 systematic reviews, of which 31 were included for mapping: 14 were systematic reviews of quantitative studies (three included a meta-analysis), six of qualitative studies, and 11 included both quantitative and qualitative studies. Most of the available evidence focused on the impact of educational programs on competency-based outcomes (knowledge, skills, attitudes); few reviews addressed practice outcomes (e.g., quality of care, medical practice) or process outcomes. Vital study characteristics, such as the learning design illuminating how the educational programs may be effective, were mostly missing from the evaluation designs.

Discussion And Conclusion
This pilot evidence gap map illustrates that the available evidence for the effectiveness of continued medical education is mostly limited to competency-based outcomes, and it identifies vital knowledge gaps that, if addressed, would help us understand the complex nature of continued medical education research. A more comprehensive search will update and complete the evidence gap map.
Take Home Messages
This pilot evidence gap map shows that most of the available evidence for the effectiveness of continued medical education focuses on competency-based outcomes, without illuminating why and how it works or under which circumstances.
Original language: English
Publication date: 2022
Status: Published - 2022
Event: Association for Medical Education in Europe AMEE 2022 - Europe, Lyon, France
Duration: 27 Aug 2022 - 31 Aug 2022


