Abstract
Background:
Open book tests have been suggested to lower test anxiety and promote deeper learning strategies. In the Aarhus University medical program, one quarter of the curriculum assesses students' medical knowledge with 'open book, open web' (OBOW) multiple choice examinations. We found little existing evidence about the validity of OBOW multiple choice exams of medical knowledge. Based on modern validity theory, we find the most problematic validity assumptions in this setting to be related to 'extrapolation' and 'decision', i.e. the assumptions that: 1) the test tasks require the competencies developed in the course; 2) there are no skill-irrelevant sources of variability (e.g. information-seeking skills) which bias the interpretation of test scores as measures of level of subject expertise; and 3) students with no or very low levels of subject expertise will not pass this test and progress in the course. The aim of this study was to examine these aspects of validity (extrapolation and decision) for an OBOW multiple choice examination of medical knowledge.
Method:
In early February 2015, 71 non-expert students (medical and other) completed the same electronically administered OBOW multiple choice test of medical knowledge that 178 expert medical students had completed in June 2013. Following the test, the non-experts were surveyed on their subject expertise, their test strategy, and the usefulness of the OBOW format in completing the test. Differences between the two groups in test scores, aberrant response patterns, and pass/fail rates will be compared.
Results:
The results will be available and discussed at the AMEE 2015 conference.
| Original language | English |
|---|---|
| Publication date | 2015 |
| Status | Published - 2015 |
| Published externally | Yes |