Assessment in general practice: the predictive value
of written-knowledge tests and a multiple-station examination
for actual medical performance in daily practice
A Coggle Diagram about Introduction (Subjective selection of topics in continuing medical education (CME) is not very effective in changing the practice behaviour of doctors. Knowledge tests such as multiple-choice questions (MCQs) are efficient in handling large numbers of GPs and can easily cover a wide range of subjects. Better-controlled competence-based tests closely linked to professional reality, called multiple-station examinations or objective structured clinical examinations (OSCEs), could be alternatives.), Methods (Subjects, Instruments and procedure, and Analysis), Results (Predictive values, Drop-out, and Scores and differences in scores, within and between both groups) and Discussion (Both the general medical knowledge test and the knowledge test on skills proved to predict actual medical performance to the same extent as the multiple-station examination.
These findings contrast with the hypothesis that competence-based tests using direct observation, such as multiple station examinations and OSCEs, will have a stronger relationship with actual performance than knowledge tests.
These findings may be explained by the reported 'audience effect', i.e. the influence of observation on performance, in the multiple-station examination.
In a questionnaire administered after the study, a majority of GPs reported that they felt inhibited by being observed throughout the multiple-station examination, which they judged an artificial and unfamiliar setting, whereas a minority said they were influenced by the video assessment in their own practice.
As a consequence, a majority of GPs judged the videotaped consultations of daily surgery as 'natural'. In the videotaped consultations of daily surgeries they recognized their normal 'working style' better than in the standardized station consultations.
However, since ours is a methodological study, this non-representativeness is not a major problem.
Compared with these effects in the multiple-station examination, written tests and observation in daily practice may be less intrusive. The knowledge tests were sent to the participants, which could provide scope for cheating.
GPs who cheated would probably have high knowledge-test scores but lower performance scores, having less ready knowledge available.
Unlike the knowledge tests, GPs' (professional) characteristics did not contribute to the explanation of variation in performance scores.
This suggests that, for groups of GPs, knowledge tests are much better predictors of strengths and weaknesses in actual performance than GPs' characteristics such as age, gender, working single-handed or being a GP-trainer.
Since well-developed knowledge tests are available for screening purposes, the current system of postgraduate medical education, which allows GPs to choose topics according to subjectively perceived needs, is highly questionable and should become a serious matter of debate.
In addition, these tests predict actual performance relatively well. Nevertheless, the explained variance in actual performance by the knowledge tests and the station examination reported here is too low to bridge the gap between the assessment of competence and the assessment of actual performance. In the assessment of practicing GPs, a combination of different methods, including observation in daily practice, is probably the most valid and reliable approach. Finally, we conclude that medical knowledge tests should be developed for use in the assessment of practicing GPs.)